916 results for Map-matching
Abstract:
Animals communicate in non-ideal and noisy conditions. The primary method they use to improve communication efficiency is sender-receiver matching: the receiver's sensory mechanism filters the impinging signal based on the expected signal. In the context of acoustic communication in crickets, such a match is made in the frequency domain. The males broadcast a mate attraction signal, the calling song, in a narrow frequency band centred on the carrier frequency (CF), and the females are most sensitive to sound close to this frequency. In tree crickets, however, the CF changes with temperature. The mechanisms used by female tree crickets to accommodate this change in CF were investigated at the behavioural and biomechanical level. At the behavioural level, female tree crickets were broadly tuned and responded equally to CFs produced within the naturally occurring range of temperatures (18 to 27 degrees C). To allow such a broad response, however, the transduction mechanisms that convert sound into mechanical and then neural signals must also have a broad response. The tympana of the female tree crickets exhibited a frequency response that was even broader than suggested by the behaviour. Their tympana vibrate with equal amplitude to frequencies spanning nearly an order of magnitude. Such a flat frequency response is unusual in biological systems and cannot be modelled as a simple mechanical system. This feature of the tree cricket auditory system not only has interesting implications for mate choice and species isolation but may also prove exciting for bio-mimetic applications such as the design of miniature low-frequency microphones.
Abstract:
We have investigated the local electronic properties and the spatially resolved magnetoresistance of a nanostructured film of a colossal magnetoresistive (CMR) material by local conductance mapping (LCMAP) using a variable temperature Scanning Tunneling Microscope (STM) operating in a magnetic field. The nanostructured thin films (thickness ≈ 500 nm) of the CMR material La0.67Sr0.33MnO3 (LSMO) on quartz substrates were prepared using a chemical solution deposition (CSD) process. The CSD-grown films were imaged by both STM and atomic force microscopy (AFM). Due to the presence of a large number of grain boundaries (GBs), these films show low field magnetoresistance (LFMR), which increases at lower temperatures. The measurement of spatially resolved electronic properties reveals the extent of variation of the density of states (DOS) at and close to the Fermi level (EF) across the grain boundaries and its role in the electrical resistance of the GBs. Measurement of the LCMAP as a function of magnetic field as well as temperature reveals that the LFMR occurs at the GBs. While it was known that LFMR in CMR films originates from the GBs, this is the first investigation that maps the local electronic properties at a GB in a magnetic field and traces the origin of LFMR at the GB.
Abstract:
Processing maps have been developed for the hot deformation of Mg-2Zn-1Mn alloy in the as-cast condition and after homogenization, with a view to evaluating the influence of homogenization. Hot compression data in the temperature range 300-500 degrees C and strain rate range 0.001-100 s^-1 were used for generating the processing maps. In the map for the as-cast alloy, the domain of dynamic recrystallization, occurring at 450 degrees C and 0.1 s^-1, has merged with another domain, occurring at 500 degrees C and 0.001 s^-1, that represents grain boundary cracking. The latter domain is eliminated by homogenization, and the dynamic recrystallization domain expands, with a higher peak efficiency occurring at 500 degrees C and 0.05 s^-1. The flow localization occurring at strain rates higher than 5 s^-1 is unaffected by homogenization.
Abstract:
Regular Expressions are generic representations for a string or a collection of strings. This paper focuses on the implementation of a regular expression matching architecture on reconfigurable fabric such as FPGAs. We present a Nondeterministic Finite Automata based implementation with an extended regular expression syntax set compared to previous approaches. We also describe a dynamically reconfigurable generic block that implements the supported regular expression syntax. This enables the regular expression hardware to be formed as a simple cascade of generic blocks, as well as the possibility of reconfiguring the generic blocks to change the regular expression being matched. Further, we have developed an HDL code generator to obtain the VHDL description of the hardware for any regular expression set. Our optimized regular expression engine achieves a throughput of 2.45 Gbps. Our dynamically reconfigurable regular expression engine achieves a throughput of 0.8 Gbps using 12 FPGA slices per generic block on a Xilinx Virtex2Pro FPGA.
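To make the cascade idea concrete, the following is a minimal software sketch of an NFA matcher built from a chain of per-token "generic blocks". The Block structure, the token set (literal or character class, optionally starred), and the example pattern are illustrative assumptions; they stand in for, rather than reproduce, the paper's VHDL generic block.

```python
# Minimal software analogue of a cascaded generic-block NFA matcher.
# Matching is an NFA simulation that tracks the set of active blocks.
from dataclasses import dataclass

@dataclass
class Block:
    chars: frozenset      # accepted characters (literal or character class)
    star: bool = False    # Kleene star: zero or more repetitions

def compile_blocks(tokens):
    """tokens: list of (chars, star) pairs forming a simple cascade."""
    return [Block(frozenset(c), s) for c, s in tokens]

def match(blocks, text):
    """Return True if the cascade matches the whole of `text`."""
    def closure(states):
        # A starred block may be skipped, so the next block also becomes active.
        out, stack = set(states), list(states)
        while stack:
            i = stack.pop()
            if i < len(blocks) and blocks[i].star and i + 1 not in out:
                out.add(i + 1)
                stack.append(i + 1)
        return out

    active = closure({0})
    for ch in text:
        nxt = set()
        for i in active:
            if i < len(blocks) and ch in blocks[i].chars:
                # A starred block loops on itself; otherwise advance the cascade.
                nxt.add(i if blocks[i].star else i + 1)
        active = closure(nxt)
        if not active:
            return False
    return len(blocks) in active

# e.g. the pattern ab*c as a cascade of three generic blocks:
blocks = compile_blocks([("a", False), ("b", True), ("c", False)])
print(match(blocks, "abbbc"), match(blocks, "ac"), match(blocks, "abd"))  # True True False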
Abstract:
Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 k samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for the neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering, and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node Digital Signal Processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
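As a rough illustration of this kind of segment-parallel processing, the sketch below distributes 200 ms segments of a multi-electrode recording across a pool of 8 worker processes. The noise-removal and event-detection steps are simple placeholders, and the fake data stands in for the MED64 recordings; none of this is the authors' DSP implementation.

```python
# Hedged sketch: distribute 200 ms signal segments across 8 worker processes.
import numpy as np
from multiprocessing import Pool

FS = 20_000                      # 20 k samples/s, as in the abstract
SEG = int(0.2 * FS)              # one 200 ms segment per task

def process_segment(segment):
    """Placeholder pipeline: crude baseline removal + threshold-based event count."""
    seg = segment - np.median(segment)                  # remove baseline offset
    threshold = 5 * np.median(np.abs(seg)) / 0.6745     # robust noise estimate
    return int(np.sum(np.abs(seg) > threshold))         # detected events

if __name__ == "__main__":
    data = np.random.randn(64, FS * 2)                  # fake 2 s recording, 64 electrodes
    segments = [ch[i:i + SEG] for ch in data for i in range(0, data.shape[1], SEG)]
    with Pool(processes=8) as pool:                     # 8 workers, mirroring 8 processors
        counts = pool.map(process_segment, segments)
    print(len(segments), "segments processed;", sum(counts), "events detected")
```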
Abstract:
This study presents the future seismic hazard map of Coimbatore city, India, by considering rupture phenomenon. Seismotectonic map for Coimbatore has been generated using past earthquakes and seismic sources within 300 km radius around the city. The region experienced a largest earthquake of moment magnitude 6.3 in 1900. Available earthquakes are divided into two categories: one includes events having moment magnitude of 5.0 and above, i.e., damaging earthquakes in the region and the other includes the remaining, i.e., minor earthquakes. Subsurface rupture character of the region has been established by considering the damaging earthquakes and total length of seismic source. Magnitudes of each source are estimated by assuming the subsurface rupture length in terms of percentage of total length of sources and matched with reported earthquake. Estimated magnitudes match well with the reported earthquakes for a RLD of 5.2% of the total length of source. Zone of influence circles is also marked in the seismotectonic map by considering subsurface rupture length of fault associated with these earthquakes. As earthquakes relive strain energy that builds up on faults, it is assumed that all the earthquakes close to damaging earthquake have released the entire strain energy and it would take some time for the rebuilding of strain energy to cause a similar earthquake in the same location/fault. Area free from influence circles has potential for future earthquake, if there is seismogenic source and minor earthquake in the last 20 years. Based on this rupture phenomenon, eight probable locations have been identified and these locations might have the potential for the future earthquakes. Characteristic earthquake moment magnitude (M-w) of 6.4 is estimated for the seismic study area considering seismic sources close to probable zones and 15% increased regional rupture character. The city is divided into several grid points at spacing of 0.01 degrees and the peak ground acceleration (PGA) due to each probable earthquake is calculated at every grid point in city by using the regional attenuation model. The maximum of all these eight PGAs is taken for each grid point and the final PGA map is arrived. This map is compared to the PGA map developed based on the conventional deterministic seismic hazard analysis (DSHA) approach. The probable future rupture earthquakes gave less PGA than that of DSHA approach. The occurrence of any earthquake may be expected in near future in these eight zones, as these eight places have been experiencing minor earthquakes and are located in well-defined seismogenic sources.
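A schematic sketch of the gridded-PGA step described above follows: PGA is evaluated at every grid point for each of eight scenario earthquakes and the point-wise maximum is kept. The attenuation relation, its coefficients, the grid coordinates, and the scenario sources are all illustrative placeholders, not the regional model or the locations used in the study.

```python
# Schematic envelope-of-scenarios PGA map on a 0.01 degree grid.
import numpy as np

def pga(magnitude, dist_km, c1=-3.5, c2=0.9, c3=1.0, c4=10.0):
    """Placeholder attenuation relation ln(PGA) = c1 + c2*M - c3*ln(R + c4); illustrative only."""
    return np.exp(c1 + c2 * magnitude - c3 * np.log(dist_km + c4))

# Grid over a small area at 0.01 degree spacing (coordinates are arbitrary).
lons, lats = np.meshgrid(np.arange(76.90, 77.05, 0.01), np.arange(10.95, 11.10, 0.01))

# Eight hypothetical scenario sources: (lon, lat, Mw); values are made up.
scenarios = [(76.92 + 0.02 * k, 10.96 + 0.015 * k, 6.4) for k in range(8)]

pga_max = np.zeros_like(lons)
for slon, slat, mw in scenarios:
    # Epicentral distance in km (~111 km per degree; flat-earth simplification).
    r = 111.0 * np.hypot(lons - slon, lats - slat)
    pga_max = np.maximum(pga_max, pga(mw, r))   # envelope over all scenarios

print("max PGA on grid (illustrative units):", pga_max.max().round(3))
```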
Abstract:
Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as mu_a^b = 0.01 mm^-1 and mu_s'^b = 1.0 mm^-1, respectively. We also assume mu_a = 0.02 mm^-1 within the inhomogeneity (for the single-inhomogeneity case) and mu_a = 0.02 and 0.03 mm^-1 (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown mu_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
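For concreteness, the sketch below shows the shape-based representation at the heart of this approach: a closed inclusion boundary written as a truncated circular-harmonic (Fourier) expansion of its radius about a centre point. The coefficient values and the centre are made up for illustration, and the PD-EnKF update itself is not shown.

```python
# Closed inclusion boundary as a truncated circular-harmonic expansion of its radius.
import numpy as np

def boundary(coeffs, centre=(0.0, 0.0), n_points=200):
    """coeffs = [a0, a1, b1, a2, b2, ...]; returns (x, y) points of the closed curve."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    r = np.full_like(theta, coeffs[0])
    for n in range(1, (len(coeffs) - 1) // 2 + 1):
        r += coeffs[2 * n - 1] * np.cos(n * theta) + coeffs[2 * n] * np.sin(n * theta)
    return centre[0] + r * np.cos(theta), centre[1] + r * np.sin(theta)

# e.g. a slightly elongated, off-centre inclusion (all values illustrative, in mm)
x, y = boundary([6.0, 1.5, 0.0, 0.8, 0.3], centre=(5.0, -3.0))
print("x range:", x.min().round(2), x.max().round(2), "y range:", y.min().round(2), y.max().round(2))
```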
Abstract:
Comments constitute an important part of Web 2.0. In this paper, we consider comments on news articles. To simplify the task of relating the comment content to the article content the comments are about, we propose the idea of showing comments alongside article segments, and we explore the automatic mapping of comments to article segments. This task is challenging because of the vocabulary mismatch between the articles and the comments. We present supervised and unsupervised techniques for aligning comments to the segments of the article the comments are about. More specifically, we provide a novel formulation of the supervised alignment problem using the framework of structured classification. Our experimental results show that the structured classification model performs better than unsupervised matching and a binary classification model.
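As a point of reference for the unsupervised matching baseline mentioned above, the toy sketch below aligns each comment to the article segment with the highest bag-of-words cosine similarity. The texts are invented, and the structured classification model itself is not reproduced here.

```python
# Toy unsupervised comment-to-segment alignment via bag-of-words cosine similarity.
import numpy as np
from collections import Counter

def vectorize(texts):
    """Build simple term-count vectors over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    mat = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        for w, c in Counter(t.lower().split()).items():
            mat[r, index[w]] = c
    return mat

segments = ["the government announced a new budget for public transport",
            "critics argue the plan ignores rural areas entirely"]
comments = ["rural areas always get ignored by these plans",
            "finally some budget for transport in the city"]

vecs = vectorize(segments + comments)
seg_vecs, com_vecs = vecs[:len(segments)], vecs[len(segments):]
unit = lambda m: m / np.maximum(np.linalg.norm(m, axis=1, keepdims=True), 1e-12)
sim = unit(com_vecs) @ unit(seg_vecs).T          # cosine similarity matrix
print(sim.argmax(axis=1))                        # best-matching segment per comment
```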
Abstract:
Network Intrusion Detection Systems (NIDS) intercept the traffic at an organization's network periphery to thwart intrusion attempts. A signature-based NIDS compares the intercepted packets against its database of known vulnerabilities and malware signatures to detect such cyber attacks. These signatures are represented using Regular Expressions (REs) and strings. Regular Expressions, because of their higher expressive power, are preferred over simple strings for writing these signatures. We present a Cascaded Automata Architecture to perform memory-efficient Regular Expression pattern matching using existing string matching solutions. The proposed architecture performs two-stage Regular Expression pattern matching. We replace the substring and character class components of each Regular Expression with new symbols, and we address the challenges involved in this approach. We augment the word-based automata, obtained from the re-written Regular Expressions, with counter-based states and length-bound transitions to perform Regular Expression pattern matching. We evaluated our architecture on Regular Expressions taken from Snort rulesets and were able to reduce the number of automata states by between 50% and 85%. Additionally, we could reduce the number of transitions by a factor of 3, leading to a further reduction in memory requirements.
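The following is a hedged Python sketch of the first, rewriting stage only: literal substrings and character classes are replaced by fresh symbols from a reduced alphabet, so that a string-matching engine and a much smaller word-based automaton can carry out the second stage. The example rules, the three-letter cutoff for literals, and the symbol format are illustrative assumptions; the counter-based states and length-bound transitions of the second stage are not shown.

```python
# Rewrite stage: replace character classes and literal substrings with fresh symbols.
import re
from itertools import count

def rewrite(res):
    """Replace character classes and long literal substrings with fresh symbols."""
    table, fresh = {}, count()
    def symbol(token):
        if token not in table:
            table[token] = f"#{next(fresh)}"
        return table[token]
    rewritten = []
    for r in res:
        r = re.sub(r"\[[^\]]+\]", lambda m: symbol(m.group(0)), r)    # character classes
        r = re.sub(r"[A-Za-z]{3,}", lambda m: symbol(m.group(0)), r)  # literal substrings
        rewritten.append(r)
    return rewritten, table

# Two made-up Snort-like signatures (illustrative only).
rules = [r"GET\s+[^\r\n]{256,}", r"User-Agent:\s*badbot[0-9]+"]
new_rules, symbols = rewrite(rules)
print(new_rules)   # rewritten rules over the reduced symbol alphabet
print(symbols)     # original substring / character class -> symbol table
```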
Abstract:
This paper investigates a new approach for point matching in multi-sensor satellite images. The feature points are matched using multi-objective optimization (an angle criterion and a distance condition) based on a Genetic Algorithm (GA). This optimization process is more efficient because it considers both the angle criterion and the distance condition, incorporating multi-objective switching in the fitness function. The optimization helps in matching three corresponding corner points detected in the reference and sensed images; using the resulting affine transformation, the sensed image is then aligned with the reference image. From the results obtained, the performance of the image registration is evaluated, and it is concluded that the proposed approach is efficient.
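To illustrate the final alignment step, the sketch below recovers the affine transform from three matched corner points. The correspondences are given directly here (with made-up coordinates) rather than found by the GA, which is not reproduced.

```python
# Estimate the affine transform mapping sensed points onto reference points.
import numpy as np

def affine_from_points(src, dst):
    """Solve for A (2x2) and t (2,) such that dst ~ A @ src + t, from 3 or more pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    design = np.hstack([src, np.ones((len(src), 1))])       # rows [x  y  1]
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)   # least-squares fit
    return params[:2].T, params[2]                          # A, t

# Three matched corner points (illustrative coordinates, sensed -> reference).
sensed    = [(10.0, 12.0), (200.0, 30.0), (60.0, 180.0)]
reference = [(15.2, 18.9), (207.4, 52.1), (58.8, 190.3)]

A, t = affine_from_points(sensed, reference)
print("A =\n", A, "\nt =", t)
print("check:", A @ np.array(sensed[0]) + t)   # should reproduce reference[0]
```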
Abstract:
The inverse problem in photoacoustic tomography (PAT) seeks to obtain the absorbed energy map from boundary pressure measurements, for which computationally intensive iterative algorithms exist. The computational challenge is heightened when the reconstruction is done using boundary data split into its frequency spectrum to improve source localization and the conditioning of the inverse problem. The key idea of this work is to modify the update equation so that the Jacobian and the data perturbation terms are summed over all wave numbers, k, and the resulting system is inverted only once to recover the absorbed energy map. This leads to a considerable reduction in the overall computation time. The results obtained using simulated data demonstrate the efficiency of the proposed scheme without compromising the accuracy of reconstruction.
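A hedged numerical sketch of this summed update is given below: the per-wave-number normal matrices and gradient terms are accumulated, and a single regularized system is solved per iteration. The Jacobians, residuals, problem sizes, and the Tikhonov regularization are random placeholders standing in for the frequency-split PAT quantities, not the paper's formulation.

```python
# Sum the Gauss-Newton normal equations over wave numbers k; invert once per iteration.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_meas, wave_numbers = 50, 30, [1, 2, 3, 4]   # illustrative sizes

J = {k: rng.standard_normal((n_meas, n_params)) for k in wave_numbers}   # Jacobian per k
dd = {k: rng.standard_normal(n_meas) for k in wave_numbers}              # data residual per k

lam = 1e-2                                               # Tikhonov regularization
H = sum(J[k].T @ J[k] for k in wave_numbers)             # summed normal matrix
g = sum(J[k].T @ dd[k] for k in wave_numbers)            # summed gradient term
update = np.linalg.solve(H + lam * np.eye(n_params), g)  # single inversion per iteration

print("update norm:", np.linalg.norm(update).round(4))
```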
Abstract:
Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has strong theoretical guarantees of convergence. In this paper, we propose a modification of CoSaMP that adaptively chooses the dimension of the search space in each iteration, using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm in clean as well as noisy measurement cases. From empirical observations, we also propose an optimum value of the threshold to use in applications.
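The sketch below gives one way such a threshold-based, adaptive selection step can be realized inside CoSaMP: columns whose correlation with the residual exceeds a fraction tau of the current maximum are added, instead of always taking a fixed number of largest-correlation columns. The specific rule and the value tau = 0.5 are illustrative assumptions, not the optimum threshold proposed in the paper.

```python
# CoSaMP with a threshold-based (adaptive) candidate selection step.
import numpy as np

def cosamp_adaptive(Phi, y, s, tau=0.5, n_iter=30):
    m, n = Phi.shape
    x, resid = np.zeros(n), y.copy()
    for _ in range(n_iter):
        proxy = np.abs(Phi.T @ resid)
        # Adaptive search space: columns whose correlation exceeds tau * max,
        # rather than a fixed set of 2s columns as in standard CoSaMP.
        candidates = np.flatnonzero(proxy >= tau * proxy.max())
        support = np.union1d(candidates, np.flatnonzero(x))
        # Least-squares estimate over the merged support, then prune to s terms.
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        x = np.zeros(n)
        keep = np.argsort(np.abs(b))[-s:]
        x[keep] = b[keep]
        resid = y - Phi @ x
        if np.linalg.norm(resid) <= 1e-10 * np.linalg.norm(y):
            break
    return x

# Toy recovery of an 8-sparse vector from 64 random measurements.
rng = np.random.default_rng(1)
m, n, s = 64, 256, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp_adaptive(Phi, Phi @ x_true, s)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```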
Abstract:
Orthogonal Matching Pursuit (OMP) is a popular greedy pursuit algorithm widely used for sparse signal recovery from an undersampled measurement system. However, one of the main shortcomings of OMP is its irreversible procedure for selecting columns of the measurement matrix; i.e., OMP does not allow removal of columns wrongly chosen in previous iterations. In this paper, we propose a modification of OMP, using the well-known Subspace Pursuit (SP) algorithm, to refine the subspace estimated by OMP at any iteration and hence boost the sparse signal recovery performance of OMP. Using simulations, we show that the proposed scheme improves the performance of OMP in clean and noisy measurement cases.
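As a rough sketch of this idea, the code below interleaves the greedy OMP selection with an SP-style refinement that can discard previously chosen columns. The exact refinement schedule (one prune per OMP iteration) is an illustrative choice, not the authors' algorithm.

```python
# OMP whose support is refined each iteration by a Subspace-Pursuit-style prune step.
import numpy as np

def omp_with_sp_refinement(Phi, y, s):
    m, n = Phi.shape
    support, resid = [], y.copy()
    for _ in range(s):
        # OMP step: add the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        k = len(support)
        # SP-style refinement: merge with the k best candidate columns, solve a
        # least-squares problem on the merged set, and keep only the k strongest
        # coefficients -- this is what lets a wrongly chosen column be dropped.
        candidates = np.argsort(np.abs(Phi.T @ resid))[-k:]
        merged = np.union1d(support, candidates)
        b = np.linalg.lstsq(Phi[:, merged], y, rcond=None)[0]
        support = list(merged[np.argsort(np.abs(b))[-k:]])
        coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        resid = y - Phi[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Toy recovery of an 8-sparse vector from 64 random measurements.
rng = np.random.default_rng(2)
m, n, s = 64, 256, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = omp_with_sp_refinement(Phi, Phi @ x_true, s)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```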