Abstract:
A wireless fuel quantity indication system (FQIS) has been developed using an RFID-enabled sensing platform. The system comprises a fully passive tag, a modified reader protocol, a capacitive fuel probe, and an auxiliary antenna for additional energy harvesting. Fluid testing shows sensitivity to changes in fluid height of less than 0.25 in. An RF-DC harvesting circuit was developed that operates with up to 5 dBm of input power delivered by a remote radio frequency (RF) source. Testing was conducted in a loaded reverberation chamber to emulate the fuel tank environment. The results demonstrate the feasibility of powering the sensor from the remote source with less than 1 W of maximum transmit power and under 100 ms dwell time (100 mW average power) into the tank. This indicates adequate coverage for large transport aircraft at safe operating levels with a sample rate of up to 1 sample/s.
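As a rough illustration of the capacitive sensing principle behind such a fuel probe (the coaxial geometry, dimensions, and permittivities below are illustrative assumptions, not values from the paper), a minimal sketch of mapping probe capacitance to fluid height:

import math

EPS0 = 8.854e-12             # vacuum permittivity, F/m
ER_FUEL, ER_AIR = 2.1, 1.0   # assumed relative permittivities (jet fuel ~2.1)

def probe_capacitance(h, L=1.0, a=0.01, b=0.015):
    """Capacitance of a coaxial probe of length L (m) immersed to height h (m)."""
    geom = 2 * math.pi * EPS0 / math.log(b / a)
    return geom * (ER_FUEL * h + ER_AIR * (L - h))

def height_from_capacitance(C, L=1.0, a=0.01, b=0.015):
    """Invert the linear capacitance-height relation to recover fluid height."""
    geom = 2 * math.pi * EPS0 / math.log(b / a)
    return (C / geom - ER_AIR * L) / (ER_FUEL - ER_AIR)

C = probe_capacitance(0.42)
print(height_from_capacitance(C))   # ~0.42 m

The capacitance varies linearly with immersion depth, so a single calibration of the empty and full values suffices to read back fluid height.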
Abstract:
A technology transfer office (TTO) based in a university system has multiple goals. While commercialization is a critical goal, maintenance and cleaning of the TTO's database also need detailed attention. Literature in the area is scarce, and only a few researchers make reference to TTO data cleaning. The challenge of data cleaning was encountered during an attempt to understand the commercial strategy of a university TTO in Bangalore. This paper describes a case study of data cleaning at an Indian university-based TTO, in which 382 patent records were analyzed. The case study first describes the background of the university system; it then highlights the method used to clean the data and the experiences encountered. The insights drawn indicate that patent data cleaning in a TTO is a specialized area that needs attention. Overlooking this activity can have legal implications and may result in an inability to commercialize a patent. Two levels of patent data cleaning are discussed in this case study, along with best practices for data cleaning in academic TTOs.
Abstract:
The viral phenomenon has garnered a great deal of attention in recent years. Although evidence of viral success exists, the underlying factors leading to the phenomenon, and its measurement, remain a grey area that needs to be explored. The viral phenomenon for a product or piece of information, and its distinction based on the growth curve trajectory, has not been rigorously explored in previous work. This paper aims to understand what makes products or information go viral. The trajectories that distinguish a viral from a non-viral phenomenon are demonstrated. A curve-fitting methodology for the viral phenomenon is adopted, which has not been examined in previous work. TED talks are analyzed to understand the diffusion pattern, essentially one or more spikes, within a time period. The insights drawn indicate characteristic viral growth trajectories and their implications for innovation.
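As an illustration of the kind of growth-curve fitting alluded to above, the sketch below fits a logistic (S-shaped) curve to a synthetic cumulative-views series; the functional form, data, and parameters are illustrative assumptions, not the paper's actual model:

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative view counts (illustrative) with a sharp early spike
t = np.arange(0, 60)
views = logistic(t, 1e6, 0.4, 10) + np.random.normal(0, 5e3, t.size)

popt, _ = curve_fit(logistic, t, views, p0=[views.max(), 0.1, t.mean()], maxfev=10000)
K, r, t0 = popt
print(f"capacity={K:.0f}, rate={r:.2f}, inflection day={t0:.1f}")
# A steep rate r concentrated near a single inflection suggests a spike-like,
# "viral" trajectory; a flatter fit suggests gradual, non-viral diffusion.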
Abstract:
Lightning strikes to instrumentation and communication towers can be a source of electromagnetic disturbance to the connected systems. Long cables running along these towers can experience significant induction on their sheath/core, which then couples to the connected equipment. A quantitative analysis of this situation requires a suitable theoretical treatment. Due to the dominance of the transverse magnetic mode during the fast-rising portion of the stroke current, which is the period of significant induction, a full-wave solution based on Maxwell's equations is necessary. Owing to the large geometric aspect ratio of tower lattice elements, and for the feasibility of a numerical solution, the thin-wire formulation of the electric field integral equation is generally adopted. However, the classical thin-wire formulation is not suited to handling non-cylindrical conductors such as tower lattice elements, nor the proximity of other conductors. The present work further investigates a recently proposed method for handling such situations and optimizes the numerical solution approach.
Abstract:
This paper presents a comprehensive and robust strategy for estimating battery model parameters from noise-corrupted data. The deficiencies of existing parameter estimation methods are studied, and the proposed strategy improves on earlier methods by working well for both low and high discharge currents, by providing accurate estimates even under high levels of noise, and by converging from a wide range of initial values. Testing on different data sets confirms the performance of the proposed parameter estimation strategy.
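As a hedged sketch of noise-robust battery parameter estimation, the example below fits an assumed first-order Thevenin (RC) model to noisy constant-current discharge data with bounded nonlinear least squares; the model structure, values, and bounds are illustrative, not the paper's exact formulation:

import numpy as np
from scipy.optimize import least_squares

def terminal_voltage(t, ocv, r0, r1, c1, i_dis):
    """First-order Thevenin model response to a constant discharge current."""
    return ocv - i_dis * r0 - i_dis * r1 * (1.0 - np.exp(-t / (r1 * c1)))

# Synthetic noisy measurement (illustrative values)
t = np.linspace(0, 600, 300)
i_dis = 2.0   # A
true = dict(ocv=3.7, r0=0.05, r1=0.03, c1=1500.0)
v_meas = terminal_voltage(t, **true, i_dis=i_dis) + np.random.normal(0, 5e-3, t.size)

def residuals(p):
    ocv, r0, r1, c1 = p
    return terminal_voltage(t, ocv, r0, r1, c1, i_dis) - v_meas

fit = least_squares(residuals, x0=[3.5, 0.1, 0.1, 1000.0],
                    bounds=([3.0, 1e-4, 1e-4, 10.0], [4.2, 1.0, 1.0, 1e5]))
print(dict(zip(["ocv", "r0", "r1", "c1"], fit.x)))

Bounding the parameters and averaging the residuals over the full discharge record is one simple way to keep the fit stable when the measurement noise is high or the initial guess is far off.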
Abstract:
In this paper, we present HyperCell, a reconfigurable datapath for Instruction Extensions (IEs). HyperCell comprises an array of compute units laid over a switch network. We present an IE synthesis methodology that enables post-silicon realization of IE datapaths on HyperCell. The synthesis methodology optimally exploits the hardware resources in HyperCell to enable software-pipelined execution of IEs. Exploiting the temporal reuse of data within HyperCell significantly reduces its input/output bandwidth requirements.
Abstract:
In this paper, we study two multi-dimensional goodness-of-fit tests for spectrum sensing in cognitive radios (CRs). The multi-dimensional scenario refers to multiple CR nodes, each with multiple antennas, recording multiple observations from multiple primary users for spectrum sensing. These tests, viz., the Interpoint Distance (ID) based test and the h, f distance based tests, are constructed using the properties of stochastic distances. The ID test is studied in detail for the single CR node case, and a possible extension to multiple nodes is discussed. The h, f test, on the other hand, is applicable in a multi-node setup. A robustness feature of the KL distance based test, which has connections with Middleton's class A model, is discussed. Through Monte Carlo simulations, the proposed tests are shown to outperform existing techniques such as the eigenvalue ratio based test, John's test, and the sphericity test in several scenarios.
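As a much simplified stand-in for the distance-based goodness-of-fit idea (not the ID or h, f tests themselves), the sketch below uses a one-sample Kolmogorov-Smirnov statistic on received energies against the noise-only distribution, with the detection threshold set by Monte Carlo under H0; all parameters are illustrative assumptions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
noise_var = 1.0
n = 200

def gof_statistic(y):
    """KS distance between |y|^2 and the noise-only exponential distribution."""
    energy = np.abs(y) ** 2
    return stats.kstest(energy, "expon", args=(0, noise_var)).statistic

# Threshold from the noise-only (H0) distribution via Monte Carlo
h0_stats = [gof_statistic((rng.normal(size=n) + 1j * rng.normal(size=n))
                          * np.sqrt(noise_var / 2)) for _ in range(2000)]
threshold = np.quantile(h0_stats, 0.95)   # ~5% false-alarm rate

# Test a received snapshot containing a weak primary-user tone
snr = 0.2
y = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(noise_var / 2) \
    + np.sqrt(snr) * np.exp(1j * 2 * np.pi * 0.1 * np.arange(n))
print("PU present" if gof_statistic(y) > threshold else "channel free")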
Abstract:
This paper proposes a denoising algorithm that performs non-local means bilateral filtering. As the existing literature suggests, non-local means (NLM) is one of the most widely used denoising techniques, but it has the critical drawback of smoothing edges. To address this, we perform fast and efficient NLM using Approximate Nearest Neighbour Fields and improve the edge content of the denoised result by formulating a joint bilateral filter. Using the proposed joint bilateral filter, smooth regions are denoised with the NLM approach while efficient edge reconstruction is obtained from the bilateral filter. Furthermore, to avoid tedious parameter selection, noise estimation is carried out before the joint bilateral filtering. The proposed approach is observed to perform well on images with high noise.
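A minimal sketch of the overall pipeline using off-the-shelf OpenCV routines as stand-ins (standard NLM rather than the ANN-field variant, and the opencv-contrib joint bilateral filter, without the noise-estimation step); the file name and filter parameters are illustrative assumptions:

import cv2

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file

# 1) Non-local means gives a smooth but edge-blurred estimate
nlm = cv2.fastNlMeansDenoising(img, h=15, templateWindowSize=7,
                               searchWindowSize=21)

# 2) Joint (cross) bilateral filter: range weights come from the NLM guide,
#    spatial smoothing is applied to the noisy image, so guide edges are kept
denoised = cv2.ximgproc.jointBilateralFilter(nlm, img, d=9,
                                             sigmaColor=25, sigmaSpace=7)

cv2.imwrite("denoised.png", denoised)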
Abstract:
We propose data acquisition from continuous-time signals belonging to the class of real-valued trigonometric polynomials using an event-triggered sampling paradigm. The sampling schemes proposed are level crossing (LC), close-to-extrema LC, and extrema sampling. An analysis of the robustness of these schemes to jitter and band-pass additive Gaussian noise is presented. In general, these sampling schemes result in non-uniformly spaced sample instants. We address signal reconstruction from the acquired data set by imposing a sparsity structure on the signal model, which circumvents the gap and density constraints. The recovery performance is contrasted among the various schemes and with a random sampling scheme. In the proposed approach, both sampling and reconstruction are non-linear operations and, in contrast to the random sampling methodologies proposed in compressive sensing, these techniques may be implemented in practice with low-power circuitry.
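A minimal sketch of level-crossing acquisition and reconstruction for a real trigonometric polynomial is given below; it uses plain least squares on the nonuniform samples rather than the sparsity-based recovery of the paper, and the degree, levels, and time grid are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
K = 3                                   # trig-polynomial degree (assumed)
a = rng.normal(size=K + 1); b = rng.normal(size=K)

def x(t):
    """Real trigonometric polynomial of degree K."""
    return a[0] + sum(a[k] * np.cos(2 * np.pi * k * t) +
                      b[k - 1] * np.sin(2 * np.pi * k * t) for k in range(1, K + 1))

# Level-crossing sampling: record (time, level) whenever x(t) crosses a level
tt = np.linspace(0, 1, 20000)
xx = x(tt)
levels = np.linspace(xx.min(), xx.max(), 8)[1:-1]
samples = []
for L in levels:
    idx = np.where(np.diff(np.sign(xx - L)) != 0)[0]
    for i in idx:
        # linear interpolation of the crossing instant
        tc = tt[i] + (L - xx[i]) * (tt[i + 1] - tt[i]) / (xx[i + 1] - xx[i])
        samples.append((tc, L))

# Reconstruction: solve for the 2K+1 coefficients from the nonuniform samples
# (plain least squares here; the paper uses sparsity-promoting recovery)
ts = np.array([s[0] for s in samples]); vs = np.array([s[1] for s in samples])
A = np.column_stack([np.ones_like(ts)] +
                    [np.cos(2 * np.pi * k * ts) for k in range(1, K + 1)] +
                    [np.sin(2 * np.pi * k * ts) for k in range(1, K + 1)])
coeffs, *_ = np.linalg.lstsq(A, vs, rcond=None)
print(np.round(coeffs, 3))   # recovers [a0, a1..aK, b1..bK]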
Abstract:
In this paper, we present the Bi-Modal Cache, a flexible stacked DRAM cache organization that simultaneously achieves several objectives: (i) an improved cache hit ratio, (ii) moving the tag storage overhead to DRAM, (iii) a lower cache hit latency than tags-in-SRAM, and (iv) a reduction in off-chip bandwidth wastage. The Bi-Modal Cache addresses the miss rate versus off-chip bandwidth dilemma by organizing the data in a bi-modal fashion: blocks with high spatial locality are organized as large blocks and those with little spatial locality as small blocks. By adaptively selecting the right storage granularity for individual blocks at run time, the proposed DRAM cache organization makes judicious use of the available DRAM cache capacity and reduces off-chip memory bandwidth consumption. The Bi-Modal Cache improves cache hit latency, despite moving the metadata to DRAM, by means of a small SRAM-based Way Locator. Further, by leveraging the tremendous internal bandwidth and capacity that stacked DRAM organizations provide, the Bi-Modal Cache enables efficient concurrent accesses to tags and data to reduce hit time. Through detailed simulations, we demonstrate that the Bi-Modal Cache achieves overall performance improvements (in terms of Average Normalized Turnaround Time (ANTT)) of 10.8%, 13.8% and 14.0% for 4-core, 8-core and 16-core workloads, respectively.
Abstract:
Noise-predictive maximum likelihood (NPML) is a well-known signal detection technique used in the partial response maximum likelihood (PRML) scheme for 1D magnetic recording channels. The noise samples colored by the partial response (PR) equalizer are predicted/whitened during signal detection using a Viterbi detector. In this paper, we propose an extension of the NPML technique for signal detection over 2D ISI channels. The impact of noise prediction during signal detection is studied in the PRML scheme for a particular choice of 2D ISI channel and PR targets.
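As a 1D illustration of the noise-prediction step (not the paper's 2D extension), the sketch below fits a linear predictor to colored post-equalizer noise via the Yule-Walker equations and forms the whitened prediction-error sequence of the kind used in NPML branch metrics; the coloring filter and predictor order are assumptions:

import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)

# Colored noise as seen at the output of an (assumed) PR equalizer:
# white noise shaped by a short FIR filter
w = rng.normal(size=100000)
colored = np.convolve(w, [1.0, 0.6, 0.3], mode="same")

# Fit an order-p linear predictor via the Yule-Walker (normal) equations
p = 4
r = np.array([np.dot(colored[:-k or None], colored[k:]) / colored.size
              for k in range(p + 1)])
pred = solve_toeplitz(r[:p], r[1:p + 1])          # predictor coefficients

# Whitened (prediction-error) sequence, as used inside NPML-style metrics
est = np.convolve(colored, np.concatenate(([0.0], pred)), mode="full")[:colored.size]
err = colored - est
print("variance before/after whitening:", colored.var(), err.var())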
Abstract:
The problem of continuous-curvature path planning for passages is considered. This problem arises when an autonomous vehicle traverses between prescribed boundaries such as corridors, tunnels, and channels. Passage boundaries with curvature and heading discontinuities pose challenges for generating smooth paths through them. Continuous-curvature half-S-shaped paths derived from the Four Parameter Logistic Curve family are proposed as a prospective path planning solution. Analytic conditions are derived for generating continuous-curvature paths confined within the passage boundaries. Zero end curvature highlights the scalability of the proposed solution and its compatibility with other path planners over larger path planning domains. Various scenarios with curvature and heading discontinuities are considered, demonstrating the viability of the proposed solution.
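A minimal sketch of a half-S path drawn from the Four Parameter Logistic family, with curvature evaluated numerically; the parameter values and passage geometry are illustrative assumptions, and the analytic confinement conditions of the paper are not reproduced here:

import numpy as np

def four_pl(x, a, b, c, d):
    """Four Parameter Logistic curve: start asymptote a, end asymptote d,
    transition location c, slope/steepness parameter b."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def curvature(x, y):
    """Numerical curvature kappa = |y''| / (1 + y'^2)^(3/2) along y(x)."""
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    return np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

# Illustrative half-S lateral transition from offset 0 to 2 along a passage
x = np.linspace(0.1, 10.0, 1000)
y = four_pl(x, a=0.0, b=4.0, c=5.0, d=2.0)
kappa = curvature(x, y)
print("max curvature:", kappa.max(), "end curvatures:", kappa[0], kappa[-1])

The curvature is continuous along the transition and decays toward zero at both ends, which is what allows such segments to be concatenated with other planners.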
Abstract:
Cooperative relaying combined with selection exploits spatial diversity to significantly improve the performance of interference-constrained secondary users in an underlay cognitive radio network. We present a novel and optimal relay selection (RS) rule that minimizes the symbol error probability (SEP) of an average interference-constrained underlay secondary system that uses amplify-and-forward relays. A key point that the rule highlights for the first time is that, for the average interference constraint, the signal-to-interference-plus-noise ratio (SINR) of the direct source-to-destination (SD) link affects the choice of the optimal relay. Furthermore, as the SINR increases, the odds that no relay transmits increase. We also propose a simpler, more practical, and near-optimal variant of the optimal rule that requires just one bit of feedback about the state of the SD link to the relays. Compared to the SD-unaware ad hoc RS rules proposed in the literature, the proposed rules markedly reduce the SEP, by up to two orders of magnitude.
Abstract:
The utility of canonical correlation analysis (CCA) for domain adaptation (DA) in the context of multi-view head pose estimation is examined in this work. We consider the three problems studied in [1], where different DA approaches are explored to transfer head pose-related knowledge from an extensively labeled source dataset to a sparsely labeled target set whose attributes are vastly different from the source. CCA is found to benefit DA for all three problems, and the use of a covariance profile-based diagonality score (DS) also improves classification performance with respect to a nearest neighbor (NN) classifier.
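A minimal sketch of the general CCA-for-DA idea (fit CCA on paired two-view source features, embed both domains in the learned correlated subspace, then classify with a nearest-neighbour rule); the random data, dimensions, and class count below are placeholders, not the datasets used in the paper:

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

# Assumed paired two-view features (e.g. two camera views) for the source set
n_src, n_tgt, d1, d2 = 500, 100, 40, 40
Xs_view1 = rng.normal(size=(n_src, d1)); Xs_view2 = rng.normal(size=(n_src, d2))
y_src = rng.integers(0, 8, n_src)          # 8 head-pose classes (assumed)
Xt_view1 = rng.normal(size=(n_tgt, d1))    # sparsely labeled target view

# Fit CCA on the paired source views, then embed both domains in the
# correlated subspace learned for view 1
cca = CCA(n_components=10)
cca.fit(Xs_view1, Xs_view2)
Zs = cca.transform(Xs_view1)
Zt = cca.transform(Xt_view1)

# Nearest-neighbour classification of target poses in the CCA subspace
clf = KNeighborsClassifier(n_neighbors=1).fit(Zs, y_src)
pred = clf.predict(Zt)
print(pred[:10])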
Abstract:
In contrast to previous studies of tag-based cooperation, we assume that individuals fail to recognize their own tag. Because of this incomplete information, the action taken against an opponent cannot be based on similarity, although it is still motivated by the tag the opponent displays. We present stability conditions for the case in which individuals play unconditional cooperation, unconditional defection, or conditional cooperation. We then consider the removal of one or two strategies. The results show that conditional cooperators are the agents most resilient to extinction and that the removal of unconditional cooperators may lead to the extinction of unconditional defectors.