983 results for ARiBo tag
Abstract:
In general, the objective of accurately encoding the input data and the objective of extracting good features to facilitate classification are not consistent with each other. As a result, good encoding methods may not be effective mechanisms for classification. In this paper, a previously proposed unsupervised feature-extraction mechanism for pattern classification is extended to obtain an invertible map. The method of bimodal projection-based features was inspired by the general class of methods called projection pursuit. The principle of projection pursuit concentrates on projections that discriminate between clusters rather than on faithful representations. The basic feature map obtained by the method of bimodal projections is extended to overcome this. The extended feature map is an embedding of the input space in the feature space. As a result, the inverse map exists and hence the representation of the input space in the feature space is exact. This map can be naturally expressed as a feedforward neural network.
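As a rough illustration of how a projection-based feature map can be extended into an invertible embedding, the sketch below augments nonlinear projection features with the residual of the input orthogonal to the projection directions, so the input is exactly recoverable. The projection matrix, tanh nonlinearity, and residual augmentation are our illustrative choices, not the paper's exact bimodal construction.

```python
import numpy as np

# Minimal sketch (not the paper's exact construction): a projection-based
# feature map is extended to an embedding by appending the component of x
# orthogonal to the projection directions, so x is exactly recoverable.

rng = np.random.default_rng(0)
d, k = 8, 3                      # input and projection dimensions
X = rng.normal(size=(100, d))    # toy input data

W, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal projection directions

def feature_map(x):
    z = W.T @ x                          # discriminative projections
    r = x - W @ z                        # residual: part of x the projections miss
    return np.tanh(z), r                 # nonlinear features + residual = embedding

def inverse_map(features, residual):
    z = np.arctanh(features)             # invert the (injective) nonlinearity
    return W @ z + residual              # exact reconstruction of x

x = X[0]
f, r = feature_map(x)
assert np.allclose(inverse_map(f, r), x)  # the embedding is exactly invertible
```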
Abstract:
In this paper, we address the reconstruction problem from laterally truncated helical cone-beam projections. The reconstruction problem under lateral truncation, though similar to the interior Radon problem, differs from it, as well as from local (lambda) tomography and pseudo-local tomography, in that we aim to reconstruct the entire object being scanned from region-of-interest (ROI) scan data. The method proposed in this paper is a projection-data completion approach followed by the use of any standard accurate FBP-type reconstruction algorithm. In particular, we explore a windowed linear prediction (WLP) approach for data completion and compare the quality of reconstruction with the linear prediction (LP) technique proposed earlier.
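The following is a minimal sketch of linear-prediction-based completion of one laterally truncated projection row, assuming an AR model fit by least squares and a cosine taper as the window; it illustrates the idea rather than the authors' exact WLP formulation.

```python
import numpy as np

# Illustrative completion of one truncated detector row: an AR model is fit to
# the measured samples and used to extrapolate the missing lateral extension;
# a cosine taper forces a smooth decay to zero.

def lp_coefficients(x, order):
    """Least-squares fit of AR coefficients a: x[n] ~ sum_k a[k] * x[n-1-k]."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A, b = np.asarray(rows), x[order:]
    return np.linalg.lstsq(A, b, rcond=None)[0]

def complete_row(measured, n_extend, order=10):
    a = lp_coefficients(measured, order)
    ext = list(measured[-order:])
    for _ in range(n_extend):                       # recursive extrapolation
        ext.append(np.dot(a, ext[-1:-order - 1:-1]))
    ext = np.asarray(ext[order:])
    taper = 0.5 * (1 + np.cos(np.linspace(0, np.pi, n_extend)))  # window
    return np.concatenate([measured, ext * taper])  # smoothly decaying tail

row = np.exp(-np.linspace(-2, 2, 128) ** 2)          # toy measured projection row
full = complete_row(row, n_extend=32)                # 160-sample completed row
```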
Abstract:
In this paper, the development of a novel multipoint pressure sensor system suitable for measuring the human foot pressure distribution is presented. It essentially consists of a matrix of cantilever sensing elements supported by beams. Foil-type strain gauges are employed to convert the foot pressure into a proportional electrical response. Details of the signal-conditioning circuitry are given, and results on the performance of the system are included.
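A hypothetical conversion from a strain-gauge bridge reading to the force on one cantilever element is sketched below; the gauge factor, excitation voltage, and cantilever sensitivity are assumed values, not taken from the paper.

```python
# Hypothetical strain-gauge readout for one cantilever element; all constants
# are illustrative assumptions, not the paper's calibration.

GAUGE_FACTOR = 2.0        # typical for foil strain gauges
V_EXCITATION = 5.0        # bridge excitation voltage (V)
SENSITIVITY = 0.5         # assumed cantilever response (microstrain per N)

def bridge_voltage_to_force(v_out):
    # Quarter-bridge approximation: v_out / V_exc = GF * strain / 4
    strain = 4.0 * v_out / (V_EXCITATION * GAUGE_FACTOR)
    microstrain = strain * 1e6
    return microstrain / SENSITIVITY      # force on the element, in newtons

print(bridge_voltage_to_force(0.001))     # ~400 N for a 1 mV bridge output
```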
Abstract:
A built-in self-test (BIST) subsystem embedded in a 65-nm mobile broadcast video receiver is described. The subsystem is designed to perform analog and RF measurements at multiple internal nodes of the receiver. It uses a distributed network of CMOS sensors and a low-bandwidth, 12-bit A/D converter to perform the measurements, with a serial bus interface enabling digital transfer of the measured data to automatic test equipment (ATE). A perturbation/correlation-based BIST method is described that makes pass/fail determinations on parts, resulting in significant test-time and cost reductions.
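The sketch below illustrates the correlation half of a perturbation/correlation test: the digitized sensor response to a known perturbation is correlated with a golden signature and thresholded to a pass/fail bit. The signature, noise level, and threshold are invented for illustration.

```python
import numpy as np

# Conceptual pass/fail decision from a perturbation/correlation measurement;
# the reference signature and threshold are placeholders, not the paper's.

def pass_fail(measured, golden, threshold=0.95):
    measured = (measured - measured.mean()) / measured.std()
    golden = (golden - golden.mean()) / golden.std()
    rho = np.dot(measured, golden) / len(golden)   # normalized correlation
    return rho >= threshold                        # single-bit test outcome

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 256)
golden = np.sin(2 * np.pi * 5 * t)                 # expected response signature
noisy = golden + 0.1 * rng.normal(size=t.size)     # digitized sensor response
print(pass_fail(noisy, golden))                    # True: response tracks signature
```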
Abstract:
Active-clamp dc-dc converters are pulsewidth-modulated converters with two switches that feature zero-voltage switching at frequencies beyond 100 kHz. Generalized equivalent circuits valid for steady-state and dynamic performance have been proposed for the family of active-clamp converters. In this paper, the active-clamp converter is analyzed for its dynamic behavior under current control, and the steady-state stability analysis is presented. On account of the lossless damping inherent in active-clamp converters, the stability region of current-controlled active-clamp converters extends to duty ratios slightly greater than 0.5, unlike in conventional hard-switched converters. The conventional graphical approach fails to assess the stability of current-controlled active-clamp converters due to the coupling between the filter inductor current and the resonant inductor current. An analysis that takes into account the presence of the resonant elements is presented to establish the condition for stability. This method correctly predicts the stability of current-controlled active-clamp converters. A simple expression for the maximum duty cycle for subharmonic-free operation is obtained. The results are verified experimentally.
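For context, the equations below state the standard hard-switched subharmonic stability criterion that the paper's analysis extends; the notation is ours, not the paper's.

```latex
% Standard hard-switched result (background, not the paper's extended analysis):
% in peak current-mode control a perturbation in the inductor current propagates as
\Delta i_{n+1} = -\frac{m_2}{m_1}\,\Delta i_n ,
% where m_1 and m_2 are the inductor-current on- and off-slopes. For a buck-type
% stage in steady state m_2/m_1 = D/(1-D), so subharmonic-free operation requires
\left|\frac{m_2}{m_1}\right| = \frac{D}{1-D} < 1 \;\Longleftrightarrow\; D < 0.5 .
% The paper shows that the lossless damping contributed by the resonant elements
% of active-clamp converters relaxes this bound to a maximum duty cycle slightly
% above 0.5.
```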
Abstract:
This paper addresses a search problem with multiple limited-capability search agents in a partially connected, dynamic networked environment under different information structures. A self-assessment-based decision-making scheme for multiple agents is proposed that uses a modified negotiation scheme with low communication overheads. The scheme has the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. Two models of the self-assessment scheme are developed to study the effect of increased information exchange during decision-making. Analytical results on the maximum number of self-assessment cycles, the effect of increasing communication range, the completeness of the algorithm, and lower and upper bounds on the search time are also obtained. The performance of the various self-assessment schemes, in terms of total uncertainty reduction in the search region using different information structures, is studied. It is shown that the communication requirement of the self-assessment scheme is almost half that of the negotiation schemes, and its performance is close to the optimal solution. Comparisons with different sequential search schemes are also carried out.

Note to Practitioners: In futuristic military and civilian applications such as search and rescue, surveillance, patrol, and oil-spill monitoring, a swarm of UAVs can be deployed to carry out information-collection missions. These UAVs have limited sensor and communication ranges. To enhance the performance of the mission and to complete it quickly, cooperation between UAVs is important. Designing cooperative search strategies for multiple UAVs with these constraints is a difficult task. A further requirement in hostile territory is to minimize communication while making decisions, which adds complexity to the decision-making algorithms. In this paper, a self-assessment-based decision-making scheme for multiple UAVs performing a search mission is proposed. The agents make their decisions based on the information acquired through their sensors and by cooperation with neighbors. The complexity of the decision-making scheme is very low: it arrives at decisions quickly with low communication overheads, while accommodating the various information structures used to increase the fidelity of the uncertainty maps. Theoretical results proving the completeness of the algorithm and the lower and upper bounds on the search time are also provided.
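A runnable toy sketch of one self-assessment loop is given below; the benefit values, cell model, and conflict-resolution rule are placeholders rather than the paper's exact scheme.

```python
from dataclasses import dataclass

# Toy self-assessment: each agent picks its locally best search cell, exchanges
# only that claim with its neighbors, and yields the cell if a neighbor claims
# it with higher benefit. Utilities and tie-breaking are illustrative.

@dataclass
class Agent:
    name: str
    benefits: dict            # cell -> expected uncertainty reduction

    def best_cell(self):
        return max(self.benefits, key=self.benefits.get)

def self_assessment(agents, max_cycles=10):
    for _ in range(max_cycles):               # bounded number of cycles
        claims = {a.name: a.best_cell() for a in agents}
        changed = False
        for a in agents:
            cell = claims[a.name]
            rivals = [b for b in agents if b is not a and claims[b.name] == cell]
            if any(b.benefits[cell] > a.benefits[cell] for b in rivals):
                a.benefits.pop(cell)           # yield contested cell, re-assess
                changed = True
        if not changed:                        # no conflicts: decisions fixed
            return claims
    return {a.name: a.best_cell() for a in agents}

agents = [Agent("A", {1: 0.9, 2: 0.4}), Agent("B", {1: 0.7, 2: 0.6})]
print(self_assessment(agents))                 # -> {'A': 1, 'B': 2}
```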
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm.

Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
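The sketch below shows an SPSA iteration adapted to a discrete parameter set of size 100, with a simple nearest-point projection standing in for the paper's randomized projection technique; the noisy quadratic cost is a stand-in for a simulation of the long-run average cost.

```python
import numpy as np

# Minimal SPSA sketch on a discrete set (illustrative, not the paper's exact
# algorithm): the update runs in continuous space and each evaluation point is
# projected onto the grid before simulation.

rng = np.random.default_rng(1)
grid = np.arange(0, 101)                       # discrete parameter set {0,...,100}

def cost(theta):                               # hypothetical noisy simulation
    return (theta - 42.0) ** 2 + rng.normal(scale=2.0)

def project(theta):                            # nearest grid point
    return grid[np.argmin(np.abs(grid - theta))]

theta = 80.0
for n in range(1, 2001):
    a_n, c_n = 1.0 / n, 1.0 / n ** 0.25        # standard SPSA gain sequences
    delta = rng.choice([-1.0, 1.0])            # random perturbation direction
    g = (cost(project(theta + c_n * delta)) -
         cost(project(theta - c_n * delta))) / (2 * c_n * delta)
    theta -= a_n * g                           # gradient-descent-style update
print(project(theta))                          # should settle near 42
```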
Abstract:
Experiments were conducted to measure the ac breakdown strength of epoxy alumina nanocomposites with filler loadings of 0.1, 1, and 5 wt%. The experiments were performed per the ASTM D 149 standard on samples of thickness 0.5 mm, 1 mm, and 3 mm in order to study the effect of thickness on the ac breakdown strength of epoxy nanocomposites. For epoxy alumina nanocomposites, the ac breakdown strength was marginally lower at 0.1 wt% and 1 wt% filler loadings and then increased at 5 wt% filler loading, as compared to unfilled epoxy. The Weibull shape parameter (beta) increased with the addition of nanoparticles to epoxy as well as with increasing sample thickness for all the filler loadings considered. DSC analysis was performed to study the material properties at the filler-resin interface in order to understand the effect of the filler loading, and thereby the influence of the interface, on the ac breakdown strength of epoxy nanocomposites. It was also observed that the decrease in ac breakdown strength with an increase in sample thickness follows an inverse power-law dependence. In addition, the ac breakdown strength of epoxy silica nanocomposites was also studied in order to understand the influence of the filler type on the breakdown strength.
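The reported thickness dependence and the Weibull description can be written as follows; the exponent n and the Weibull parameters are quantities fit to data, not values quoted from the paper.

```latex
% Illustrative form of the reported thickness dependence: the ac breakdown
% strength E_b of a sample of thickness d follows an inverse power law,
E_b(d) \propto d^{-n}, \qquad n > 0 \ \text{an empirical exponent},
% and the breakdown fields follow a two-parameter Weibull distribution,
P(E) = 1 - \exp\!\left[-\left(\frac{E}{\alpha}\right)^{\beta}\right],
% where \alpha is the scale parameter (63.2% breakdown probability) and \beta
% is the shape parameter reported to increase with nanofiller loading and
% sample thickness.
```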
Abstract:
For an n_t transmit, n_r receive antenna system (n_t x n_r system), a full-rate space-time block code (STBC) transmits at least n_min = min(n_t, n_r) complex symbols per channel use. The well-known Golden code is an example of a full-rate, full-diversity STBC for two transmit antennas. Its ML-decoding complexity is of the order of M^2.5 for square M-QAM. The Silver code for two transmit antennas has all the desirable properties of the Golden code except its coding gain, but offers a lower ML-decoding complexity of the order of M^2. Importantly, the slight loss in coding gain is negligible compared to the advantage it offers in lowering the ML-decoding complexity. For higher numbers of transmit antennas, the best known codes are the Perfect codes, which are full-rate, full-diversity, information-lossless codes (for n_r >= n_t) but have a high ML-decoding complexity of the order of M^(n_t n_min) (for n_r < n_t, the punctured Perfect codes are considered). In this paper, a scheme to obtain full-rate STBCs for 2^a transmit antennas and any n_r, with a reduced ML-decoding complexity of the order of M^(n_t(n_min - 3/4) - 0.5), is presented. The codes constructed are also information lossless for n_r >= n_t, like the Perfect codes, and allow higher mutual information than the comparable punctured Perfect codes for n_r < n_t. These codes are referred to as the generalized Silver codes, since they enjoy the same desirable properties as the comparable Perfect codes (except possibly the coding gain) with lower ML-decoding complexity, analogous to the Silver code and the Golden code for two transmit antennas. Simulation results of the symbol error rates for four and eight transmit antennas show that the generalized Silver codes match the punctured Perfect codes in error performance while offering lower ML-decoding complexity.
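For readers unfamiliar with the complexity measure, the standard MIMO model and ML metric assumed by such abstracts are stated below; the notation is ours.

```latex
% An STBC codeword X of size n_t x T is received over T channel uses as
Y = H X + N, \qquad H \in \mathbb{C}^{n_r \times n_t},
% and ML decoding selects the codeword minimizing the Frobenius-norm metric
\hat{X} = \arg\min_{X} \lVert Y - H X \rVert_F^2 .
% ML-decoding complexity counts the metric evaluations required: an exhaustive
% search over K independent M-QAM symbols costs O(M^K), and the codes discussed
% lower the exponent by conditionally decoding some symbols given the others.
```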
Abstract:
Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
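A toy real-valued sketch of the product-matrix repair idea is given below; finite-field arithmetic and the paper's precise message-matrix bookkeeping are omitted, so this only illustrates why a symmetric message matrix lets each helper send a single symbol.

```python
import numpy as np

# Toy product-matrix sketch for [n, k, d]: the data is arranged in a d x d
# symmetric message matrix M, node i stores the row psi_i^T M, and a failed
# node is repaired from any d helpers, each sending one inner product.

n, k, d = 6, 3, 4
rng = np.random.default_rng(2)

A = rng.normal(size=(d, d))
M = A + A.T                                   # symmetric message matrix (toy)
Psi = np.vander(1.0 + np.arange(n), d)        # encoding matrix: any d rows invertible

storage = Psi @ M                             # node i stores row i (d symbols)

f = 0                                         # failed node
helpers = [1, 2, 3, 4]                        # any d surviving nodes
repair = [storage[j] @ Psi[f] for j in helpers]   # each sends psi_j^T M psi_f

# The replacement node solves Psi_helpers @ (M psi_f) = repair for M psi_f;
# by symmetry of M, (M psi_f)^T = psi_f^T M is exactly the lost row.
M_psi_f = np.linalg.solve(Psi[helpers], np.asarray(repair))
assert np.allclose(M_psi_f, storage[f])
```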
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. We obtain simple necessary and sufficient conditions on prices such that efficient and fair link sharing is possible. We illustrate the ideas using a simple example.
Abstract:
Benzocyclobutene (BCB) has been proposed as a board-level dielectric for advanced system-on-package (SOP) modules, primarily due to its attractive low-loss (for RF applications) and thin-film (for high-density wiring) properties. The realization of embedded resistors on low-loss benzocyclobutene (dielectric loss ~0.0008 at > 40 GHz) is explored in this study. Two approaches, viz., foil transfer and electroless plating, have been attempted for the deposition of thin-film resistors on BCB. Ni-P alloys were plated using conventional electroless plating, and NiCr and NiCrAlSi foils were used for the foil transfer process. This paper reports NiP and NiWP electroless-plated embedded resistors on a BCB dielectric for the first time in the literature.
Abstract:
Access control is an important component in the security of communication systems. While cryptography has rightfully been a significant component in the design of large-scale communication systems, its relation to access control, especially its complementarity, has not often been brought out in full. With the wide availability of SELinux, a comprehensive model of access control has become all the more important. In many large-scale systems, access control and trust management have become important components of the design. In survivable systems, models of group communication systems may have to be integrated with access control models. In this paper, we discuss the problem of integrating the various formalisms often encountered in large-scale communication systems, especially in connection with dynamic access control policies and trust management.
Abstract:
Large instruction windows and issue queues are key to exploiting greater instruction-level parallelism in out-of-order superscalar processors. However, the cycle time and energy consumption of conventional large monolithic issue queues are high. Previous efforts to reduce cycle time segment the issue queue and pipeline wakeup. Unfortunately, this results in significant IPC loss. Other proposals, which address energy efficiency by avoiding only the unnecessary tag comparisons, do not reduce broadcasts; these schemes also increase the issue latency. To address both these issues comprehensively, we propose the Scalable Low-power Issue Queue (SLIQ). SLIQ augments a pipelined issue queue with direct indexing to mitigate the problem of delayed wakeups while reducing the cycle time. The SLIQ design also naturally leads to significant energy savings by reducing both the number of tag broadcasts and the comparisons required. A 2-segment SLIQ incurs an average IPC loss of 0.2% over the entire SPEC CPU2000 suite, while achieving a 25.2% reduction in issue latency compared to a monolithic 128-entry issue queue for an 8-wide superscalar processor. An 8-segment SLIQ improves scalability by reducing the issue latency by 38.3% while incurring an IPC loss of only 2.3%. Further, the 8-segment SLIQ significantly reduces the energy consumption and energy-delay product by 48.3% and 67.4%, respectively, on average.
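The sketch below contrasts broadcast wakeup with the direct-indexed wakeup that SLIQ-style designs use to cut tag broadcasts and comparisons; the data structures are deliberately simplified and do not model the actual microarchitecture.

```python
# Simplified issue-queue model: with broadcast, a completing result's tag is
# compared against every entry; with direct indexing, the producer records its
# consumers and wakes only those entries.

class IssueQueue:
    def __init__(self):
        self.entries = []                 # each entry: set of outstanding source tags
        self.consumers = {}               # tag -> indices of entries waiting on it

    def insert(self, sources):
        idx = len(self.entries)
        self.entries.append(set(sources))
        for tag in sources:
            self.consumers.setdefault(tag, []).append(idx)

    def wakeup_broadcast(self, tag):      # baseline: O(entries) tag comparisons
        for entry in self.entries:
            entry.discard(tag)

    def wakeup_indexed(self, tag):        # direct indexing: touch only consumers
        for idx in self.consumers.pop(tag, []):
            self.entries[idx].discard(tag)

    def ready(self):                      # entries whose sources are all available
        return [i for i, e in enumerate(self.entries) if not e]

q = IssueQueue()
q.insert({"r1", "r2"})
q.insert({"r2"})
q.wakeup_indexed("r2")                    # wakes only the two consumers of r2
print(q.ready())                          # -> [1]
```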
Abstract:
Workstation clusters equipped with high-performance interconnects having programmable network processors offer interesting opportunities to enhance the performance of the parallel applications run on them. In this paper, we propose schemes where certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster, where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple-processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes where certain tuple-processing activity is offloaded to the network processor. The first two schemes offload the tuple-splitting activity, i.e., the computation to identify the node on which to process each tuple, resulting in an execution-time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. In the third scheme, in addition to offloading tuple-processing activity, the disk and network interface are combined to avoid the I/O bus bottleneck, which results in speedups up to 1.16, but with high host-processor utilization. Our fourth scheme, where the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 along with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well even in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
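An illustrative version of the tuple-splitting computation that the first two schemes offload is sketched below; the hash choice, tuple layout, and names are ours, not the paper's.

```python
from zlib import crc32

# Tuple splitting: a hash of the partitioning key decides the destination node
# for each tuple. On the proposed schemes this loop runs on the network
# processor, freeing the host CPU for join and aggregation work.

NUM_NODES = 4

def split_tuple(tup, key_index=0):
    key = str(tup[key_index]).encode()
    return crc32(key) % NUM_NODES          # destination node for this tuple

batch = [(101, "widget"), (205, "gear"), (342, "bolt")]
routed = {}
for t in batch:
    routed.setdefault(split_tuple(t), []).append(t)
print(routed)                              # tuples grouped by destination node
```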