Abstract:
Random Access Scan (RAS), which addresses individual flip-flops in a design through a memory-array-like row-and-column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation compared to traditional Serial Scan. This is because typically only a very limited number of "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that exploits basis vectors and linear algebra to further optimize test application in RAS by performing write operations on multiple bits consecutively. Simulations on benchmark circuits show an average 2-3x speedup in test write time compared to conventional RAS.
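As an illustrative sketch of why word-organized access cuts write operations, the toy Python function below counts the writes needed to move a hypothetical 16-flip-flop scan state to the next test vector, bit-addressed versus word-addressed (the paper's basis-vector decoder itself is not modeled here):

```python
def writes_needed(current, target, word_size=8):
    """Compare bit-addressed vs word-addressed RAS write counts.

    current, target: lists of 0/1 flip-flop values (same length).
    """
    # Bits that differ between the current state and the next test vector
    # ("care" bits that must actually be written)
    diff = [c ^ t for c, t in zip(current, target)]
    # Bit-addressed RAS: one write operation per differing flip-flop
    bit_writes = sum(diff)
    # Word-addressed RAS: one write per word containing at least one differing bit
    word_writes = sum(
        1 for i in range(0, len(diff), word_size) if any(diff[i:i + word_size])
    )
    return bit_writes, word_writes

current = [0] * 16
target = [0] * 16
target[3] = target[4] = target[12] = 1
print(writes_needed(current, target))  # (3, 2): 3 bit writes vs 2 word writes
```

Only the differing ("care") bits drive the cost; grouping them into words amortizes writes that land in the same row.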
Abstract:
Dielectric properties of potassium titanyl phosphate (KTP) have been investigated as a function of thickness and frequency, as well as of annealing treatment under various atmospheres. The low-frequency dielectric constant of KTP crystals is shown to depend on sample thickness, a feature attributed to the existence of surface layers. The frequency-dependent dielectric response of KTP exhibits a non-Debye relaxation with a distribution of relaxation times. The dielectric behavior of KTP samples annealed in various atmospheres shows that the low-frequency dielectric constant is influenced by the contribution from space charge layers. Prolonged annealing leads to surface degradation, resulting in the formation of a surface layer of lower dielectric constant; this degradation is least when annealing is performed in dry oxygen. From analysis of the dielectric data using the complex electric modulus, alpha(m) has been evaluated for the virgin and annealed samples. (C) 1996 American Institute of Physics.
Abstract:
An escape mechanism in a bistable system driven by colored noise of large but finite correlation time (tau) is analyzed. It is shown that the fluctuating potential theory [Phys. Rev. A 38, 3749 (1988)] becomes invalid in a region around the inflection points of the bistable potential, so that this theory underestimates the mean first passage time at finite tau. It is shown that transitions at large but finite tau are caused by noise spikes, with edges rising and falling exponentially on a time scale of O(tau). Simulations of the dynamics of the bistable system driven by noise spikes of this nature clearly reveal the physical mechanism behind the transition.
Abstract:
In this article, we consider the single-machine scheduling problem with past-sequence-dependent (p-s-d) setup times and a learning effect. The setup times are proportional to the total length of the jobs already scheduled, i.e. p-s-d setup times. The learning effect reduces the actual processing time of a job because workers perform the same job or activity repeatedly; hence, the processing time of a job depends on its position in the sequence. We take the total absolute difference in completion times (TADC) as the objective function. This problem is denoted 1/LE, (Spsd)/TADC in Kuo and Yang (2007) ('Single Machine Scheduling with Past-sequence-dependent Setup Times and Learning Effects', Information Processing Letters, 102, 22-26). Two parameters, a and b, denote the constant learning index and the normalising index, respectively. A parametric analysis of b on the 1/LE, (Spsd)/TADC problem for a given value of a is carried out, and a computational algorithm is developed to obtain the number of optimal sequences and the range of b over which each sequence is optimal, for a given value of a. We derive two bounds, b* for the normalising constant b and a* for the learning index a, and show that when a < a* or b > b*, the optimal sequence is obtained by placing the longest job first and the remaining jobs in shortest-processing-time (SPT) order.
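A minimal sketch of the timing model and objective, assuming the usual Kuo-Yang formulation (actual processing time p_j * r^a at sequence position r, and setup time proportional via b to the actual processing already completed); the numbers used here are purely illustrative:

```python
def tadc(seq, p, a, b):
    """Total absolute difference in completion times (TADC) for a job sequence.

    seq: job indices in processing order; p: dict of normal processing times;
    a: learning index (a <= 0); b: normalising index for p-s-d setups.
    Assumes actual processing time p[j] * r**a and setup time b * (actual
    processing completed so far) -- treat this model as an assumption.
    """
    completions, t, done = [], 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        setup = b * done                # past-sequence-dependent setup
        actual = p[j] * r ** a          # position-based learning effect
        t += setup + actual
        done += actual
        completions.append(t)
    # Sum of |C_i - C_j| over all job pairs
    return sum(abs(ci - cj)
               for i, ci in enumerate(completions)
               for cj in completions[i + 1:])

print(tadc([0, 1], {0: 1.0, 1: 2.0}, a=0.0, b=0.0))  # 2.0
```

With a = 0 and b = 0 this reduces to plain TADC; sweeping b for a fixed a reproduces the kind of parametric analysis described above.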
Abstract:
Many web sites incorporate dynamic web pages to deliver customized contents to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating situations. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved as compared to a pure fragment caching approach.
Abstract:
Vicsek et al. proposed a biologically inspired model of self-propelled particles, now commonly referred to as the Vicsek model. Recently, attention has been directed at modifying the Vicsek model to improve its convergence properties. In this paper, we propose two modifications of the Vicsek model which lead to significant improvements in convergence times. The modifications involve an additional term in the heading update rule which depends only on the current or past states of the particles' neighbors. The variation in convergence properties as the parameters of these modified versions are changed is closely investigated. It is found that in both cases there exists an optimal value of the parameter which reduces convergence times significantly, and that the system undergoes a phase transition as the parameter is increased beyond this optimal value. (C) 2012 Elsevier B.V. All rights reserved.
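The standard Vicsek heading update, plus a hypothetical extra term weighted by a parameter w on the neighbors' previous headings, can be sketched as follows; w is a stand-in for the paper's modification parameter, not its exact rule, and w = 0 recovers the plain Vicsek update:

```python
import cmath
import random

def update_headings(pos, theta, r=1.0, eta=0.1, w=0.0, theta_prev=None, rng=random):
    """One Vicsek heading update for particles at positions pos with headings theta.

    Each particle aligns with the average heading of neighbors within radius r,
    plus uniform angular noise of width eta. The w-weighted term on neighbors'
    previous headings (theta_prev) is a hypothetical memory modification.
    """
    new = []
    for i in range(len(pos)):
        s = 0j
        for j in range(len(pos)):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy <= r * r:          # j is a neighbor of i
                s += cmath.exp(1j * theta[j])       # current-heading term
                if w and theta_prev is not None:
                    s += w * cmath.exp(1j * theta_prev[j])  # memory term
        noise = rng.uniform(-eta / 2.0, eta / 2.0)
        new.append(cmath.phase(s) + noise)
    return new
```

With zero noise and aligned neighbors the update is a fixed point, which is the convergence target whose approach time the modifications accelerate.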
Abstract:
In traction applications, inverters need high reliability on account of wide variation in operating conditions, extreme ambient conditions, thermal cycling and varying DC-link voltage. It is therefore important to have good knowledge of the switching characteristics of the devices used. The focus of this paper is to investigate and compare the switching characteristics and losses of IGBT modules for traction applications. The dependence of device transition times and switching energy losses on DC-link voltage, device current and operating temperature is studied experimentally.
Abstract:
6PANview [1] is a Wireless Sensor Network (WSN) monitoring system for 6LoWPAN/RPL networks, developed as an overlay network for a WSN application. A monitoring system, while performing its operations for maintaining the health of the monitored network, must also be conscious of its impact on application performance and strive to minimize this impact. To this end, we propose a centralized scheduling algorithm within 6PANview which non-intrusively analyzes application traffic arrival patterns at the base station, identifies network idle periods and schedules monitoring activities. The algorithm finds the periodic sequences most likely to have given rise to the pattern of arrivals seen at the base station; parts of those sequences are then extended to coarsely predict future traffic and find epochs of predicted low traffic, at which monitoring traffic or other activities can be scheduled. We present simulation results for the proposed prediction and scheduling algorithm and its implementation as part of 6PANview. As an enhancement, we briefly discuss using 6PANview's overlay network architecture for distributed scheduling.
Abstract:
Inverter dead-time, introduced to prevent shoot-through faults, causes harmonic distortion and a change in the fundamental voltage at the inverter output. Typical dead-time compensation schemes ensure that the amplitude of the fundamental output current is as desired and also improve current waveform quality significantly. However, even with compensation, the motor line current waveform is observed to be distorted close to the current zero-crossings. An important reason for this zero-crossover distortion is that IGBT switching transition times are significantly longer at low currents than at high currents. Hence, this paper proposes an improved dead-time compensation scheme which makes use of measured IGBT switching transition times at low currents. Measured line current waveforms in a 2.2 kW induction motor drive with the proposed compensation scheme are compared against those with conventional dead-time compensation and with no compensation. The experimental results clearly demonstrate the improvement in line current waveform quality with the proposed method.
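The textbook average-voltage-error model behind dead-time compensation, extended with current-dependent transition times supplied as functions, can be sketched as below (an illustrative model with made-up numbers, not the paper's exact scheme):

```python
def deadtime_voltage_error(i, v_dc, t_s, t_d, t_on_of_i, t_off_of_i):
    """Average output-voltage error over one switching period due to dead-time.

    i: phase current (A); v_dc: DC-link voltage (V); t_s: switching period (s);
    t_d: dead-time (s); t_on_of_i / t_off_of_i: measured turn-on / turn-off
    transition times as functions of |current| -- capturing that transitions
    are longer at low currents. A compensator would subtract this error from
    the reference voltage.
    """
    sign = 1.0 if i > 0 else (-1.0 if i < 0 else 0.0)
    t_err = t_d + t_on_of_i(abs(i)) - t_off_of_i(abs(i))
    return sign * t_err / t_s * v_dc

# Illustrative values: 600 V DC link, 10 kHz switching, 2 us dead-time,
# constant (current-independent) transition times for simplicity
err = deadtime_voltage_error(5.0, 600.0, 1e-4, 2e-6,
                             lambda i: 0.5e-6, lambda i: 1.0e-6)
print(err)  # 9.0 V average error, to be compensated
```

Making t_on_of_i and t_off_of_i reflect measured low-current behavior is what distinguishes the improved scheme from a constant-offset compensator.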
Abstract:
The estimation of water and solute transit times in catchments is crucial for predicting the response of hydrosystems to external forcings (climatic or anthropogenic). The hydrogeochemical signatures of tracers (natural or anthropogenic) in streams have been widely used to estimate transit times in catchments, as they integrate the various processes involved. However, most of these tracers are well suited only for catchments with mean transit times below about 4-5 years. Since the second half of the 20th century, the intensification of agriculture has led to a general increase in the nitrogen load of rivers. As nitrate is mainly transported by groundwater in agricultural catchments, this signal can be used to estimate transit times greater than several years, even though nitrate is not a conservative tracer. Conceptual hydrological models can be used to estimate catchment transit times provided their consistency is demonstrated, based on their ability to simulate stream chemical signatures at various time scales and catchment internal processes such as N storage in groundwater. The objective of this study was to assess whether a conceptual lumped model was able to simulate the observed patterns of nitrogen concentration at various time scales, from seasonal to pluriannual, and thus whether it was relevant for estimating nitrogen transit times in headwater catchments. A conceptual lumped model, representing shallow groundwater flow as two parallel linear stores with double porosity and riparian processes by a constant nitrogen removal function, was applied to two paired agricultural catchments belonging to the Research Observatory ORE AgrHys. The Generalized Likelihood Uncertainty Estimation (GLUE) approach was used to estimate parameter values and uncertainties.
The model performance was assessed on (i) its ability to simulate the contrasting patterns of stream flow and stream nitrate concentrations at seasonal and inter-annual time scales, (ii) its ability to simulate the patterns observed in groundwater at the same temporal scales, and (iii) the consistency of long-term simulations using the calibrated model with the general pattern of the increase in nitrate concentration in the region since the beginning of the intensification of agriculture in the 1960s. The simulated nitrate transit times were found to be more sensitive to climate variability than to parameter uncertainty, and average values were consistent with results from other studies in the same region involving modeling and groundwater dating. This study shows that a simple model can be used to simulate the main dynamics of nitrogen in an intensively polluted catchment and then to estimate the transit times of these pollutants in the system, which is crucial for guiding the design and assessment of mitigation plans. (C) 2015 Elsevier B.V. All rights reserved.
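A minimal sketch of the two-parallel-linear-store idea with a constant riparian removal fraction (illustrative rate constants and complete mixing assumed; this is not the paper's calibrated model):

```python
def simulate(recharge, conc_in, k_fast=0.2, k_slow=0.02, split=0.5, removal=0.1):
    """Two parallel linear groundwater stores draining to a stream.

    recharge: water input per time step (mm); conc_in: nitrate concentration
    of the recharge; k_fast/k_slow: outflow rate constants of the two stores;
    split: fraction of recharge routed to the fast store; removal: constant
    riparian nitrogen removal fraction. Returns (flow, stream concentration)
    per step. All parameter values are illustrative, not calibrated.
    """
    s_fast = s_slow = 0.0   # water storage in each store
    m_fast = m_slow = 0.0   # nitrate mass in each store
    out = []
    for r, c in zip(recharge, conc_in):
        s_fast += split * r;         m_fast += split * r * c
        s_slow += (1 - split) * r;   m_slow += (1 - split) * r * c
        q_f, q_s = k_fast * s_fast, k_slow * s_slow       # linear outflow
        n_f, n_s = k_fast * m_fast, k_slow * m_slow       # complete mixing
        s_fast -= q_f; s_slow -= q_s; m_fast -= n_f; m_slow -= n_s
        q = q_f + q_s
        n = (1 - removal) * (n_f + n_s)                   # riparian removal
        out.append((q, n / q if q > 0 else 0.0))
    return out

print(simulate([10.0], [1.0])[0])  # one step: flow 1.1 mm, concentration 0.9
```

The slow store is what stretches nitrate transit times to years or decades, which is why the stream signal lags the agricultural input.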
Abstract:
Prediction of the queue waiting times of jobs submitted to production parallel batch systems is important for providing overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times for jobs by employing multi-class classification of similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbors (kNN) method. It then performs multi-class classification using Support Vector Machines (SVMs) over all job classes. The probabilities given by the SVM for the class predicted by kNN and its neighboring classes are used to provide a set of ranges of predicted wait times with associated probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize job wait times. Experiments with different production supercomputer job traces show that our prediction strategies give correct predictions for about 77-87% of the jobs, and improve accuracy by about 12% compared to the next best existing method. Experiments with our meta-scheduling strategy using different production and synthetic job traces, for various system sizes, partitioning schemes and workloads, show that it gives much improved performance compared to existing scheduling policies, reducing the overall average queue waiting time of jobs by about 47%.
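The kNN point-prediction stage can be sketched in a few lines of Python (a simplified stand-in: the SVM range-classification stage and the dynamic choice of k are omitted, and the feature set is an assumption):

```python
def knn_wait_time(history, job, k=3):
    """Point wait-time prediction from the k most similar historical jobs.

    history: list of (features, observed_wait_time) records; job: feature
    tuple for the new job, e.g. (requested_cores, requested_walltime,
    queue_length_at_submit) -- the features here are illustrative.
    Returns the mean wait time of the k nearest neighbors.
    """
    def dist(a, b):
        # Euclidean distance in feature space (real systems would normalize)
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(history, key=lambda rec: dist(rec[0], job))[:k]
    return sum(wait for _, wait in nearest) / len(nearest)

history = [((1.0, 1.0), 10.0), ((1.0, 2.0), 20.0), ((10.0, 10.0), 100.0)]
print(knn_wait_time(history, (1.0, 1.0), k=2))  # 15.0
```

In the full framework this point estimate selects a wait-time class, and the SVM's class probabilities then widen it into ranges with confidence levels.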
Abstract:
An avidin layer was bound to the surface of a silicon wafer modified with aldehyde. The avidin-biotin interaction was used to immobilize biotin-labeled mouse monoclonal anti-M13 antibody (antibody GP3). The surface was incubated in a solution containing phage M13KO7, which was trapped by antibody GP3; the specific interaction between phage M13KO7 and antibody GP3 resulted in a variation of layer thickness that was detected by imaging ellipsometry. The results showed a saturated layer of antibody GP3 about 6.9 nm thick on the silicon wafer surface. The layer of phage M13KO7 bound to antibody GP3 was 17.5 nm thick at a concentration of 1.1×10^10 pfu/mL. Each variation of layer thickness corresponded to a concentration of phage M13KO7 in the range 0.1×10^10–2.5×10^10 pfu/mL, with a sensitivity of 10^9 pfu/mL. Compared with other methods, the optical protein chip requires only a short measurement time, is label-free and quantitative, and its results can be visualized. This study sheds light on the interactions between antibody and virus, showing potential for the early diagnosis of viral disease.
Abstract:
Cosmic birefringence (CB), a rotation of the photon-polarization plane in vacuum, is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.
In this thesis, we develop a minimum-variance estimator formalism for reconstructing direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between CMB temperature and the rotation for detecting the CB signal for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
We then consider other parity-violating physics beyond the standard model, in particular a chiral inflationary gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.
We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.
In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
Finally, we discuss prospects for related future investigations.
Abstract:
During late- and post-glacial times, lakes played a leading role in the development of the landscape of the north-west European part of the USSR. A variety of geographic circumstances created a great diversity of natural conditions in the lakes and determined the composition of their diatom floras. The basic stages in the development of the lake diatom flora are linked with general climatic changes. The deep-water regions of the large periglacial lakes of the north-west USSR are inhabited by planktonic diatoms of the genera Melosira and Cyclotella. Diatom analysis is further applied to the study of the history of the lakes of the north-west USSR.