837 results for Measure of time
Abstract:
Our everyday visual experience frequently involves searching for objects in clutter. Why are some searches easy and others hard? It is generally believed that the time taken to find a target increases as it becomes similar to its surrounding distractors. Here, I show that while this is qualitatively true, the exact relationship is in fact not linear. In a simple search experiment, when subjects searched for a bar differing in orientation from its distractors, search time was inversely proportional to the angular difference in orientation. Thus, rather than taking search reaction time (RT) to be a measure of target-distractor similarity, we can literally turn search time on its head (i.e. take its reciprocal 1/RT) to obtain a measure of search dissimilarity that varies linearly over a large range of target-distractor differences. I show that this dissimilarity measure has the properties of a distance metric, and report two interesting insights that come from this measure: First, for a large number of searches, search asymmetries are relatively rare and, when they do occur, differ by a fixed distance. Second, search distances can be used to elucidate object representations that underlie search - for example, these representations are roughly invariant to three-dimensional view. Finally, search distance has a straightforward interpretation in the context of accumulator models of search, where it is proportional to the discriminative signal that is integrated to produce a response. This is consistent with recent studies that have linked this distance to neuronal discriminability in visual cortex. Thus, while search time remains the more direct measure of visual search, its reciprocal also has the potential for interesting and novel insights. (C) 2012 Elsevier Ltd. All rights reserved.
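As a simple illustration of the reciprocal-RT idea, the sketch below fits 1/RT against the target-distractor orientation difference. The reaction times and orientation values are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical mean search reaction times (s) for targets differing from the
# distractors by the given orientation (deg); values chosen so RT ~ 1/angle.
delta_deg = np.array([5, 10, 15, 20, 30, 45, 60], dtype=float)
mean_rt = np.array([3.20, 1.70, 1.15, 0.90, 0.65, 0.50, 0.42])

dissimilarity = 1.0 / mean_rt  # reciprocal RT as a search "distance"

# Linear fit of 1/RT against orientation difference; a high R^2 reflects the
# roughly linear relationship described in the abstract.
slope, intercept = np.polyfit(delta_deg, dissimilarity, 1)
pred = slope * delta_deg + intercept
ss_res = np.sum((dissimilarity - pred) ** 2)
ss_tot = np.sum((dissimilarity - dissimilarity.mean()) ** 2)
print(f"slope = {slope:.3f} /(s*deg), intercept = {intercept:.3f} /s, "
      f"R^2 = {1 - ss_res / ss_tot:.3f}")
```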
Abstract:
Knowledge about program worst case execution time (WCET) is essential in validating real-time systems and helps in effective scheduling. One popular approach used in industry is to measure the execution time of program components on the target architecture and combine them using static analysis of the program. Measurements need to be taken in the least intrusive way in order to avoid affecting the accuracy of the estimated WCET. Several programs exhibit phase behavior, wherein the program's dynamic execution is observed to be composed of phases. Each phase, being distinct from the others, exhibits homogeneous behavior with respect to cycles per instruction (CPI), data cache misses, etc. In this paper, we show that phase behavior has important implications for timing analysis. We make use of the homogeneity of a phase to reduce instrumentation overhead while ensuring that the accuracy of the WCET estimate is not greatly affected. We propose a model for estimating WCET using static worst case instruction counts of individual phases and a function of the measured average CPI. We describe a WCET analyzer built on this model which targets two different architectures. The WCET analyzer is observed to give safe estimates for most benchmarks considered in this paper. The tightness of the WCET estimates is observed to be improved for most benchmarks compared to Chronos, a well known static WCET analyzer.
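A minimal sketch of a phase-based WCET estimate of the kind described above: static worst-case instruction counts per phase are combined with a function of the measured average CPI. The phase data, the inflation margin, and the 1 GHz clock are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    worst_case_instructions: int   # from static analysis (assumed values)
    measured_avg_cpi: float        # from low-overhead measurement (assumed values)

def estimate_wcet_cycles(phases, cpi_margin=1.2):
    """Bound the cycle count by combining each phase's worst-case instruction
    count with a function of its measured average CPI (here, simply an
    inflated CPI; the margin is a placeholder, not the paper's function)."""
    return sum(p.worst_case_instructions * p.measured_avg_cpi * cpi_margin
               for p in phases)

phases = [
    Phase("init",    worst_case_instructions=50_000,    measured_avg_cpi=1.4),
    Phase("compute", worst_case_instructions=2_000_000, measured_avg_cpi=0.9),
    Phase("output",  worst_case_instructions=80_000,    measured_avg_cpi=1.8),
]

cycles = estimate_wcet_cycles(phases)
print(f"Estimated WCET: {cycles:.0f} cycles "
      f"({cycles / 1e9 * 1e3:.2f} ms at a hypothetical 1 GHz clock)")
```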
Abstract:
This paper presents an experimental study that was conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities that are carried out in the design process, depending on the method that is employed; in other words, whether the design methods that are used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity in the solutions that are obtained. Creativity was evaluated by means of the degree of novelty and the level of resolution of the designed solutions using the creative product semantic scale (CPSS) questionnaire. The results show that there are significant differences between the amounts of time devoted to activities related to understanding the problem and the typology of the design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small in intuitive methods, such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It has also been found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to determine to what extent they are affected. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]
Abstract:
We propose an iterative algorithm to detect transient segments in audio signals. The short-time Fourier transform (STFT) is used to detect rapid local changes in the audio signal. The algorithm iterates two steps: (a) calculating a function of the STFT and (b) building a transient signal. A dynamic thresholding scheme is used to locate the potential positions of transients in the signal. The iterative procedure ensures that genuine transients are built up while localised spectral noise is suppressed by using an energy criterion. The extracted transient signal is later compared to a ground truth dataset. The algorithm performed well on two databases. On the EBU-SQAM database of monophonic sounds, the algorithm achieved an F-measure of 90%, while on our database of polyphonic audio an F-measure of 91% was achieved. This technique is being used as a preprocessing step for a tempo analysis algorithm and a TSR (Transients + Sines + Residue) decomposition scheme.
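A rough, non-iterative sketch in the spirit of the abstract: an STFT-based onset function with a median-based dynamic threshold. It is not the authors' two-step iterative procedure; frame sizes and threshold constants are assumed.

```python
import numpy as np
from scipy.signal import stft

def detect_transients(x, fs, frame=1024, hop=256):
    """Flag frames whose positive spectral flux exceeds a dynamic
    (local-median) threshold; a rough stand-in for the iterative
    STFT-based procedure described in the abstract."""
    _, _, Z = stft(x, fs=fs, nperseg=frame, noverlap=frame - hop)
    mag = np.abs(Z)
    # Positive spectral flux: energy increase between consecutive frames.
    flux = np.maximum(np.diff(mag, axis=1), 0).sum(axis=0)
    # Dynamic threshold: scaled local median (kernel size is assumed).
    kernel = 31
    pad = np.pad(flux, kernel // 2, mode="edge")
    local_med = np.array([np.median(pad[i:i + kernel]) for i in range(len(flux))])
    threshold = 1.5 * local_med + 1e-6
    onset_frames = np.where(flux > threshold)[0]
    return (onset_frames + 1) * hop / fs  # approximate transient times (s)

# Example: a click embedded in noise should be flagged near t = 0.5 s.
fs = 16000
x = 0.01 * np.random.randn(fs)
x[fs // 2:fs // 2 + 32] += 0.8
print(detect_transients(x, fs))
```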
Abstract:
When stimulated by a point source of cyclic AMP, a starved amoeba of Dictyostelium discoideum responds by putting out a hollow balloon-like membrane extension followed by a pseudopod. The effect of the stimulus is to influence the position where either of these protrusions is made on the cell rather than to cause them to be made. Because the pseudopod forms perpendicular to the cell surface, its location is a measure of the precision with which the cell can locate the cAMP source. Cells beyond 1 h of starvation respond non-randomly, with a precision that improves steadily thereafter. A cell that is starved for 1-2 h can locate the source accurately 43% of the time, and if starved for 6-7 h, 87% of the time. The response always has a high scatter; population-level heterogeneity reflects stochasticity in single-cell behaviour. From the angular distribution of the response, its maximum information content is estimated to be 2-3 bits. In summary, we quantitatively demonstrate the stochastic nature of the directional response and the increase in its accuracy over time.
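One hedged way such an information estimate could be computed is shown below: the entropy of a binned angular-error distribution is compared against that of a uniform (no-information) distribution. The binning and the synthetic error spread are assumptions, not the study's analysis.

```python
import numpy as np

def directional_information_bits(angular_errors_deg, n_bins=36):
    """Information gained about source direction, estimated as the entropy
    reduction of the binned angular-error distribution relative to a
    uniform (no-information) distribution. Binning choice is an assumption."""
    counts, _ = np.histogram(angular_errors_deg, bins=n_bins, range=(-180, 180))
    p = counts / counts.sum()
    p = p[p > 0]
    h_response = -(p * np.log2(p)).sum()   # entropy of observed responses
    h_uniform = np.log2(n_bins)            # entropy if responses were random
    return h_uniform - h_response

# Hypothetical well-starved cells: errors concentrated near 0 degrees.
rng = np.random.default_rng(0)
errors = rng.normal(0, 25, size=500)       # assumed angular spread
print(f"~{directional_information_bits(errors):.1f} bits")
```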
Abstract:
Even though satellite observations are the most effective means to gather global information in a short span of time, the challenges in this field still remain over continental landmass, despite most of the aerosol sources being land-based. This is a hurdle in global and regional aerosol climate forcing assessment. Retrieval of aerosol properties over land is complicated due to irregular terrain characteristics and the high and largely uncertain surface reflection which acts as 'noise' to the much smaller amount of radiation scattered by aerosols, which is the 'signal'. In this paper, we describe a satellite sensor - the 'Aerosol Satellite (AEROSAT)' - which is capable of retrieving aerosols over land with much more accuracy and reduced dependence on models. The sensor, utilizing a set of multi-spectral and multi-angle measurements of polarized components of radiation reflected from the Earth's surface, along with measurements of thermal infrared broadband radiance, results in a large reduction of the 'noise' component (compared to the 'signal'). A conceptual engineering model of AEROSAT has been designed, developed and used to measure the land-surface features in the visible spectral band. Analysing the received signals using a polarization radiative transfer approach, we demonstrate the superiority of this method. It is expected that satellites carrying sensors following the AEROSAT concept would be 'self-sufficient', obtaining all the relevant information required for aerosol retrieval from their own measurements.
Abstract:
Managing heat produced by computer processors is an important issue today, especially when the size of processors is decreasing rapidly while the number of transistors in the processor is increasing rapidly. This poster describes a preliminary study of the process of adding carbon nanotubes (CNTs) to a standard silicon paste covering a CPU. Measurements were made in two rounds of tests to compare the rate of cool-down with and without CNTs present. The silicon paste acts as an interface between the CPU and the heat sink, increasing the heat transfer rate away from the CPU. To the silicon paste was added 0.05% by weight of CNTs. These were not aligned. A series of K-type thermocouples was used to measure the temperature as a function of time in the vicinity of the CPU, following its shut-off. An Omega data acquisition system was attached to the thermocouples. The CPU temperature was not measured directly because attachment of a thermocouple would have prevented its automatic shut-off. A thermocouple in the paste containing the CNTs actually reached a higher temperature than the standard paste, an effect easily explained. But the rate of cooling with the CNTs was about 4.55% better.
Abstract:
Various ecological and other complex dynamical systems may exhibit abrupt regime shifts or critical transitions, wherein they reorganize from one stable state to another over relatively short time scales. Because of potential losses to ecosystem services, forecasting such unexpected shifts would be valuable. Using mathematical models of regime shifts, ecologists have proposed various early warning signals of imminent shifts. However, their generality and applicability to real ecosystems remain unclear because these mathematical models are considered too simplistic. Here, we investigate the robustness of recently proposed early warning signals of regime shifts in two well-studied ecological models, but with the inclusion of time-delayed processes. We find that the average variance may either increase or decrease prior to a regime shift and, thus, may not be a robust leading indicator in time-delayed ecological systems. In contrast, changing average skewness, increasing autocorrelation at short time lags, and reddening power spectra of time series of the ecological state variable all show trends consistent with those of models with no time delays. Our results provide insights into the robustness of early warning signals of regime shifts in a broader class of ecological systems.
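A generic sketch of the rolling-window early-warning indicators discussed above (variance, skewness, lag-1 autocorrelation), applied to a toy series with gradually weakening recovery; it does not reproduce the delayed ecological models studied in the paper.

```python
import numpy as np
from scipy.stats import skew

def early_warning_indicators(x, window=100):
    """Rolling variance, skewness, and lag-1 autocorrelation of a state
    variable; rising trends in such indicators are the kind of early
    warning signals discussed above."""
    var, skw, ac1 = [], [], []
    for i in range(window, len(x) + 1):
        w = x[i - window:i]
        var.append(np.var(w))
        skw.append(skew(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(skw), np.array(ac1)

# Toy AR(1) series whose recovery rate slowly weakens (critical slowing down):
# the lag-1 autocorrelation drifts upward as the hypothetical shift approaches.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(1, len(x)):
    ar_coef = 0.5 + 0.45 * t / len(x)   # assumed slow loss of resilience
    x[t] = ar_coef * x[t - 1] + rng.normal(0.0, 0.1)

var, skw, ac1 = early_warning_indicators(x)
print(f"lag-1 autocorrelation: {ac1[0]:.2f} (early) -> {ac1[-1]:.2f} (late)")
```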
Abstract:
We present a comprehensive study of two of the most experimentally relevant extensions of Kitaev's spinless model of a one-dimensional p-wave superconductor: those involving (i) longer-range hopping and superconductivity and (ii) inhomogeneous potentials. We commence with a pedagogical review of the spinless model and, as a means of characterizing topological phases exhibited by the systems studied here, we introduce bulk topological invariants as well as those derived from an explicit consideration of boundary modes. In time-reversal symmetric systems, we find that the longer range hopping leads to topological phases characterized by multiple Majorana modes. In particular, we investigate a spin model that respects a duality and maps to a fermionic model with multiple Majorana modes; we highlight the connection between these topological phases and the broken symmetry phases in the original spin model. In the presence of time-reversal symmetry breaking terms, we show that the topological phase diagram is characterized by an extended gapless regime. For the case of inhomogeneous potentials, we explore phase diagrams of periodic, quasiperiodic, and disordered systems. We present a detailed mapping between normal state localization properties of such systems and the topological phases of the corresponding superconducting systems. This powerful tool allows us to leverage the analyses of Hofstadter's butterfly and the vast literature on Anderson localization to the question of Majorana modes in superconducting quasiperiodic and disordered systems, respectively. We briefly touch upon the synergistic effects that can be expected in cases where long-range hopping and disorder are both present.
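A minimal numerical sketch of a finite Kitaev chain with optional next-nearest-neighbour hopping and pairing: the real-space Bogoliubov-de Gennes matrix is diagonalized, and near-zero eigenvalues signal Majorana end modes. Parameters, sign conventions, and the zero-mode cutoff are illustrative, not taken from the paper.

```python
import numpy as np

def kitaev_bdg(n_sites, mu, t1, d1, t2=0.0, d2=0.0):
    """Real-space BdG matrix of an open Kitaev chain with nearest- (t1, d1)
    and next-nearest-neighbour (t2, d2) hopping and p-wave pairing."""
    h = -mu * np.eye(n_sites)
    delta = np.zeros((n_sites, n_sites))
    for r, (t, d) in enumerate([(t1, d1), (t2, d2)], start=1):
        for i in range(n_sites - r):
            h[i, i + r] = h[i + r, i] = -t
            delta[i, i + r] = d      # antisymmetric pairing block
            delta[i + r, i] = -d
    return np.block([[h, delta], [delta.T, -h.T]])

# Nearest-neighbour chain in its topological phase (|mu| < 2*t1): the spectrum
# contains a pair of exponentially split near-zero modes localized at the ends.
energies = np.linalg.eigvalsh(kitaev_bdg(n_sites=60, mu=0.5, t1=1.0, d1=1.0))
print("near-zero modes:", int(np.sum(np.abs(energies) < 1e-6)))
```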
Abstract:
Research has been undertaken to ascertain the predictability of non-stationary time series using wavelet and Empirical Mode Decomposition (EMD) based time series models. Methods have been developed in the past to decompose a time series into components. Forecasting of these components combined with the random component could yield predictions. Following this approach, wavelet and EMD analyses have been incorporated separately, each of which decomposes a time series into independent orthogonal components with both time and frequency localizations. The component series are fit with specific auto-regressive models to obtain forecasts which are later combined to obtain the actual predictions. Four non-stationary streamflow sites (USGS data resources) of monthly total volumes and two non-stationary gridded rainfall sites (IMD) of monthly total rainfall are considered for the study. The predictability is checked for six and twelve months ahead forecasts across both the methodologies. Based on performance measures, it is observed that the wavelet based method has better prediction capabilities than the EMD based method despite some of the limitations of time series methods and the manner in which decomposition takes place. Finally, the study concludes that the wavelet based time series algorithm can be used to model events such as droughts with reasonable accuracy. Also, some modifications that can be made in the model have been discussed that could extend the scope of applicability to other areas in the field of hydrology. (C) 2013 Elsevier B.V. All rights reserved.
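A rough sketch of the wavelet-plus-autoregressive idea: the series is split into wavelet components, an AR model is fit to each reconstructed component, and the component forecasts are summed. The libraries (PyWavelets, statsmodels), wavelet choice, decomposition level, and lag order are assumptions, not the study's configuration.

```python
import numpy as np
import pywt
from statsmodels.tsa.ar_model import AutoReg

def wavelet_ar_forecast(series, horizon=12, wavelet="db4", level=3, lags=12):
    """Decompose into wavelet components, fit an AR model to each
    reconstructed component, and sum the component forecasts."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    forecast = np.zeros(horizon)
    for k in range(len(coeffs)):
        # Reconstruct the k-th component by zeroing all other coefficients.
        masked = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        comp = pywt.waverec(masked, wavelet)[:len(series)]
        fit = AutoReg(comp, lags=lags).fit()
        forecast += fit.predict(start=len(comp), end=len(comp) + horizon - 1)
    return forecast

# Toy monthly series with an annual cycle (a stand-in for streamflow volumes).
rng = np.random.default_rng(2)
months = np.arange(360)
y = 100 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
print(wavelet_ar_forecast(y, horizon=6))
```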
Abstract:
Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and noise levels) are high.
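A standard Gillespie (stochastic simulation) sketch of the two-stage transcription-translation model that such analytical distributions are typically compared against; the rate constants and run lengths are arbitrary placeholders.

```python
import numpy as np

def gillespie_two_stage(k_m=2.0, g_m=1.0, k_p=5.0, g_p=0.1, t_end=50.0, seed=0):
    """Gillespie simulation of the two-stage model:
       DNA -> DNA + mRNA (rate k_m),     mRNA -> 0    (rate g_m * m),
       mRNA -> mRNA + protein (k_p * m), protein -> 0 (rate g_p * p).
    Returns the mRNA and protein copy numbers at t_end."""
    rng = np.random.default_rng(seed)
    t, m, p = 0.0, 0, 0
    while t < t_end:
        rates = np.array([k_m, g_m * m, k_p * m, g_p * p])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        u = rng.uniform(0.0, total)
        if u < rates[0]:
            m += 1
        elif u < rates[:2].sum():
            m -= 1
        elif u < rates[:3].sum():
            p += 1
        else:
            p -= 1
    return m, p

# Sample the (quasi) steady-state protein number over independent realizations.
proteins = np.array([gillespie_two_stage(seed=s)[1] for s in range(200)])
print(f"mean = {proteins.mean():.1f}, "
      f"Fano factor = {proteins.var() / proteins.mean():.2f}")
```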
Abstract:
Complex biological systems such as the human brain can be expected to be inherently nonlinear and hence difficult to model. Most of the previous studies on investigations of brain function have either used linear models or parametric nonlinear models. In this paper, we propose a novel application of a nonlinear measure of phase synchronization based on recurrences, the correlation between probabilities of recurrence (CPR), to study seizures in the brain. The advantage of this nonparametric method is that it makes very few assumptions, thus making it possible to investigate brain functioning in a data-driven way. We have demonstrated the utility of the CPR measure for the study of phase synchronization in multichannel seizure EEG recorded from patients with global as well as focal epilepsy. For the case of global epilepsy, brain synchronization using the thresholded CPR matrix of multichannel EEG signals showed clear differences in results obtained for epileptic seizure and pre-seizure. Brain headmaps obtained for seizure and pre-seizure cases provide meaningful insights about synchronization in the brain in those states. The headmap in the case of focal epilepsy clearly enables us to identify the focus of the epilepsy, which provides certain diagnostic value. Comparative studies with linear correlation have shown that the nonlinear measure CPR outperforms the linear correlation measure. (C) 2014 Elsevier Ltd. All rights reserved.
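A compact sketch of the CPR index as it is commonly defined: the generalized autocorrelation p(tau) of each signal is estimated from its recurrences, and CPR is the correlation of the two normalized curves. Embedding is omitted and the recurrence threshold is an assumed fraction of the standard deviation.

```python
import numpy as np

def recurrence_probability(x, eps, max_lag):
    """Generalized autocorrelation p(tau): fraction of points whose value
    recurs within eps after tau steps (scalar signals, no embedding here)."""
    n = len(x)
    return np.array([np.mean(np.abs(x[:n - tau] - x[tau:]) < eps)
                     for tau in range(1, max_lag + 1)])

def cpr(x, y, eps_frac=0.1, max_lag=200):
    """Correlation between probabilities of recurrence of two signals."""
    px = recurrence_probability(x, eps_frac * np.std(x), max_lag)
    py = recurrence_probability(y, eps_frac * np.std(y), max_lag)
    px = (px - px.mean()) / px.std()
    py = (py - py.mean()) / py.std()
    return float(np.mean(px * py))

# Two noisy oscillations with a constant phase offset and different amplitudes:
# a CPR close to 1 indicates phase synchronization despite the amplitude mismatch.
rng = np.random.default_rng(4)
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.2 * rng.standard_normal(t.size)
y = 2.0 * np.sin(t + 0.8) + 0.2 * rng.standard_normal(t.size)
print(f"CPR = {cpr(x, y):.2f}")
```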
Abstract:
This paper presents an experimental procedure to determine the acoustic and vibration behavior of an inverter-fed induction motor based on measurements of the current spectrum, acoustic noise spectrum, overall noise in dB, and overall A-weighted noise in dBA. Measurements are carried out on space-vector modulated 8-hp and 3-hp induction motor drives over a range of carrier frequencies at different modulation frequencies. The experimental data help to distinguish between regions of high and low acoustic noise levels. The measurements also bring out the impact of carrier frequency on the acoustic noise. The sensitivity of the overall noise to carrier frequency is indicative of the relative dominance of the high-frequency electromagnetic noise over mechanical and aerodynamic components of noise. Based on the measured current and acoustic noise spectra, the ratio of dynamic deflection on the stator surface to the product of fundamental and harmonic current amplitudes is obtained at each operating point. The variation of this ratio of deflection to current product with carrier frequency indicates the resonant frequency clearly and also gives a measure of the amplification of vibration at frequencies close to the resonant frequency. This ratio is useful to predict the magnitude of acoustic noise corresponding to significant time-harmonic currents flowing in the stator winding.
Abstract:
It is well known that wrist pulse signals contain information about the status of health of a person, and hence diagnosis based on pulse signals has long been considered important. In this paper, the efficacy of signal processing techniques in extracting useful information from wrist pulse signals is demonstrated using signals recorded under two different experimental conditions, viz. a before-lunch condition and an after-lunch condition. We have used Pearson's product-moment correlation coefficient, which is an effective measure of phase synchronization, in making a statistical analysis of wrist pulse signals. Contour plots and box plots are used to illustrate various differences. Two-sample t-tests show that the correlations exhibit statistically significant differences between the groups. Results show that the correlation coefficient is effective in distinguishing the changes taking place after having lunch. This paper demonstrates the ability of wrist pulse signals to detect changes occurring under two different conditions. The study assumes importance in view of the limited literature available on the analysis of wrist pulse signals in the case of food intake and also in view of its potential health care applications.
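A small sketch of the statistical comparison described: Pearson correlation coefficients are computed for pairs of signals in each condition and compared with a two-sample t-test. The signals and coupling levels below are synthetic placeholders, not wrist-pulse recordings.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(3)

def condition_correlations(coupling, n_subjects=30, n_samples=1000):
    """Pearson correlations for synthetic signal pairs; `coupling` stands in
    for the (unknown) physiological synchronization level in one condition."""
    vals = []
    for _ in range(n_subjects):
        shared = rng.standard_normal(n_samples)
        a = shared + 0.5 * rng.standard_normal(n_samples)
        b = coupling * shared + (1.0 - coupling) * rng.standard_normal(n_samples)
        vals.append(pearsonr(a, b)[0])
    return np.array(vals)

before_lunch = condition_correlations(coupling=0.5)   # assumed levels
after_lunch = condition_correlations(coupling=0.7)
t_stat, p_val = ttest_ind(before_lunch, after_lunch)
print(f"t = {t_stat:.2f}, p = {p_val:.4g}")
```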
Abstract:
Diffusion, a measure of dynamics, and entropy, a measure of disorder in the system, are found to be intimately correlated in many systems, and the correlation is often strongly non-linear. We explore the origin of this complex dependence by studying diffusion of a point Brownian particle on a model potential energy surface characterized by ruggedness. If we assume that the ruggedness has a Gaussian distribution, then for this model, one can obtain the excess entropy exactly for any dimension. By using the expression for the mean first passage time, we present a statistical mechanical derivation of the well-known and well-tested scaling relation proposed by Rosenfeld between diffusion and excess entropy. In anticipation that Rosenfeld diffusion-entropy scaling (RDES) relation may continue to be valid in higher dimensions (where the mean first passage time approach is not available), we carry out an effective medium approximation (EMA) based analysis of the effective transition rate and hence of the effective diffusion coefficient. We show that the EMA expression can be used to derive the RDES scaling relation for any dimension higher than unity. However, RDES is shown to break down in the presence of spatial correlation among the energy landscape values. (C) 2015 AIP Publishing LLC.
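For context, a hedged restatement of how an exponential diffusion-entropy relation of the Rosenfeld form arises for Gaussian ruggedness, written with the standard symbols (D_0, epsilon, S_ex) of the rugged-landscape literature rather than necessarily the paper's exact notation or coefficients.

```latex
% Effective diffusion on a landscape with Gaussian ruggedness of width \epsilon
% (Zwanzig's classic result), and the excess entropy of the same landscape:
D_{\mathrm{eff}} = D_0 \exp\!\left[-\left(\frac{\epsilon}{k_B T}\right)^{2}\right],
\qquad
\frac{S_{\mathrm{ex}}}{k_B} = -\frac{1}{2}\left(\frac{\epsilon}{k_B T}\right)^{2}.
% Eliminating (\epsilon / k_B T)^2 gives a Rosenfeld-type exponential scaling,
% D_{\mathrm{eff}} = D_0 \exp\!\left(2\, S_{\mathrm{ex}} / k_B\right).
```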