983 results for "simple timing task"
Abstract:
A period timing device suitable for processing laser Doppler anemometer signals has been described here. The important features of this instrument are that it is inexpensive, simple to operate, and easy to fabricate. When the concentration of scattering particles is low, the Doppler signal takes the form of a burst, and the Doppler frequency is measured by timing the zero crossings of the signal. The presence of noise, however, calls for a validation criterion, and a 5–8 cycle comparison has been used in this instrument. The validation criterion requires the differential count between the 5 and 8 cycles to be multiplied by predetermined numbers that prescribe the accuracy of measurement. Choosing these numbers to be binary (powers of two) greatly simplifies the circuit design, since it permits the use of shift registers for multiplication. Validation accuracies of 1.6%, 3.2%, 6.3%, and 12.5% are possible with this device. The design presented here is for a 16-bit processor and uses TTL components. By substituting Schottky barrier TTLs, the clock frequency can be increased from about 10 to 30 MHz, extending the range of the instrument. Review of Scientific Instruments is copyrighted by The American Institute of Physics.
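The quoted accuracies correspond to power-of-two fractions (1.6% ≈ 1/64, 3.2% ≈ 1/32, 6.3% ≈ 1/16, 12.5% = 1/8), which is what makes shift-register multiplication possible. A minimal software sketch of such a 5–8 cycle validation check follows; the function name, the exact comparison, and the tolerance convention are illustrative assumptions, not the paper's circuit logic.

```python
def validate_burst(count5: int, count8: int, shift: int = 5) -> bool:
    """Illustrative 5-8 cycle burst validation (not the paper's circuit).

    count5, count8: clock counts accumulated over 5 and 8 signal cycles.
    shift: tolerance as a power-of-two fraction of the 8-cycle count;
    shift=6 -> ~1.6%, shift=5 -> ~3.2%, shift=4 -> ~6.3%, shift=3 -> 12.5%.
    """
    # If the burst frequency is constant, 8 * count5 == 5 * count8.
    # Multiplications by 8 and 5 reduce to shifts and an add, mirroring
    # how shift registers replace multipliers in the hardware.
    lhs = count5 << 3                 # 8 * count5
    rhs = (count8 << 2) + count8      # 5 * count8
    tolerance = rhs >> shift          # power-of-two fraction of the count
    return abs(lhs - rhs) <= tolerance
```

A consistent burst (e.g. 1000 counts over 5 cycles, 1600 over 8) passes, while a noise-corrupted one with inconsistent per-cycle periods is rejected.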
Abstract:
We explored the brain's ability to quickly prevent a pre-potent but unwanted motor response. To address this, transcranial magnetic stimulation was delivered over the motor cortex (hand representation) to probe excitability changes immediately after somatosensory cues prompted subjects either to move as fast as possible or to withhold movement. Our results showed a difference in motor cortical excitability 90 ms post-stimulus, contingent on cues to either promote or prevent movement. We suggest that our study design, emphasizing response speed coupled with well-defined early probes, allowed us to extend similar past investigations into the timing of response inhibition.
Abstract:
Neurons generate spikes reliably with millisecond precision if driven by a fluctuating current; is it then possible to predict the spike timing knowing the input? We determined parameters of an adapting threshold model using data recorded in vitro from 24 layer 5 pyramidal neurons from rat somatosensory cortex, stimulated intracellularly by a fluctuating current simulating synaptic bombardment in vivo. The model generates output spikes whenever the membrane voltage (a filtered version of the input current) reaches a dynamic threshold. We find that for input currents with large fluctuation amplitude, up to 75% of the spike times can be predicted with a precision of ±2 ms. Some of the intrinsic neuronal unreliability can be accounted for by a noisy threshold mechanism. Our results suggest that, under random current injection into the soma, (i) neuronal behavior in the subthreshold regime can be well approximated by a simple linear filter; and (ii) most of the nonlinearities are captured by a simple threshold process.
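The filter-plus-dynamic-threshold structure described above can be sketched in a few lines. The parameter values, the reset rule, and the function name below are illustrative assumptions, not the values fitted to the recordings.

```python
import numpy as np

def predict_spikes(current, dt=0.1, tau_m=10.0, r_m=1.0,
                   theta0=1.0, d_theta=0.5, tau_theta=30.0):
    """Minimal adapting-threshold model (illustrative parameters).

    The membrane voltage is a leaky (low-pass) filter of the input
    current; a spike is emitted whenever it crosses a threshold that
    jumps by d_theta at each spike and relaxes back to theta0 with
    time constant tau_theta.
    """
    v, theta = 0.0, theta0
    spike_times = []
    for i, i_in in enumerate(current):
        v += dt / tau_m * (-v + r_m * i_in)          # leaky integration
        theta += dt / tau_theta * (theta0 - theta)   # threshold relaxation
        if v >= theta:
            spike_times.append(i * dt)
            theta += d_theta                         # spike-triggered adaptation
            v = 0.0                                  # reset (a simplification)
    return spike_times
```

Driving the model with a frozen fluctuating current and comparing the emitted spike times against recorded ones, within a ±2 ms window, is the kind of prediction test the abstract describes.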
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have been mainly attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task that was designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component in working memory? Trends in Cognitive Sciences, 4, 417–423]. Of most importance, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Crucially, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R., Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially if the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme groups comparison across the two studies.
The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.
Abstract:
This paper describes an initiative in the Faculty of Health at the Queensland University of Technology, Australia, where a short writing task was introduced to first year undergraduates in four courses: Public Health, Nursing, Social Work and Human Services, and Human Movement Studies. Over 1,000 students were involved in the trial. The task was assessed using an adaptation of the MASUS Procedure (Measuring the Academic Skills of University Students) (Webb & Bonanno, 1994). Feedback to students, including their MASUS scores, was then used to direct them to developmental workshops targeting their academic literacy needs. Students who scored below the benchmark were required to attend academic writing workshops in order to obtain the same summative 10% awarded to those who scored above the benchmark. The trial was very informative in terms of determining task appropriateness and timing, student feedback, student use of support, and student perceptions of the task and follow-up workshops. What we learned from the trial is presented with a view to further refinement of this initiative.
Abstract:
Designing practical rules for controlling invasive species is a challenging task for managers, particularly when species are long-lived and have complex life cycles and high dispersal capacities. Previous findings derived from plant matrix population analyses suggest that effective control of long-lived invaders may be achieved by focusing on killing adult plants. However, the cost-effectiveness of managing different life stages has not been evaluated. We illustrate the benefits of integrating matrix population models with decision theory to undertake this evaluation, using empirical data from the largest infestation of mesquite (Leguminosae: Prosopis spp.) within Australia. We include in our model the mesquite life cycle, different dispersal rates and control actions that target individuals at different life stages with varying costs, depending on the intensity of control effort. We then use stochastic dynamic programming to derive cost-effective control strategies that minimize the cost of controlling the core infestation locally below a density threshold and the future cost of control arising from infestation of adjacent areas via seed dispersal. Through sensitivity analysis, we show that four robust management rules guide the allocation of resources between mesquite life stages for this infestation: (i) when there is no seed dispersal, no action is required until the density of adults exceeds the control threshold, and then only control of adults is needed; (ii) when there is seed dispersal, the control strategy depends only on knowledge of the density of adults and large juveniles (LJ) and on broad categories of dispersal rates; (iii) if the density of adults is higher than the density of LJ, controlling adults is most cost-effective; (iv) alternatively, if the density of LJ is equal to or higher than the density of adults, management efforts should be spread between adults, large and to a lesser extent small juveniles, but never saplings.
Synthesis and applications. In this study, we show that simple rules can be found for managing invasive plants with complex life cycles and high dispersal rates when population models are combined with decision theory. In the case of our mesquite population, focussing effort on controlling adults is not always the most cost-effective way to meet our management objective.
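The core of the matrix-population approach mentioned above is comparing the population growth rate (the dominant eigenvalue of a stage-transition matrix) before and after control of a given stage. The sketch below uses a hypothetical four-stage matrix; the entries are illustrative, not the mesquite parameters from the study.

```python
import numpy as np

# Hypothetical 4-stage (seedling, sapling, juvenile, adult) Lefkovitch
# matrix -- entries are illustrative only, not the mesquite estimates.
A = np.array([
    [0.10, 0.00, 0.00, 5.00],   # adult fecundity in the last column
    [0.20, 0.60, 0.00, 0.00],   # growth / stasis transitions
    [0.00, 0.25, 0.70, 0.00],
    [0.00, 0.00, 0.15, 0.95],
])

def growth_rate(matrix, stage=None, kill=0.0):
    """Dominant eigenvalue (lambda) after killing a fraction of one stage."""
    m = matrix.copy()
    if stage is not None:
        m[:, stage] *= (1.0 - kill)   # only survivors of that stage contribute
    return max(abs(np.linalg.eigvals(m)))

# Compare controlling adults (stage 3) against saplings (stage 1).
base = growth_rate(A)
adults = growth_rate(A, stage=3, kill=0.5)
saplings = growth_rate(A, stage=1, kill=0.5)
```

With these illustrative numbers, killing adults depresses lambda far more than killing saplings, echoing the intuition from earlier matrix analyses; the paper's contribution is showing, via stochastic dynamic programming, when cost considerations overturn that intuition.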
Abstract:
The INEX workshop is concerned with evaluating the effectiveness of XML retrieval systems. In 2004 a natural language query task was added to the INEX Ad hoc track. Standard INEX Ad hoc topic titles are specified in NEXI -- a simplified and restricted subset of XPath, with a similar feel, and yet with a distinct IR flavour and interpretation. The syntax of NEXI is rigid and it imposes some limitations on the kind of information need that it can faithfully capture. At INEX 2004 the NLP question to be answered was simple -- is it practical to use a natural language query that is the equivalent of the formal NEXI title? The results of this experiment are reported and some information on the future direction of the NLP task is presented.
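For readers unfamiliar with NEXI, a topic title and a rough natural language equivalent look like the following; this is an illustrative pair, not an actual INEX 2004 topic.

```
NEXI:    //article[about(., XML information retrieval)]//sec[about(., evaluation)]
Natural: "Find sections discussing evaluation in articles about XML information retrieval."
```

The `about()` predicate is what gives NEXI its IR interpretation: unlike a strict XPath filter, it asks for elements relevant to a topic rather than elements matching an exact condition.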
Abstract:
Two experimental studies were conducted to examine whether the stress-buffering effects of behavioral control on work task responses varied as a function of procedural information. Study 1 manipulated low and high levels of task demands, behavioral control, and procedural information for 128 introductory psychology students completing an in-basket activity. ANOVA procedures revealed a significant three-way interaction among these variables in the prediction of subjective task performance and task satisfaction. It was found that procedural information buffered the negative effects of task demands on ratings of performance and satisfaction only under conditions of low behavioral control. This pattern of results suggests that procedural information may have a compensatory effect when the work environment is characterized by a combination of high task demands and low behavioral control. Study 2 (N=256) utilized simple and complex versions of the in-basket activity to examine the extent to which the interactive relationship among task demands, behavioral control, and procedural information varied as a function of task complexity. There was further support for the stress-buffering role of procedural information on work task responses under conditions of low behavioral control. This effect was, however, only present when the in-basket activity was characterized by high task complexity, suggesting that the interactive relationship among these variables may depend on the type of tasks performed at work.
Abstract:
One of the key problems in the design of any incompletely connected multiprocessor system is to appropriately assign the set of tasks in a program to the Processing Elements (PEs) in the system. The task assignment problem has proven difficult both in theory and in practice. This paper presents a simple and efficient heuristic algorithm for assigning program tasks with precedence and communication constraints to the PEs in a Message-based Multiple-bus Multiprocessor System, M3, so that the total execution time for the program is minimized. The algorithm uses a cost function: “Minimum Distance and Parallel Transfer” to minimize the completion time. The effectiveness of the algorithm has been demonstrated by comparing the results with (i) the lower bound on the execution time of a program (task) graph and (ii) a random assignment.
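The flavour of such a heuristic can be sketched as generic greedy list scheduling with a per-dependency communication cost; this is a minimal sketch of the general technique, not the paper's "Minimum Distance and Parallel Transfer" cost function, and all names are assumptions.

```python
def list_schedule(tasks, deps, cost, comm, n_pes):
    """Greedy list scheduling of a task graph onto n_pes processors.

    tasks: task ids in topological order; deps: task -> predecessor list;
    cost: task -> execution time; comm: transfer time paid per dependency
    whose producer sits on a different PE.  A generic sketch only.
    """
    pe_free = [0.0] * n_pes           # time at which each PE becomes free
    finish, placed = {}, {}
    for t in tasks:
        best = None
        for pe in range(n_pes):
            # data from predecessors on other PEs incurs a transfer delay
            ready = max([0.0] + [finish[p] + (comm if placed[p] != pe else 0.0)
                                 for p in deps.get(t, [])])
            start = max(ready, pe_free[pe])
            if best is None or start + cost[t] < best[0]:
                best = (start + cost[t], pe, start)
        end, pe, _ = best
        pe_free[pe] = end
        finish[t], placed[t] = end, pe
    return finish, placed
```

On a simple chain a → b with a large communication cost, the heuristic keeps both tasks on one PE, which is the trade-off (parallelism versus transfer delay) that such cost functions encode.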
Abstract:
We consider the problem of devising incentive strategies for viral marketing of a product. In particular, we assume that the seller can influence penetration of the product by offering two incentive programs: a) direct incentives to potential buyers (influence) and b) referral rewards for customers who influence potential buyers to make the purchase (exploit connections). The problem is to determine the optimal timing of these programs over a finite time horizon. In contrast to the algorithmic perspective popular in the literature, we take a mean-field approach and formulate the problem as a continuous-time deterministic optimal control problem. We show that the optimal strategy for the seller has a simple structure and can take both forms, namely, influence-and-exploit and exploit-and-influence. We also show that in some cases it may be optimal for the seller to deploy incentive programs mostly for low degree nodes. We support our theoretical results through numerical studies and provide practical insights by analyzing various scenarios.
Abstract:
Multilevel inverters with dodecagonal (12-sided polygon) voltage space vector (SV) structures have advantages such as extension of the linear modulation range; elimination of fifth and seventh harmonics in phase voltages and currents for the full modulation range, including extreme 12-step operation; reduced device voltage ratings; lower dv/dt stresses on devices and motor phase windings, resulting in fewer EMI/EMC problems; and lower switching frequency, making them more suitable for high-power drive applications. This paper proposes a simple method to obtain pulsewidth modulation (PWM) timings for a dodecagonal voltage SV structure using only sampled reference voltages. In addition, a carrier-based method for obtaining the PWM timings for a general N-level dodecagonal structure is proposed in this paper for the first time. The algorithm outputs the triangle information and the PWM timing values, which can be set as the compare values for any carrier-based hardware PWM module to obtain SV PWM-like switching sequences. The proposed method eliminates the need for angle estimation, computation of modulation indices, and iterative search algorithms that are typical in multilevel dodecagonal SV systems. The proposed PWM scheme was implemented on a five-level dodecagonal SV structure. Exhaustive simulation and experimental results for steady-state and transient conditions are presented to validate the proposed method.
Abstract:
During the last two decades, analysis of 1/f noise in cognitive science has led to considerable progress in the way we understand the organization of our mental life. However, there is still a lack of specific models explaining how 1/f noise is generated in coupled brain-body-environment systems, since existing models and experiments typically target either externally observable behaviour or isolated neuronal systems but do not address the interplay between neuronal mechanisms and sensorimotor dynamics. We present a conceptual model of a minimal neurorobotic agent solving a behavioural task that makes it possible to relate mechanistic (neurodynamic) and behavioural levels of description. The model consists of a simulated robot controlled by a network of Kuramoto oscillators with homeostatic plasticity and the ability to develop behavioural preferences mediated by sensorimotor patterns. With only three oscillators, this simple model displays self-organized criticality in the form of robust 1/f noise and a wide multifractal spectrum. We show that the emergence of self-organized criticality and 1/f noise in our model is the result of three simultaneous conditions: a) non-linear interaction dynamics capable of generating stable collective patterns, b) internal plastic mechanisms modulating the sensorimotor flows, and c) strong sensorimotor coupling with the environment that induces transient metastable neurodynamic regimes. We carry out a number of experiments to show that both synaptic plasticity and strong sensorimotor coupling play a necessary role, as constituents of self-organized criticality, in the generation of 1/f noise. The experiments also proved useful for testing the robustness of 1/f scaling by comparing the results of different techniques.
We finally discuss the role of conceptual models as mediators between nomothetic and mechanistic models, and how they can inform future experimental research in which self-organized criticality includes sensorimotor coupling among the essential interaction-dominant processes giving rise to 1/f noise.
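The Kuramoto network at the heart of the model can be sketched in a few lines. The natural frequencies and coupling strength below are illustrative assumptions, and the homeostatic plasticity and sensorimotor coupling of the full agent are omitted; this shows only the basic oscillator dynamics.

```python
import numpy as np

def kuramoto_step(phases, omega, k, dt=0.01):
    """One Euler step of a small Kuramoto network.

    d(phi_i)/dt = omega_i + (K/N) * sum_j sin(phi_j - phi_i)
    """
    n = len(phases)
    coupling = np.array([np.sin(phases - p).sum() for p in phases])
    return phases + dt * (omega + (k / n) * coupling)

# Three oscillators, as in the model described above; frequencies and
# coupling strength are illustrative placeholders.
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 2.0 * np.pi, 3)
omega = np.array([1.0, 1.2, 0.8])
for _ in range(5000):
    phases = kuramoto_step(phases, omega, k=2.0)
```

With coupling strong relative to the frequency spread, the three oscillators phase-lock (a high Kuramoto order parameter); the full model's plasticity and environmental coupling are what push such a network into the metastable regimes that produce 1/f scaling.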
Abstract:
This report argues for greatly increased resources in terms of data collection facilities and staff to collect, process, and analyze the data, and to communicate the results, in order for NMFS to fulfill its mandate to conserve and manage marine resources. In fact, the authors of this report had great difficulty defining the "ideal" situation to which fisheries stock assessments and management should aspire. One of the primary objectives of fisheries management is to develop sustainable harvest policies that minimize the risks of overfishing both target species and associated species. This can be achieved in a wide spectrum of ways, ranging between the following two extremes. The first is to implement only simple management measures with correspondingly simple assessment demands, which will usually mean setting fishing mortality targets at relatively low levels in order to reduce the risk of unknowingly overfishing or driving ecosystems towards undesirable system states. The second is to expand existing data collection and analysis programs to provide an adequate knowledge base that can support higher fishing mortality targets while still ensuring low risk to target and associated species and ecosystems. However, defining "adequate" is difficult, especially when scientists have not even identified all marine species, and information on catches, abundances, and life histories of many target species, and most associated species, is sparse. Increasing calls from the public, stakeholders, and the scientific community to implement ecosystem-based stock assessment and management make it even more difficult to define "adequate," especially when "ecosystem-based management" is itself not well-defined. 
In attempting to describe the data collection and assessment needs for the latter, the authors took a pragmatic approach, rather than trying to estimate the resources required to develop a knowledge base about the fine-scale detailed distributions, abundances, and associations of all marine species. Thus, the specified resource requirements will not meet the expectations of some stakeholders. In addition, the Stock Assessment Improvement Plan is designed to be complementary to other related plans, and therefore does not duplicate the resource requirements detailed in those plans, except as otherwise noted.
Abstract:
3D Direct Numerical Simulations (DNS) of autoignition in turbulent non-premixed flows between fuel and hotter air have been carried out using both 1-step and complex chemistry, the latter a 22-species n-heptane mechanism, to investigate spontaneous ignition timing and location. The simple chemistry results showed that the previous findings from 2D DNS, namely that ignition occurs at the most reactive mixture fraction (ξMR) and at small values of the conditional scalar dissipation rate (N|ξMR), also hold for 3D turbulent mixing fields. Performing the same simulation many times with different realizations of the initial velocity field resulted in a very narrow statistical distribution of ignition delay time, consistent with a previous conjecture that the first appearance of ignition is correlated with the low-N content of the conditional probability density function of N. The simulations with complex chemistry for conditions outside the Negative Temperature Coefficient (NTC) regime show behaviour similar to the single-step chemistry simulations. However, in the NTC regime, the most reactive mixture fraction is very rich and ignition seems to occur at high values of scalar dissipation. Copyright © 2006 by ASME.