948 results for Time separation of events
Abstract:
The catalytic properties of enzymes are usually evaluated by measuring and analyzing reaction rates. However, analyzing the complete time course can be advantageous because it contains additional information about the properties of the enzyme. Moreover, for systems that are not at steady state, the analysis of time courses is the preferred method. One of the major barriers to the wide application of time courses is that it may be computationally more difficult to extract information from these experiments. Here the basic approach to analyzing time courses is described, together with some examples of the essential computer code to implement these analyses. A general method that can be applied to both steady state and non-steady-state systems is recommended. (C) 2001 Academic Press.
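The paper's own example code is not reproduced in this abstract, but the integrate-and-fit approach it describes can be sketched in a few lines of Python; everything below (the Michaelis-Menten rate law, the parameter values, and the simulated progress-curve data) is an illustrative assumption, not material from the paper:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def progress_curve(t, s0, vmax, km):
        # Substrate concentration over time, from dS/dt = -Vmax*S/(Km + S)
        sol = solve_ivp(lambda _, s: -vmax * s / (km + s),
                        (0, t[-1]), [s0], t_eval=t)
        return sol.y[0]

    def residuals(params, t, s_obs, s0):
        vmax, km = params
        return progress_curve(t, s0, vmax, km) - s_obs

    t = np.linspace(0, 30, 31)                # sampling times (min), hypothetical
    s0 = 1.0                                  # initial substrate (mM), hypothetical
    s_obs = progress_curve(t, s0, 0.1, 0.5)   # stand-in for measured data
    s_obs = s_obs + np.random.default_rng(0).normal(0, 0.005, t.size)

    fit = least_squares(residuals, x0=[0.05, 1.0], args=(t, s_obs, s0))
    print("Vmax, Km estimates:", fit.x)

The same structure carries over to non-steady-state mechanisms by swapping in the appropriate system of rate equations for the single ODE used here.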
Abstract:
Background. Although digital and videotaped images are known to be comparable for the evaluation of left ventricular function, their relative accuracy for assessment of more complex anatomy is unclear. We sought to compare reading time, storage costs, and concordance of video and digital interpretations across multiple observers and sites. Methods. One hundred one patients with valvular (90 mitral, 48 aortic, 80 tricuspid) disease were selected prospectively, and studies were stored according to video and standardized digital protocols. The same reviewer interpreted video and digital images independently and at different times with the use of a standard report form to evaluate 40 items (eg, severity of stenosis or regurgitation, leaflet thickening, and calcification) as normal or mildly, moderately, or severely abnormal. Concordance between modalities was expressed as kappa. Major discordance (difference of >1 level of severity) was ascribed to the modality that gave the lesser severity. CD-ROM was used to store digital data (20:1 lossy compression), and super-VHS videotape was used to store video data. The reading time and storage costs for each modality were compared. Results. Measured parameters were highly concordant (ejection fraction was 52% ± 13% by both). Major discordance was rare, and lesser values were reported with digital rather than video interpretation in the categories of aortic and mitral valve thickening (1% to 2%) and severity of mitral regurgitation (2%). Digital reading time was 6.8 ± 2.4 minutes, 38% shorter than with video (11.0 ± 3.0 minutes, range 8 to 22 minutes; P < .001). Compressed digital studies had an average size of 60 ± 14 megabytes (range 26 to 96 megabytes). Storage cost for video was A$0.62 per patient (18 studies per tape, total cost A$11.20), compared with A$0.31 per patient for digital storage (8 studies per CD-ROM, total cost A$2.50). Conclusion. Digital and video interpretation were highly concordant; in the few cases of major discordance, the digital scores were lower, perhaps reflecting undersampling. Use of additional views and longer clips may be indicated to minimize discordance with video in patients with complex problems. Digital interpretation offers a significant reduction in reading times and the cost of archiving.
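For readers unfamiliar with the statistic, concordance "expressed as kappa" can be computed as in this minimal sketch; the readings below are invented, not data from the study:

    import numpy as np

    def cohens_kappa(a, b, levels):
        a, b = np.asarray(a), np.asarray(b)
        po = np.mean(a == b)                                         # observed agreement
        pe = sum(np.mean(a == l) * np.mean(b == l) for l in levels)  # chance agreement
        return (po - pe) / (1 - pe)

    levels = ["normal", "mild", "moderate", "severe"]
    video = ["mild", "mild", "moderate", "normal", "severe", "mild"]
    digital = ["mild", "normal", "moderate", "normal", "severe", "mild"]
    print(cohens_kappa(video, digital, levels))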
Abstract:
Animal-based theories of Pavlovian conditioning propose that patterning discriminations are solved using unique cues or immediate configuring. Recent studies with humans, however, provided evidence that in positive and negative patterning two different rules are utilized. The present experiment was designed to provide further support for this proposal by tracking the time course of the allocation of cognitive resources. One group was trained in a positive patterning schedule (A-, B-, AB+) and a second in a negative patterning schedule (A+, B+, AB-). Electrodermal responses and secondary task probe reaction time were measured. In negative patterning, reaction times were slower during reinforced stimuli than during non-reinforced stimuli at both probe positions, while there were no differences in positive patterning. These results support the assumption that negative patterning is solved using a rule that is more complex and requires more resources than does the rule employed to solve positive patterning. (C) 2001 Elsevier Science (USA).
Abstract:
A combined procedure for separating Lu, Hf, Sm, Nd, and rare earth elements (REEs) from a single sample digest is presented. The procedure consists of the following five steps: (1) sample dissolution via sodium peroxide sintering; (2) separation of the high field strength elements from the REEs and other matrix elements by a HF-free anion-exchange column procedure; (3) purification of Hf on a cation-exchange resin; (4) separation of REEs from other matrix elements by cation exchange; (5) Lu, Sm, and Nd separation from the other REEs by reversed-phase ion chromatography. Analytical reproducibilities of Sm-Nd and Lu-Hf isotope systematics are demonstrated for standard solutions and international rock reference materials. Results show overall good reproducibilities for Sm-Nd systematics, independent of the rock type analyzed. For the Lu-Hf systematics, the reproducibility of the parent/daughter ratio is much better for JB-1 (basalt) than for two analyzed felsic crustal rocks (DR-N and an Archaean granitoid). It is demonstrated that the poorer reproducibility of the Lu/Hf ratio is genuinely caused by sample heterogeneity, so the results remain geologically reasonable.
Abstract:
The population growth of a Staphylococcus aureus culture, an active colloidal system of spherical cells, was followed by rheological measurements, under steady-state and oscillatory shear flows. We observed a rich viscoelastic behavior as a consequence of the bacteria activity, namely, of their multiplication and density-dependent aggregation properties. In the early stages of growth (lag and exponential phases), the viscosity increases by about a factor of 20, presenting several drops and full recoveries. This behavior suggests the existence of a percolation phenomenon. Remarkably, as the bacteria reach their late phase of development, in which the population stabilizes, the viscosity returns close to its initial value. Most probably, this is caused by a change in the bacteria's physiological activity, in particular by a decrease in their adhesion properties. The viscous and elastic moduli exhibit power-law behaviors compatible with the "soft glassy materials" model, whose exponents are dependent on the bacteria growth stage. DOI: 10.1103/PhysRevE.87.030701.
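As an illustration of the final claim, power-law exponents of the moduli are typically extracted with a log-log linear fit; the frequencies and moduli below are invented stand-ins, not the paper's measurements:

    import numpy as np

    omega = np.logspace(-1, 1, 20)     # angular frequency (rad/s), hypothetical
    G_p = 3.0 * omega**0.15            # stand-in elastic modulus G'(omega)
    G_pp = 1.0 * omega**0.30           # stand-in viscous modulus G''(omega)

    # Slope of log G vs log omega gives the power-law exponent.
    a = np.polyfit(np.log(omega), np.log(G_p), 1)[0]
    b = np.polyfit(np.log(omega), np.log(G_pp), 1)[0]
    print(f"elastic exponent ~ {a:.2f}, viscous exponent ~ {b:.2f}")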
Abstract:
A recent trend in distributed computer-controlled systems (DCCS) is to interconnect the distributed computing elements by means of multi-point broadcast networks. Since the network medium is shared between a number of network nodes, access contention exists and must be solved by a medium access control (MAC) protocol. Usually, DCCS impose real-time constraints. In essence, by real-time constraints we mean that traffic must be sent and received within a bounded interval, otherwise a timing fault is said to occur. This motivates the use of communication networks with a MAC protocol that guarantees bounded access and response times to message requests. PROFIBUS is a communication network in which the MAC protocol is based on a simplified version of the timed-token protocol. In this paper we address the cycle time properties of the PROFIBUS MAC protocol, since the knowledge of these properties is of paramount importance for guaranteeing the real-time behaviour of a distributed computer-controlled system which is supported by this type of network.
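As a rough illustration of the simplified timed-token rule this analysis builds on, the toy model below (single segment, round-robin token passing, fixed message lengths, token-pass overhead ignored, all parameters hypothetical) mimics how a station's token holding budget depends on the rotation time it measures:

    T_TR = 10.0            # target token rotation time (ms), hypothetical
    C_HP, C_LP = 1.0, 2.0  # high-/low-priority message times (ms), hypothetical
    N = 3                  # number of master stations

    last_visit = [0.0] * N
    now = 0.0
    for visit in range(12):
        st = visit % N
        t_rr = now - last_visit[st]   # real (measured) token rotation time
        last_visit[st] = now
        t_th = T_TR - t_rr            # token holding budget for this visit
        now += C_HP                   # one high-priority message is always allowed
        t_th -= C_HP
        while t_th > 0:               # low-priority traffic only with leftover budget
            now += C_LP
            t_th -= C_LP
    print("time to complete 12 token visits:", now, "ms")

Bounding the worst-case value of t_rr across all stations is exactly the kind of cycle time property the paper analyzes.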
Abstract:
Controller Area Network (CAN) is a fieldbus network suitable for small-scale Distributed Computer Controlled Systems, being appropriate for transferring short real-time messages. Nevertheless, it must be understood that the continuity of service is not fully guaranteed, since it may be disturbed by temporary periods of network inaccessibility [1]. In this paper, such temporary periods of network inaccessibility are integrated in the response time analysis of CAN networks. The achieved results emphasise that, in the presence of temporary periods of network inaccessibility, a CAN network is not able to provide different integrity levels to the supported applications, since errors in low priority messages interfere with the response time of higher priority message streams.
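For context, the classical fixed-priority response-time recurrence used for CAN, into which temporary inaccessibility can be folded as extra interference, looks roughly like the sketch below; the inacc hook, the message set, and the bit time are illustrative assumptions, not the paper's exact model:

    import math

    def response_time(msgs, i, tau_bit, inacc=lambda w: 0.0):
        # msgs: list of (C, T, P) with transmission time C, period T,
        # priority P (lower value = higher priority).
        C_i, T_i, P_i = msgs[i]
        hp = [(C, T) for (C, T, P) in msgs if P < P_i]
        # Non-preemptive blocking: one lower-priority frame already on the bus.
        B = max([C for (C, T, P) in msgs if P > P_i], default=0.0)
        w = B
        while True:
            w_next = B + inacc(w) + sum(math.ceil((w + tau_bit) / T) * C
                                        for (C, T) in hp)
            if w_next == w:
                return w + C_i
            if w_next + C_i > T_i:
                return float("inf")   # treated as unschedulable in this toy check
            w = w_next

    # Hypothetical 125 kbit/s bus: ~1.08 ms per 8-byte frame, 8 us bit time.
    msgs = [(1.08, 10.0, 1), (1.08, 20.0, 2), (1.08, 50.0, 3)]
    for i in range(len(msgs)):
        print(f"R_{i} = {response_time(msgs, i, 0.008):.2f} ms")

With a nonzero inacc term, interference hits every message regardless of priority, which is the effect behind the abstract's integrity-level observation.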
Abstract:
WiDom is a wireless prioritized medium access control protocol which offers a very large number of priority levels. Hence, it brings the potential to employ non-preemptive static-priority scheduling and schedulability analysis for a wireless channel, assuming that the overhead of WiDom is modeled properly. Recent research has created a new version of WiDom (we call it slotted WiDom) which offers lower overhead compared to the previous version. In this paper we propose a new schedulability analysis for slotted WiDom and extend it to work for message streams with release jitter. Furthermore, to provide an accurate timing analysis, we must include the effect of transmission faults on message latencies. Thus, in the proposed analysis we consider the existence of different noise sources and develop the analysis for the case where messages are transmitted over noisy wireless channels. Evaluation of the proposed analysis is done by testing slotted WiDom in two different modes on a real test-bed. The results from the experiments provide a firm validation of our findings.
Abstract:
This paper proposes a global multiprocessor scheduling algorithm for the Linux kernel that combines the global EDF scheduler with a priority-aware work-stealing load balancing scheme, enabling parallel real-time tasks to be executed on more than one processor at a given time instant. We argue that some priority inversion may actually be acceptable, provided it helps reduce contention, communication, synchronisation and coordination between parallel threads, while still guaranteeing the expected system predictability. Experimental results demonstrate the low scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
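A toy user-space sketch of priority-aware stealing (purely illustrative; the paper's scheduler lives inside the kernel and its exact stealing policy may differ): an idle worker inspects the other workers' ready queues and steals the most urgent task it can find.

    import heapq

    class Worker:
        def __init__(self):
            self.ready = []            # min-heap of (deadline, task) pairs

        def push(self, deadline, task):
            heapq.heappush(self.ready, (deadline, task))

        def pop_local(self):
            return heapq.heappop(self.ready) if self.ready else None

    def steal(workers, thief):
        # Pick the victim whose most urgent task has the earliest deadline.
        victims = [w for w in workers if w is not thief and w.ready]
        if not victims:
            return None
        victim = min(victims, key=lambda w: w.ready[0][0])
        return heapq.heappop(victim.ready)

    workers = [Worker() for _ in range(3)]
    workers[0].push(10, "t1"); workers[1].push(5, "t2")
    print(steal(workers, workers[2]))  # steals the earliest-deadline task, "t2"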
Abstract:
The mainline Linux kernel is not designed for hard real-time systems; it only fits the requirements of soft real-time systems. In recent years, a kernel developer community has been working on the PREEMPT-RT patch. This patch (which aims to make the kernel fully preemptible) adds some real-time capabilities to the Linux kernel. However, in terms of scheduling policies, the real-time scheduling class of Linux is limited to the First-In-First-Out (SCHED_FIFO) and Round-Robin (SCHED_RR) policies, which are quite limited in terms of real-time performance. Therefore, in this paper, we report one important contribution for adding more advanced real-time capabilities to the Linux kernel. Specifically, we describe modifications to the (PREEMPT-RT patched) Linux kernel to support real-time slot-based task-splitting scheduling algorithms. Our preliminary evaluation shows that our implementation exhibits real-time performance superior to the scheduling policies provided by the current version of PREEMPT-RT. This is a significant add-on to a widely adopted operating system.
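For reference, the two stock real-time policies mentioned above can be requested from user space as in this minimal sketch (Linux only; requires root or CAP_SYS_NICE; the priority value 50 is an arbitrary example):

    import os

    pid = 0  # 0 means the calling process
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(50))
    print("policy:", os.sched_getscheduler(pid))   # SCHED_FIFO
    os.sched_setscheduler(pid, os.SCHED_RR, os.sched_param(50))
    print("policy:", os.sched_getscheduler(pid))   # SCHED_RR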
Abstract:
The current industry trend is towards using Commercially available Off-The-Shelf (COTS) based multicores for developing real-time embedded systems, as opposed to the usage of custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which increases the response time of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time for a set of tasks. Although the required parameters to obtain the request profile can be obtained by static analysis, we provide an alternative method to experimentally obtain them by using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that our approach outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
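The gist of such response-time inflation can be caricatured as below; this is a deliberately crude stall model with invented numbers, not the paper's analysis, which models per-task request profiles and arbitration behavior in far more detail:

    def inflated_wcet(wcet_isolation, max_bus_requests, bus_service_time, cores):
        # Pessimistic assumption: each of the task's bus requests (e.g., cache
        # misses counted via PMCs) may wait behind one outstanding request from
        # every other core before being served.
        contention_delay = max_bus_requests * (cores - 1) * bus_service_time
        return wcet_isolation + contention_delay

    # All values in CPU cycles, purely hypothetical.
    print(inflated_wcet(wcet_isolation=5_000_000, max_bus_requests=20_000,
                        bus_service_time=40, cores=4))

A tighter bound on max_bus_requests directly tightens the computed response time, which is why the abstract emphasizes outperforming the existing approach on that count.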
Abstract:
Compositional real-time scheduling clearly requires that "normal" real-time scheduling challenges are addressed but challenges intrinsic to compositionality must be addressed as well, in particular: (i) how should interfaces be described? and (ii) how should numerical values be assigned to parameters constituting the interfaces? The real-time systems community has traditionally used narrow interfaces for describing a component (for example, a utilization/bandwidth-like metric and the distribution of this bandwidth in time). In this paper, we introduce the concept of competitive ratio of an interface and show that typical narrow interfaces cause poor performance for scheduling constrained-deadline sporadic tasks (the competitive ratio is infinite). Therefore, we explore more expressive interfaces; in particular a class called medium-wide interfaces. For this class, we propose an interface type and show how the parameters of the interface should be selected. We also prove that this interface is 8-competitive.
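For readers unfamiliar with the term, the competitive ratio of an interface is usually defined along these lines (the notation here is assumed for illustration, not taken from the paper): it is the worst case, over all task sets tau, of the ratio between the processing capacity an interface-based scheduler needs to schedule tau and the capacity an optimal scheduler needs,

    CR = sup_tau [ capacity_interface(tau) / capacity_optimal(tau) ],

so an "8-competitive" interface never requires more than eight times the optimal capacity, while a narrow interface with an infinite competitive ratio can be arbitrarily wasteful.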
Abstract:
Real-time scheduling usually considers worst-case values for the parameters of task (or message stream) sets, in order to provide safe schedulability tests for hard real-time systems. However, worst-case conditions introduce a level of pessimism that is often inadequate for a certain class of (soft) real-time systems. In this paper we provide an approach for computing the stochastic response time of tasks whose inter-arrival times are described by discrete probability distribution functions, instead of minimum inter-arrival time (MIT) values.
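The core operation behind such stochastic analyses is the convolution of discrete probability mass functions, as in this minimal sketch (the inter-arrival distribution below is invented, and real analyses convolve demand and backlog distributions in a similar way):

    import numpy as np

    def convolve_pmf(p, q):
        # PMFs indexed by integer time units; returns the PMF of the sum.
        return np.convolve(p, q)

    # Hypothetical inter-arrival PMF: 3 ticks w.p. 0.7, 5 ticks w.p. 0.3.
    ia = np.zeros(6)
    ia[3], ia[5] = 0.7, 0.3

    two_arrivals = convolve_pmf(ia, ia)   # time to see two consecutive arrivals
    for t, p in enumerate(two_arrivals):
        if p > 0:
            print(f"two arrivals complete at t={t} with probability {p:.2f}")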
Abstract:
Among the most important measures to prevent wild forest fires is the use of prescribed and controlled burning actions in order to reduce the availability of fuel mass. However, the impact of these activities on soil physical and chemical properties varies according to the type of both soil and vegetation and is not fully understood. Therefore, soil monitoring campaigns are often used to measure these impacts. In this paper we have successfully used three statistical data treatments (the Kolmogorov-Smirnov test followed by the ANOVA and Kruskal-Wallis tests) to investigate the variability of the soil pH, soil moisture, soil organic matter and soil iron variables for different monitoring times and sampling procedures.
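A minimal sketch of that three-step treatment using SciPy, run on invented soil-pH samples rather than the campaign data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical soil-pH samples from three monitoring campaigns.
    campaigns = [rng.normal(5.5 + d, 0.3, size=30) for d in (0.0, 0.1, 0.4)]

    # 1) Kolmogorov-Smirnov test of each group against a fitted normal.
    #    (Strictly, estimating mean/std from the same data calls for a
    #    Lilliefors-type correction; this is only an illustration.)
    for i, x in enumerate(campaigns):
        print(f"KS campaign {i}:", stats.kstest(x, "norm", args=(x.mean(), x.std())))

    # 2) One-way ANOVA across campaigns (valid if normality holds).
    print("ANOVA:", stats.f_oneway(*campaigns))

    # 3) Kruskal-Wallis as the non-parametric alternative.
    print("Kruskal-Wallis:", stats.kruskal(*campaigns))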
Abstract:
Recent and future changes in power systems, mainly in the context of smart grid operation, are related to a high complexity of power network operation. This leads to more complex communications and to higher levels of monitoring and control of network elements, from both the network's and the consumers' standpoint. The present work focuses on a real scenario of the LASIE laboratory, located at the Polytechnic of Porto. Laboratory systems are managed by the SCADA House Intelligent Management (SHIM) system, previously developed by the authors on top of a SCADA system. SHIM's capabilities have recently been improved by including real-time simulation from Opal RT, which makes it possible to integrate Matlab®/Simulink® real-time simulation models. The main goal of the present paper is to compare the advantages of the resulting improved system in managing the energy consumption of a domestic consumer.