Abstract:
Objective: To systematically review the evidence examining effects of walking interventions on pain and self-reported function in individuals with chronic musculoskeletal pain.
Data Sources: Six electronic databases (Medline, CINAHL, PsycINFO, PEDro, SPORTDiscus and the Cochrane Central Register of Controlled Trials) were searched from January 1980 to March 2014.
Study Selection: Randomized and quasi-randomized controlled trials in adults with chronic low back pain, osteoarthritis or fibromyalgia comparing walking interventions to a non-exercise or non-walking exercise control group.
Data Extraction: Data were independently extracted using a standardized form. Methodological quality was assessed using the United States Preventive Services Task Force (USPSTF) system.
Data Synthesis: Twenty-six studies (2384 participants) were included, and suitable data from 17 were pooled for meta-analysis, with a random-effects model used to calculate between-group mean differences and 95% confidence intervals. Data were analyzed according to length of follow-up (short term: ≤8 weeks post-randomization; medium term: >2 months to ≤12 months; long term: >12 months). Interventions were associated with small to moderate improvements in pain at short-term (mean difference (MD) -5.31, 95% confidence interval (95% CI) -8.06 to -2.56) and medium-term follow-up (MD -7.92, 95% CI -12.37 to -3.48). Improvements in function were observed at short-term (MD -6.47, 95% CI -12.00 to -0.95), medium-term (MD -9.31, 95% CI -14.00 to -4.61) and long-term follow-up (MD -5.22, 95% CI -7.21 to -3.23).
Conclusions: Evidence of fair methodological quality suggests that walking is associated with significant improvements in outcome compared with control interventions, but longer-term effectiveness is uncertain. Using the USPSTF system, walking can be recommended as an effective form of exercise or activity for individuals with chronic musculoskeletal pain but should be supplemented with strategies aimed at maintaining participation. Further work is also required examining effects on important health-related outcomes in this population in robustly designed studies.
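To make the pooling step in the Data Synthesis concrete, the following Python sketch computes a DerSimonian-Laird random-effects pooled mean difference with a 95% confidence interval. The per-study mean differences and standard errors are hypothetical illustrations, not data from the reviewed trials.

    # Illustrative DerSimonian-Laird random-effects pooling (hypothetical data,
    # not the studies from this review).
    import numpy as np

    md = np.array([-4.0, -6.5, -3.2, -8.1])   # per-study mean differences (pain scale points)
    se = np.array([1.8, 2.4, 1.5, 3.0])       # per-study standard errors

    w_fixed = 1.0 / se**2                      # fixed-effect (inverse-variance) weights
    md_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)

    # Between-study heterogeneity (DerSimonian-Laird estimate of tau^2)
    q = np.sum(w_fixed * (md - md_fixed)**2)
    df = len(md) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)

    w_rand = 1.0 / (se**2 + tau2)              # random-effects weights
    md_pooled = np.sum(w_rand * md) / np.sum(w_rand)
    se_pooled = np.sqrt(1.0 / np.sum(w_rand))

    print(f"Pooled MD = {md_pooled:.2f}, 95% CI "
          f"{md_pooled - 1.96*se_pooled:.2f} to {md_pooled + 1.96*se_pooled:.2f}")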
Abstract:
We present the results of exploratory experiments on sentiment analysis using lexical valence extracted from the brain with electroencephalography (EEG). We selected 78 English words (36 for training and 42 for testing), presented as stimuli to 3 native English speakers. EEG signals were recorded from the subjects while they performed a mental imaging task for each word stimulus. Wavelet decomposition was employed to extract EEG features from the time-frequency domain. The extracted features were used as inputs to a sparse multinomial logistic regression (SMLR) classifier for valence classification, after univariate ANOVA feature selection. After mapping EEG signals to sentiment valences, we exploited the lexical polarity extracted from brain data to predict the valence of 12 sentences taken from the SemEval-2007 shared task, and compared it against existing lexical resources.
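A minimal sketch of the kind of pipeline the abstract describes, under several assumptions: the epoch shapes, labels, and parameters are hypothetical, pywt supplies the wavelet decomposition, ANOVA F-scores perform the univariate feature selection, and an L1-penalized multinomial logistic regression from scikit-learn stands in for the SMLR classifier, which scikit-learn does not provide.

    # Sketch of a wavelet-feature -> ANOVA selection -> sparse logistic regression
    # pipeline. Data shapes and parameters are hypothetical, not the paper's setup.
    import numpy as np
    import pywt
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def wavelet_features(epoch, wavelet="db4", level=4):
        """Concatenate per-channel wavelet coefficient energies (time-frequency features)."""
        feats = []
        for channel in epoch:                       # epoch: (n_channels, n_samples)
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            feats.extend(np.log(np.sum(c**2) + 1e-12) for c in coeffs)
        return np.array(feats)

    rng = np.random.default_rng(0)
    X_epochs = rng.standard_normal((36, 32, 512))   # 36 training words, 32 channels, 512 samples
    y = rng.integers(0, 2, size=36)                 # hypothetical valence labels

    X = np.array([wavelet_features(e) for e in X_epochs])

    # L1-penalized logistic regression as a stand-in for SMLR, after ANOVA selection.
    clf = make_pipeline(SelectKBest(f_classif, k=20),
                        LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000))
    clf.fit(X, y)
    print(clf.predict(X[:5]))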
Abstract:
This paper is prompted by the widespread acceptance that rates of inter-county and inter-state migration have been falling in the USA, and it examines whether this decline in migration intensities has also occurred in the UK. It uses annual inter-area migration matrices available for England and Wales since the 1970s by broad age group. The main methodological challenge, arising from changes in the geography of the health areas for which the inter-area flows are given, is addressed by adopting the lowest common denominator of 80 areas. Care is also taken to allow for the effect of economic cycles in producing short-term fluctuations in migration rates and to isolate the effect of a sharp rise in rates for 16-24 year olds in the 1990s, which is presumed to be related to the expansion of higher education. The findings suggest that, unlike in the USA, there has not been a substantial decline in the intensity of internal migration between the first two decades of the study period and the second two. If there has been any major decline in the intensity of address changing in England and Wales, it can only be for the within-area moves that this time series does not cover. This latter possibility is examined in a companion paper using a very different data set (Champion and Shuttleworth, 2016).
Abstract:
BACKGROUND: The task of revising dietary folate recommendations for optimal health is complicated by a lack of data quantifying the biomarker response that reliably reflects a given folate intake.
OBJECTIVE: We conducted a dose-response meta-analysis in healthy adults to quantify the typical response of recognized folate biomarkers to a change in folic acid intake.
DESIGN: Electronic and bibliographic searches identified 19 randomized controlled trials of folic acid supplementation that measured folate biomarkers before and after the intervention in apparently healthy adults aged ≥18 y. For each biomarker response, the regression coefficient (β) for individual studies and the overall pooled β were calculated by using random-effects meta-analysis.
RESULTS: Folate biomarkers (serum/plasma and red blood cell folate) increased in response to folic acid in a dose-response manner only up to an intake of 400 μg/d. Calculation of the overall pooled β for studies in the range of 50 to 400 μg/d indicated that a doubling of folic acid intake resulted in an increase in serum/plasma folate by 63% (71% for microbiological assay; 61% for nonmicrobiological assay) and red blood cell folate by 31% (irrespective of whether microbiological or other assay was used). Studies that used the microbiological assay indicated lower heterogeneity compared with studies using nonmicrobiological assays for determining serum/plasma (I² = 13.5% compared with I² = 77.2%) and red blood cell (I² = 45.9% compared with I² = 70.2%) folate.
CONCLUSIONS: Studies administering >400 μg folic acid/d show no dose-response relation and thus will not yield meaningful results for consideration when generating dietary folate recommendations. The calculated folate biomarker response to a given folic acid intake may be more robust with the use of a microbiological assay rather than alternative methods for blood folate measurement.
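As a worked illustration of how the quoted percentage changes relate to a dose-response coefficient, assume (purely for illustration; the authors' regression specification may differ) a log-log model in which the log biomarker is linear in the log intake. A doubling of intake then multiplies the biomarker by 2^β, so the reported responses correspond to:

    \[
      \ln(\text{biomarker}) = \alpha + \beta \ln(\text{intake})
      \;\Longrightarrow\;
      \frac{\text{biomarker}(2x)}{\text{biomarker}(x)} = 2^{\beta}
    \]
    \[
      2^{\beta} = 1.63 \;\Rightarrow\; \beta = \log_2 1.63 \approx 0.70 \ \text{(serum/plasma folate)},
      \qquad
      2^{\beta} = 1.31 \;\Rightarrow\; \beta \approx 0.39 \ \text{(red blood cell folate)}.
    \]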
Abstract:
Senior thesis written for Oceanography 445
Abstract:
Thesis (Ph.D.)--University of Washington, 2015
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single processor platform. However, little work has been done to illuminate its characteristics upon multiprocessor platforms. In this paper, we identify the dynamics of laxity from the system's viewpoint and translate the dynamics into LLF multiprocessor schedulability analysis. More specifically, we first characterize laxity properties under LLF scheduling, focusing on laxity dynamics associated with a deadline miss. These laxity dynamics describe a lower bound, which leads to the deadline miss, on the number of tasks of certain laxity values at certain time instants. This lower bound is significant because it represents invariants for highly dynamic system parameters (laxity values). Since the laxity of a task depends on the amount of interference from higher-priority tasks, we can then derive a set of conditions to check whether a given task system can go into the laxity dynamics towards a deadline miss. This way, to the best of our knowledge, we propose the first LLF multiprocessor schedulability test based on its own laxity properties. We also develop an improved schedulability test that exploits slack values. We mathematically prove that the proposed LLF tests dominate the state-of-the-art EDZL tests. We also present simulation results to evaluate the schedulability performance of both the original and improved LLF tests in a quantitative manner.
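A minimal sketch (hypothetical job parameters, not the paper's analysis) of the laxity quantity and the LLF dispatching rule that the schedulability test reasons about: at time t a job's laxity is its absolute deadline minus t minus its remaining execution time, and on m processors LLF runs the m ready jobs with the smallest laxity.

    # Illustrative laxity computation and LLF dispatch on m processors.
    # Task parameters are hypothetical; this is not the paper's schedulability test.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        deadline: float      # absolute deadline
        remaining: float     # remaining execution time

    def laxity(job, t):
        """Laxity at time t: slack left before the job can no longer meet its deadline."""
        return job.deadline - t - job.remaining

    def llf_dispatch(jobs, t, m):
        """Pick the m ready jobs with the smallest laxity (laxity 0 means 'must run now')."""
        ready = sorted(jobs, key=lambda j: laxity(j, t))
        return ready[:m]

    jobs = [Job("T1", deadline=10, remaining=4),
            Job("T2", deadline=7, remaining=5),
            Job("T3", deadline=12, remaining=3)]
    for j in llf_dispatch(jobs, t=0, m=2):
        print(j.name, "laxity =", laxity(j, 0))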
Abstract:
In real-time systems, there are two distinct trends for scheduling task sets on unicore systems: non-preemptive and preemptive scheduling. Non-preemptive scheduling is not subject to any preemption delay, but its schedulability may be quite poor, whereas fully preemptive scheduling is subject to preemption delay but benefits from higher flexibility in the scheduling decisions. The time delay introduced by task preemptions is a major source of pessimism in the analysis of a task's Worst-Case Execution Time (WCET) in real-time systems. Preemptive scheduling policies that include non-preemptive regions are a hybrid solution between the non-preemptive and fully preemptive scheduling paradigms, which makes it possible to combine the benefits of both worlds. In this paper, we exploit the connection between the progression of a task through its operations and the knowledge of the preemption delays as a function of its progression. The pessimism in the preemption delay estimation is then reduced in comparison to state-of-the-art methods, thanks to the additional information available to the analysis.
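A small illustration, with hypothetical numbers, of why knowing the preemption delay as a function of task progression reduces pessimism: if preemptions can only occur at a few program points, each with its own cache-reload cost, the bound becomes the sum of the largest per-point costs rather than the single worst cost multiplied by the number of preemptions.

    # Hypothetical per-preemption-point delays (e.g., cache reload costs), in microseconds.
    point_costs = [120, 30, 45, 200, 60]
    max_preemptions = 3

    # Progression-oblivious bound: assume every preemption incurs the worst cost.
    pessimistic = max_preemptions * max(point_costs)
    # Progression-aware bound: at most one preemption per point, so take the costliest points.
    refined = sum(sorted(point_costs, reverse=True)[:max_preemptions])
    print(pessimistic, refined)   # 600 vs 380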
Abstract:
The current industry trend is towards using Commercial Off-The-Shelf (COTS) multicores for developing real-time embedded systems, as opposed to the use of custom-made hardware. In typical implementations of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which results in an increase in the response times of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time for a set of tasks. Thirdly, although the required parameters to obtain the request profile can be obtained by static analysis, we provide an alternative method to obtain them experimentally by using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that our approach outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
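The following sketch shows the general shape of a contention-aware response-time iteration of the kind the abstract describes; it is deliberately simplified, and all parameters (WCETs, periods, the per-request bus delay, and the request counts that would come from static analysis or PMC measurements) are hypothetical rather than the paper's model.

    # Simplified fixed-point response-time iteration with a bus-contention term.
    # All parameters are hypothetical; this only sketches the shape of such an analysis.
    import math

    def response_time(task, hp_tasks, bus_delay, requests_of):
        """task / hp_tasks: dicts with 'C' (WCET in isolation) and 'T' (period).
        requests_of(tsk) -> worst-case number of bus requests one job of tsk issues."""
        r = task["C"]
        while True:
            cpu = sum(math.ceil(r / hp["T"]) * hp["C"] for hp in hp_tasks)
            bus = bus_delay * (requests_of(task) +
                               sum(math.ceil(r / hp["T"]) * requests_of(hp) for hp in hp_tasks))
            r_new = task["C"] + cpu + bus
            if r_new > task["T"]:          # deemed unschedulable under this simplified model
                return math.inf
            if r_new == r:
                return r
            r = r_new

    # Hypothetical request profile: a fixed number of cache misses per job.
    reqs = {"t1": 40, "t2": 25, "t3": 60}
    def requests_of(tsk):
        return reqs[tsk["name"]]

    t1 = {"name": "t1", "C": 2.0, "T": 10.0}
    t2 = {"name": "t2", "C": 3.0, "T": 20.0}
    t3 = {"name": "t3", "C": 5.0, "T": 50.0}
    print(response_time(t3, [t1, t2], bus_delay=0.01, requests_of=requests_of))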
Abstract:
LLF (Least Laxity First) scheduling, which assigns a higher priority to a task with a smaller laxity, has been known as an optimal preemptive scheduling algorithm on a single processor platform. However, its characteristics upon multiprocessor platforms have been little studied until now. Orthogonally, it has remained open how to efficiently schedule general task systems, including constrained deadline task systems, upon multiprocessors. Recent studies have introduced the zero laxity (ZL) policy, which assigns a higher priority to a task with zero laxity, as a promising scheduling approach for such systems (e.g., EDZL). Towards understanding the importance of laxity in multiprocessor scheduling, this paper investigates the characteristics of the ZL policy and presents the first ZL schedulability test for any work-conserving scheduling algorithm that employs this policy. It then investigates the characteristics of LLF scheduling, which also employs the ZL policy, and derives the first LLF-specific schedulability test on multiprocessors. It is shown that the proposed LLF test dominates the ZL test as well as the state-of-the-art EDZL test.
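A small sketch of the zero-laxity rule that these tests target (hypothetical jobs; the schedulability tests themselves are in the paper): EDZL orders jobs by earliest deadline but promotes any job whose laxity has dropped to zero to the highest priority.

    # Illustrative EDZL dispatch: EDF order, with zero-laxity jobs promoted.
    # Job parameters are hypothetical; the schedulability tests are in the paper.
    def edzl_dispatch(jobs, t, m):
        """jobs: dicts with 'deadline' and 'remaining'; returns the m jobs to run at time t."""
        def laxity(j):
            return j["deadline"] - t - j["remaining"]
        zero_laxity = [j for j in jobs if laxity(j) <= 0]
        others = sorted((j for j in jobs if laxity(j) > 0), key=lambda j: j["deadline"])
        return (zero_laxity + others)[:m]

    jobs = [{"name": "T1", "deadline": 6, "remaining": 2},
            {"name": "T2", "deadline": 5, "remaining": 5},   # laxity 0 at t=0: must run now
            {"name": "T3", "deadline": 4, "remaining": 1}]
    print([j["name"] for j in edzl_dispatch(jobs, t=0, m=2)])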
Abstract:
Consider the problem of scheduling a set of sporadically arriving tasks on a uniform multiprocessor with the goal of meeting deadlines. A processor p has speed S_p. Tasks can be preempted but they cannot migrate between processors. We propose an algorithm which can schedule all task sets that any other possible algorithm can schedule, assuming that our algorithm is given processors that are three times faster.
Abstract:
Consider the problem of scheduling a set of sporadically arriving tasks on a uniform multiprocessor with the goal of meeting deadlines. A processor p has speed S_p. Tasks can be preempted but they cannot migrate between processors. On each processor, tasks are scheduled according to rate-monotonic priorities. We propose an algorithm that can schedule all task sets that any other possible algorithm can schedule, assuming that our algorithm is given processors that are √2/(√2−1) ≈ 3.41 times faster. No such guarantees were previously known for partitioned static-priority scheduling on uniform multiprocessors.
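For clarity, the quoted speed-up factor works out as follows (the bound itself is the paper's result):

    \[
      \frac{\sqrt{2}}{\sqrt{2}-1}
      = \frac{\sqrt{2}\,(\sqrt{2}+1)}{(\sqrt{2}-1)(\sqrt{2}+1)}
      = \frac{2+\sqrt{2}}{1}
      = 2+\sqrt{2} \approx 3.41.
    \]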
Abstract:
A new algorithm is proposed for scheduling preemptible arbitrary-deadline sporadic task systems upon multiprocessor platforms, with interprocessor migration permitted. This algorithm is based on a task-splitting approach: while most tasks are entirely assigned to specific processors, a few tasks (fewer than the number of processors) may be split across two processors. This algorithm can be used for two distinct purposes: for actually scheduling specific sporadic task systems, and for feasibility analysis. Simulation-based evaluation indicates that this algorithm offers a significant improvement in the ability to schedule arbitrary-deadline sporadic task systems as compared to the contemporary state-of-the-art. With regard to feasibility analysis, the new algorithm is proved to offer superior performance guarantees in comparison to prior feasibility tests.
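A hedged sketch of the task-splitting idea in general terms: fill processors with task utilization and split the task that overflows a processor across it and the next one, so at most one task is split per processor boundary. This is only an illustration of the approach; the paper's actual assignment rules and schedulability conditions differ.

    # Simplified task-splitting partitioner: fill processors by utilization and split
    # the task that overflows a processor across it and the next one.
    # This is only an illustration of the idea, not the paper's algorithm.
    def split_assign(utilizations, num_procs, capacity=1.0):
        """Return per-processor lists of (task_index, utilization_share)."""
        procs = [[] for _ in range(num_procs)]
        p, free = 0, capacity
        for i, u in enumerate(utilizations):
            while u > 0:
                if p >= num_procs:
                    raise ValueError("task set does not fit")
                share = min(u, free)
                procs[p].append((i, share))
                u -= share
                free -= share
                if free == 0:            # processor full: move on; task may have been split
                    p, free = p + 1, capacity
        return procs

    print(split_assign([0.6, 0.6, 0.5, 0.3], num_procs=2))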
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute, for different bus arbiters, the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparison with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
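As a hedged back-of-the-envelope instance of the arbiter-dependent step for a TDM arbiter (parameters hypothetical, and this coarse bound is much more pessimistic than an availability-based model): if each of m cores owns one slot of length L per TDM round and every request fits within a slot, a request issued at an arbitrary instant waits less than one full round before being served, so a task issuing k requests suffers a bus interference of at most

    \[
      \Delta \le k \cdot m \cdot L,
      \qquad\text{e.g. } k = 200,\ m = 4,\ L = 40 \text{ cycles} \;\Rightarrow\; \Delta \le 32{,}000 \text{ cycles}.
    \]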