5 results for Friedman, Benny

at Instituto Politécnico do Porto, Portugal


Relevance: 10.00%

Abstract:

Introduction: Metabolic parameters during normal gait, and their regulation, are important because oxidative metabolism is the main means by which the human body generates energy for everyday activities. Gait is not always performed independently and may require walking aids, such as the tripod cane, whose function is to widen the base of support and improve balance. Objective: To analyse the influence of using a tripod cane during gait on energy expenditure in healthy young and older adults. Methods: A cross-sectional observational study was conducted on a sample of 21 volunteers, aged 18 to 25 years or 60 years and over. Assessments were performed with the Cosmed K4b2 (Cosmed, Rome, Italy), through which the data were collected. The Friedman test was used, with p < 0.05. Results: The energy expenditure values obtained for the young participants were lower than those obtained for the older participants. Regarding energy metabolism, the main energy substrate used by the young participants was protein, whereas the older participants relied on lipids. Between sexes, men showed the higher energy expenditure. Conclusion: The use of a tripod cane during gait does not influence energy expenditure in healthy young and/or older adults.
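As a rough illustration of the statistical method named in the abstract, the sketch below computes the Friedman test statistic on invented repeated-measures data; the condition names and all numbers are hypothetical, not the study's.

```python
# Hypothetical sketch of a Friedman test: the same 7 subjects measured
# under three walking conditions (illustrative values, not the study's data).
import math

no_aid   = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.3]
tripod   = [3.2, 2.9, 3.5, 3.1, 3.0, 3.3, 3.4]
rollator = [3.6, 3.3, 3.9, 3.5, 3.4, 3.7, 3.8]

def friedman_statistic(*conditions):
    """Friedman chi-square statistic; ties within a subject get average ranks."""
    k = len(conditions)     # number of conditions
    n = len(conditions[0])  # number of subjects
    rank_sums = [0.0] * k
    for i in range(n):
        row = [c[i] for c in conditions]
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        pos = 0
        while pos < k:
            end = pos
            while end + 1 < k and row[order[end + 1]] == row[order[pos]]:
                end += 1
            avg = (pos + end) / 2 + 1  # average rank over the tied run
            for t in range(pos, end + 1):
                ranks[order[t]] = avg
            pos = end + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

Q = friedman_statistic(no_aid, tripod, rollator)
p = math.exp(-Q / 2)    # chi-square survival function; valid only for df = k-1 = 2
significant = p < 0.05  # the study's significance threshold
```

With real data one would normally call `scipy.stats.friedmanchisquare` instead; the closed-form p-value above is the asymptotic chi-square approximation for three conditions.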

Relevance: 10.00%

Abstract:

Hard real-time multiprocessor scheduling has, in recent years, seen the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified scheduling theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thereby also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
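As a loose illustration of how a schedulability test can guide task assignment, the toy first-fit loop below gates each placement with a plain utilization bound. This is a generic textbook sketch, not the article's S-EKG/NPS-F procedure or its exact, overhead-aware tests.

```python
# Toy sketch: first-fit task assignment where a (deliberately simple)
# schedulability test decides whether a task may join a processor.

def fits(proc_util, task_util, bound=1.0):
    """Toy schedulability test: total utilization must not exceed `bound`."""
    return proc_util + task_util <= bound

def first_fit(task_utils, num_processors, bound=1.0):
    """Assign each task to the first processor where the test passes.
    Returns a list of task-utilization lists per processor, or None if
    some task fits nowhere (the point where a semi-partitioned scheme
    would instead "split" the task across processors)."""
    bins = [[] for _ in range(num_processors)]
    loads = [0.0] * num_processors
    for u in task_utils:
        for p in range(num_processors):
            if fits(loads[p], u, bound):
                bins[p].append(u)
                loads[p] += u
                break
        else:
            return None
    return bins

assignment = first_fit([0.6, 0.5, 0.4, 0.3, 0.2], num_processors=2)
```

Replacing `fits` with a tighter, exact test is precisely what lets an assignment procedure pack tasks more densely, which is the connection the abstract draws between schedulability testing and task assignment.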

Relevance: 10.00%

Abstract:

The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To efficiently use multi-core platforms for real-time systems, it is hence essential to tightly bound the interference when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis to make it applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Then, based on the availability of the bus, the arbiter-independent phase determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to the contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
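As a minimal illustration of the arbiter-dependent idea, the sketch below bounds the worst-case wait of a single one-slot memory request under a TDM slot table. This is a textbook-style bound under simplifying assumptions (one request per slot, a fixed repeating table), not the paper's framework.

```python
# Illustrative sketch: worst-case latency of one request under a TDM bus
# arbiter, derived purely from the slot table.

def tdm_worst_case_latency(slot_table, core, slot_len):
    """Worst-case number of slots a request from `core` waits, times the
    slot length. Worst case: the request arrives just after a slot boundary
    and must wait until the next slot owned by `core` comes around."""
    n = len(slot_table)
    worst = 0
    for start in range(n):
        # distance from position `start` to the next slot owned by `core`
        d = next(i for i in range(1, n + 1)
                 if slot_table[(start + i) % n] == core)
        worst = max(worst, d)
    return worst * slot_len

# 4-slot TDM table serving three cores; core 0 owns two slots.
latency = tdm_worst_case_latency([0, 1, 0, 2], core=2, slot_len=10)
```

Note how the bound depends only on the arbiter's table, matching the abstract's point that the arbiter-dependent phase produces a bus-availability model that the arbiter-independent phase can then consume.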

Relevance: 10.00%

Abstract:

Consumer-electronics systems are becoming increasingly complex as the number of integrated applications is growing. Some of these applications have real-time requirements, while other non-real-time applications only require good average performance. For cost-efficient design, contemporary platforms feature an increasing number of cores that share resources, such as memories and interconnects. However, resource sharing causes contention that must be resolved by a resource arbiter, such as Time-Division Multiplexing. A key challenge is to configure this arbiter to satisfy the bandwidth and latency requirements of the real-time applications, while maximizing the slack capacity to improve performance of their non-real-time counterparts. As this configuration problem is NP-hard, a sophisticated automated configuration method is required to avoid negatively impacting design time. The main contributions of this article are: 1) An optimal approach that takes an existing integer linear programming (ILP) model addressing the problem and wraps it in a branch-and-price framework to improve scalability. 2) A faster heuristic algorithm that typically provides near-optimal solutions. 3) An experimental evaluation that quantitatively compares the branch-and-price approach to the previously formulated ILP model and the proposed heuristic. 4) A case study of an HD video and graphics processing system that demonstrates the practical applicability of the approach.
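To give a flavour of the configuration problem, the toy allocator below hands out TDM slots so that each real-time application receives at least its required bandwidth fraction, and reports the leftover slots as slack for non-real-time work. This is a naive sketch for intuition only; it is unrelated to the article's ILP model, branch-and-price approach, or heuristic, and it ignores latency requirements entirely.

```python
# Toy sketch: meet minimum bandwidth fractions with TDM slots and
# maximize leftover slack (latency requirements deliberately ignored).
import math

def allocate_slots(requirements, table_size):
    """requirements: {app: minimum bandwidth fraction}. Returns slot counts
    per app plus the leftover slack, or None if the demands do not fit."""
    slots = {app: math.ceil(frac * table_size)
             for app, frac in requirements.items()}
    used = sum(slots.values())
    if used > table_size:
        return None  # the real problem is NP-hard; this check is the easy part
    slots["slack"] = table_size - used
    return slots

alloc = allocate_slots({"video": 0.40, "audio": 0.10}, table_size=10)
```

The hard part the article addresses, placing those slots in the table so that per-request latency bounds also hold, is exactly what makes naive rounding like this insufficient in practice.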

Relevance: 10.00%

Abstract:

Based on the Response to Intervention (RtI) model, this study pursued three objectives: to build an instrument for determining the level of fundamental competences in Mathematics from the 1st to the 6th grade; to assess the instrument's predictive value regarding the need for intervention; and to examine the effect of an intervention planned on the basis of the instrument's diagnostic assessment. To address the first and second objectives, two convenience samples were considered: the first, consisting of 5 teachers, evaluated the test version of the instrument, and the second, consisting of 6 teachers, evaluated its final version (covering a total of 75 pupils). Using the k-means method, the results showed that the instrument is useful and easy to apply, allowing teachers to assess and identify the performance group to which each pupil belongs, relative to the mean results of their class. For the third objective, a sample of 7 pupils from a 4th-grade class was formed. The intervention ran over 11 weeks, with 2 sessions per week lasting between 10 and 35 minutes. To evaluate the effects of the intervention, a pre-test and a post-test were carried out, as well as 2 intermediate assessment sessions (checkpoints); the non-parametric Friedman test and the Wilcoxon test were used to assess the significance of the differences between time points and between the levels of support needed for a pupil to complete the task successfully, respectively. The results showed statistically significant differences, particularly between the two intermediate assessments considered.
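As an illustration of the grouping method mentioned in the abstract, the sketch below runs a tiny one-dimensional k-means on invented pupil scores. The seeding, the scores, and the number of groups are all assumptions for illustration, not the study's data or procedure.

```python
# Illustrative sketch: 1-D k-means (Lloyd's algorithm) grouping pupil
# scores into k performance groups. Scores are invented.

def kmeans_1d(values, k, iters=20):
    """Returns (centers, labels); requires k >= 2 and len(values) >= k.
    Deterministic seeding: evenly spaced order statistics of the scores."""
    s = sorted(values)
    centers = [s[(i * (len(s) - 1)) // (k - 1)] for i in range(k)]
    labels = []
    for _ in range(iters):
        # assign each score to its nearest center
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        # move each center to the mean of its members
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

scores = [35, 40, 42, 70, 72, 95, 98]          # hypothetical test scores
centers, labels = kmeans_1d(scores, k=3)       # three performance groups
```

Comparing a pupil's group center with the class mean is one straightforward way to read off whether that pupil sits above or below the class average, in the spirit of the instrument described above.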