881 results for Task Assignment
Abstract:
This dissertation is based on the monitoring and analysis of an Air Handling Unit built on solar-assisted desiccant evaporative cooling technology. First, a set of concepts related to the topic is presented in order to ease the understanding of the thermodynamic phenomena involved. After this initial, consolidating overview, the current state of the art is reviewed, both in terms of technology and of existing installations. The next stage of the dissertation describes the LNEG installation and its operating principles in detail; this stage also includes a brief description of the equipment, components and measurement and control elements present in the system. The methodology rests on two main points: first, monitoring the system with the "Agilent VEE Pro" software and analysing the collected data with the LNEG spreadsheet, developed specifically for this system; second, following the methodology imposed by "Task 38" of the International Energy Agency, within the Solar Heating and Cooling programme, which allows systems to be compared internationally. Indeed, "Task 38" aims to develop this technology in terms of design, standardisation and optimisation of installations, to provide easy access to information, and to promote the comparability of results. The overall performance of the system is quite positive, and its good behaviour is illustrated by the graphical analysis carried out for the heating and cooling modes. User satisfaction is good, and the system is able to maintain the thermal comfort of the conditioned spaces in accordance with the standards and regulations in force. The use of renewable energy sources today, such as solar energy, is more than an obligation; it is a duty.
Abstract:
Opposite enantiomers exhibit different NMR properties in the presence of an external common chiral element, and a chiral molecule exhibits different NMR properties in the presence of external enantiomeric chiral elements. Automatic prediction of such differences, and comparison with experimental values, leads to the assignment of the absolute configuration. Here two cases are reported, one using a dataset of 80 chiral secondary alcohols esterified with (R)-MTPA and the corresponding 1H NMR chemical shifts, and the other using 94 13C NMR chemical shifts of chiral secondary alcohols in two enantiomeric chiral solvents. For the first application, counterpropagation neural networks were trained to predict the sign of the difference between chemical shifts of opposite stereoisomers. The neural networks were trained to process the chirality code of the alcohol as the input, and to give the NMR property as the output. In the second application, similar neural networks were employed, but the property to predict was the difference of chemical shifts in the two enantiomeric solvents. For independent test sets of 20 objects, 100% correct predictions were obtained in both applications concerning the sign of the chemical shift differences. Additionally, with the second dataset, the difference of chemical shifts in the two enantiomeric solvents was quantitatively predicted, yielding r2 = 0.936 between the predicted and experimental values for the test set.
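As a rough illustration of the kind of model described above, the following is a minimal sketch of a counterpropagation-style network (a competitive Kohonen layer followed by a Grossberg output layer) mapping a fixed-length chirality-code vector to the sign of a chemical-shift difference. The array shapes, learning rates and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_cpg(X, y, n_units=30, epochs=200, lr_k=0.1, lr_g=0.1, seed=0):
    """Train a minimal counterpropagation network.

    X : (n_samples, n_features) chirality-code vectors (assumed fixed length)
    y : (n_samples,) target property, e.g. the sign of the chemical-shift difference
    """
    rng = np.random.default_rng(seed)
    kohonen = rng.normal(size=(n_units, X.shape[1]))   # competitive (Kohonen) layer weights
    grossberg = np.zeros(n_units)                      # output (Grossberg/outstar) weights

    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = np.argmin(np.linalg.norm(kohonen - xi, axis=1))  # winning unit
            kohonen[w] += lr_k * (xi - kohonen[w])                # move the winner towards the input
            grossberg[w] += lr_g * (yi - grossberg[w])            # learn the winner's output value
    return kohonen, grossberg

def predict_sign(kohonen, grossberg, X):
    """Return the predicted sign (+1/-1) of the shift difference for each chirality code in X."""
    winners = np.argmin(
        np.linalg.norm(kohonen[None, :, :] - X[:, None, :], axis=2), axis=1)
    return np.sign(grossberg[winners])
```

In practice a decaying learning rate and a neighbourhood function would be used for the Kohonen layer; the sketch keeps both fixed for brevity.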
Abstract:
Dust is a complex mixture of particles of organic and inorganic origin and of different gases absorbed in aerosol droplets. In a poultry unit, dust includes dried faecal matter and urine, skin flakes, ammonia, carbon dioxide, pollens, feed and litter particles, feathers, grain mites, fungal spores, bacteria, viruses and their constituents. Dust particles vary in size, and differentiation between particle size fractions is important in health studies in order to quantify penetration within the respiratory system. A descriptive study was developed to assess exposure to particles in a poultry unit during different operations, namely routine examination and floor turn-over. Direct-reading equipment was used (Lighthouse, model 3016 IAQ). Particle measurement was performed for 5 different sizes (PM0.5; PM1.0; PM2.5; PM5.0; PM10). The chemical composition of the poultry litter was also determined by neutron activation analysis. Normally, the litter of poultry pavilions is turned over weekly, and it was during this operation that the highest particle exposure was observed. In all the tasks considered, PM5.0 and PM10 were the size fractions with the highest concentration values; PM10 showed the highest values and PM0.5 the lowest. The chemical element with the highest concentration was Mg (5.7E6 mg.kg-1), followed by K (1.5E4 mg.kg-1), Ca (4.8E3 mg.kg-1), Na (1.7E3 mg.kg-1), Fe (2.1E2 mg.kg-1) and Zn (4.2E1 mg.kg-1). This high presence of particles in the respirable range (<5–7 μm) means that poultry dust particles can penetrate into the gas exchange region of the lung. Larger particles (PM10) presented concentrations ranging from 5.3E5 to 3.0E6 mg/m3.
Abstract:
OBJECTIVE: To examine the effects of the length and timing of nighttime naps on performance and physiological functions, an experimental study was carried out under simulated night shift schedules. METHODS: Six students were recruited for this study, which was composed of 5 experiments. Each experiment involved 3 consecutive days with one night shift (22:00-8:00) followed by daytime sleep and night sleep. The experiments had 5 conditions in which the length and timing of naps were manipulated: 0:00-1:00 (E60), 0:00-2:00 (E120), 4:00-5:00 (L60), 4:00-6:00 (L120), and no nap (No-nap). During the night shifts, participants underwent performance tests. A questionnaire on subjective fatigue and a critical flicker fusion frequency test were administered after the performance tests. Heart rate variability and rectal temperature were recorded continuously during the experiments. Polysomnography was also recorded during the naps. RESULTS: Sleep latency was shorter and sleep efficiency was higher in the naps in L60 and L120 than in E60 and E120. Slow wave sleep in the naps in E120 and L120 was longer than that in E60 and L60. The mean reaction time became longer after the nap in L60, but faster in E60 and E120. Earlier naps thus served to counteract the decrement in performance and physiological functions during night shifts. Performance was somewhat improved by taking a 2-hour nap later in the shift, but deteriorated after a one-hour nap. CONCLUSIONS: Naps in the latter half of the night shift were superior to earlier naps in terms of sleep quality. However, performance declined after a 1-hour nap taken later in the night shift due to sleep inertia. This study suggests that the appropriate timing of a short nap, such as a 60-min nap during the night shift, must be carefully considered.
Abstract:
This work describes the activities carried out within a task force set up to revitalise the Information Systems function of a large national company. In particular, it presents the set of management indicators defined in that context.
Abstract:
Modern real-time systems increasingly generate heavy and dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. In fact, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potentially parallel regions within the application. All these annotations are treated by the system merely as hints, and may be ignored and replaced by equivalent sequential constructs by the language itself. How the computation is actually subdivided and mapped onto the various processors is therefore the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into a productivity gain. However, unless the underlying scheduling mechanism is simple and fast, so as to keep the overall overhead low, the benefits of generating such fine-grained parallelism are merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements. These algorithms, however, do not take timing constraints into account, nor any other form of task prioritisation, which prevents them from being directly applied to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while preserving the fundamental principles that have yielded such good results. Very briefly, the single conventional work queue (deque) is replaced by a queue of deques, ordered by increasing task priority. The well-known G-EDF dynamic scheduling algorithm is then applied on top, the rules of both are combined, and our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a complicated task, owing to the complexity of its internal functions and the strong interdependencies between its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept.
A significant part of this document is therefore devoted to discussing the implementation of RTWS and to exposing problematic situations, many of which are not considered in theory, such as the mismatch between various synchronisation mechanisms. The experimental results show that RTWS, compared with other practical work on the dynamic scheduling of tasks with timing constraints, significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic load balancing of the system at low cost. Nevertheless, the evaluation revealed a flaw in the RTWS implementation: it gives up stealing work too easily, which leads to idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping scheduling costs low and on achieving good data locality, system schedulability was never neglected. Indeed, the proposed scheduling algorithm proved to be quite robust, missing no deadlines in the experiments carried out. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy, PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. Overall, we can conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
Abstract:
The aim of this study was to analyze the influence of position and pauses on muscle activity and fatigue during the task of ironing. Ten female participants performed the task of ironing in two different positions (standing and sitting) for 10 min each, with a 1-min pause at the end of each task. Muscle activity and fatigue of the upper trapezius, anterior deltoid, and pectoralis major were analyzed using surface electromyography. The results showed that the positions had no significant influence on muscle activity; nevertheless, they had a significant influence on muscle fatigue. In addition, the pauses were possibly beneficial in decreasing muscle fatigue, but the results were not conclusive.
Abstract:
The relation between the information/knowledge expression and the physical expression can be regarded as one of the elements of ambient intelligent computing [2],[3]. Moreover, because there are so many contexts around users and spaces while a user moves, all applications that use AmI are based on the relation between user devices and environments. In these situations, the AmI may output wrong results from unreliable contexts produced by attackers. Recently, approaches based on establishing a server have been used, so finding secure contexts and building contexts of a higher security level for safe communication have gained importance. Attackers try to put their devices on the expected path of users in order to obtain user information illegally, or they may try to broadcast their spam to users. This paper is an extension of [11], which studies the Security Grade Assignment Model (SGAM) to set up a Cyber-Society Organization (CSO).
Abstract:
It is generally challenging to determine the end-to-end delays of applications so as to maximize the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
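As a schematic reading of the approach above (an assumed formulation for illustration, not necessarily the paper's exact model), the intermediate-deadline assignment can be cast as a utility maximisation over per-task deadlines \(d_i\) whose end-to-end constraints are relaxed with Lagrange multipliers, so that each task can choose its deadline locally given the current multipliers:

\[
\max_{d}\; \sum_i U_i(d_i)
\quad\text{s.t.}\quad \sum_{i \in \mathrm{path}(j)} d_i \le D_j \;\;\forall j,
\qquad
L(d,\lambda) = \sum_i U_i(d_i) - \sum_j \lambda_j \Big(\sum_{i \in \mathrm{path}(j)} d_i - D_j\Big),
\]

where each multiplier is updated by a subgradient step \(\lambda_j \leftarrow \big[\lambda_j + \alpha\big(\sum_{i \in \mathrm{path}(j)} d_i - D_j\big)\big]_+\) until the dual iterates converge.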
Abstract:
Fieldbus communication networks aim to interconnect sensors, actuators and controllers within distributed computer-controlled systems. Therefore, they constitute the foundation upon which real-time applications are to be implemented. A specific class of fieldbus communication networks is based on a simplified version of token-passing protocols, where each station may transfer, at most, a single message per token visit (SMTV). In this paper, we establish an analogy between non-preemptive task scheduling on single processors and the scheduling of messages on SMTV token-passing networks. Moreover, we show that concepts such as blocking and interference in non-preemptive task scheduling have their counterparts in the scheduling of messages on SMTV token-passing networks. Based on this task/message scheduling analogy, we provide pre-run-time schedulability conditions for supporting real-time messages with SMTV token-passing networks. We provide both utilisation-based and response-time tests to perform the pre-run-time schedulability analysis of real-time messages on SMTV token-passing networks, considering RM/DM (rate monotonic/deadline monotonic) and EDF (earliest deadline first) priority assignment schemes.
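For intuition, the task/message analogy maps onto the standard non-preemptive fixed-priority tests; in a generic form (the paper's exact terms, such as token-rotation overheads, will differ), the pre-run-time checks look like

\[
\sum_j \frac{C_j}{T_j} \le U_{\mathrm{LUB}},
\qquad
w_m^{(n+1)} = B_m + \sum_{j \in \mathrm{hp}(m)} \left\lceil \frac{w_m^{(n)}}{T_j} \right\rceil C_j,
\qquad
R_m = w_m + C_m \le D_m,
\]

where \(C_j\) and \(T_j\) are the transmission time and period of message \(j\), \(B_m\) is the blocking due to a lower-priority message already holding the token, and \(U_{\mathrm{LUB}}\) is the utilisation least upper bound of the chosen priority-assignment scheme.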
Abstract:
Consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct kinds of processors. We propose a polynomial-time approximation scheme (PTAS) for this problem. It offers the following guarantee: for a given task set and a given platform, if there exists a feasible task-to-processor assignment, then, given an input parameter ϵ, our PTAS succeeds, in polynomial time, in finding such a feasible task-to-processor assignment on a platform in which each processor is 1+3ϵ times faster. In the simulations, our PTAS outperforms the state-of-the-art PTAS [1] and, for the vast majority of task sets, requires a significantly smaller processor speedup than (its upper bound of) 1+3ϵ to successfully determine a feasible task-to-processor assignment.
Abstract:
Dynamic parallel scheduling using work-stealing has gained popularity in academia and industry for its good performance, ease of implementation and theoretical bounds on space and time. Cores treat their own double-ended queues (deques) as a stack, pushing and popping threads from the bottom, but treat the deque of another randomly selected busy core as a queue, stealing threads only from the top, whenever they are idle. However, this standard approach cannot be directly applied to real-time systems, where the importance of parallelising tasks is increasing due to the limitations of multiprocessor scheduling theory regarding parallelism. Using one deque per core is an obvious source of priority inversion, since high-priority tasks may end up enqueued after lower-priority tasks, possibly leading to deadline misses because those lower-priority tasks are the ones selected when a stealing operation occurs. Our proposal is to replace the single non-priority deque of work-stealing with ordered per-processor priority deques of ready threads. The scheduling algorithm starts with a single deque per core, but unlike traditional work-stealing, the total number of deques in the system may now exceed the number of processors. Instead of stealing randomly, cores steal from the highest-priority deque.
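A minimal sketch of the data-structure change described above, assuming a per-core map from priority level to deque (lower value = higher priority); the names and the tie-breaking policy are illustrative, not the authors' code:

```python
from collections import deque

class Core:
    """Each core keeps its ready threads in per-priority deques."""
    def __init__(self):
        self.deques = {}  # priority -> deque of ready threads

    def push(self, prio, thread):
        # Local pushes go to the bottom of the deque for that priority (LIFO locally).
        self.deques.setdefault(prio, deque()).append(thread)

    def pop(self):
        # A core runs its own highest-priority ready work first.
        for prio in sorted(self.deques):
            if self.deques[prio]:
                return self.deques[prio].pop()
        return None

    def steal(self):
        # Thieves take from the top of the victim's highest-priority deque (FIFO remotely).
        for prio in sorted(self.deques):
            if self.deques[prio]:
                return self.deques[prio].popleft()
        return None

def steal_highest_priority(cores):
    """An idle core steals from the core holding the globally highest-priority ready thread."""
    best = None
    for core in cores:
        for prio in sorted(core.deques):
            if core.deques[prio]:
                if best is None or prio < best[0]:
                    best = (prio, core)
                break
    return best[1].steal() if best else None
```

The only difference from classic work-stealing is the victim-selection rule: instead of choosing a random busy core, the thief targets whichever deque currently exposes the highest priority.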
Abstract:
A large part of the power dissipated in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms, inter alia to enhance battery life. While I/O device scheduling for real-time systems has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Those approaches were designed assuming a very large device-transition overhead. Technology enhancements have allowed hardware vendors to reduce device transition overheads and energy consumption. We propose an intra-task device scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% when compared to the techniques proposed in the state of the art.
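The intuition behind shutting devices down inside a task can be sketched as a break-even test: a device is only worth powering down if the predicted idle interval exceeds the time needed to transition it off and on again and the transition energy is recovered. The function and the numeric values below are illustrative assumptions, not the paper's algorithm:

```python
def worth_shutting_down(idle_interval, t_off, t_on, p_active, p_sleep, e_transition):
    """Return True if powering the device down over idle_interval saves energy
    and the device can be back up in time (times in seconds, power in watts,
    energy in joules; all values hypothetical)."""
    if idle_interval <= t_off + t_on:      # not enough time to transition safely
        return False
    e_keep_on = p_active * idle_interval   # energy if the device stays powered
    e_shutdown = e_transition + p_sleep * (idle_interval - t_off - t_on)
    return e_shutdown < e_keep_on

# Example: a 50 ms idle gap with 5 ms power-off and 10 ms power-on transitions.
print(worth_shutting_down(0.050, 0.005, 0.010, p_active=1.2, p_sleep=0.05, e_transition=0.02))
```

An intra-task scheme applies this test at points inside a task where the device is known not to be needed, rather than only between task instances.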
Abstract:
In this paper we discuss challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required to implement such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.
Abstract:
Consider the problem of scheduling a set of sporadic tasks on a multiprocessor system to meet deadlines using a task-splitting scheduling algorithm. Task-splitting (also called semi-partitioning) scheduling algorithms assign most tasks to just one processor but a few tasks are assigned to two or more processors, and they are dispatched in a way that ensures that a task never executes on two or more processors simultaneously. A particular type of task-splitting algorithms, called slot-based task-splitting dispatching, is of particular interest because of its ability to schedule tasks with high processor utilizations. Unfortunately, no slot-based task-splitting algorithm has been implemented in a real operating system so far. In this paper we discuss and propose some modifications to the slot-based task-splitting algorithm driven by implementation concerns, and we report the first implementation of this family of algorithms in a real operating system running Linux kernel version 2.6.34. We have also conducted an extensive range of experiments on a 4-core multicore desktop PC running task-sets with utilizations of up to 88%. The results show that the behavior of our implementation is in line with the theoretical framework behind it.
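A very small sketch of the dispatching idea behind slot-based task-splitting: time is divided into synchronized slots, and a split task receives a reserve at the end of the slot on processor p and another at the beginning of the same slot on processor p+1, so its two pieces occupy disjoint portions of the slot and never run simultaneously. The slot length, the demand and the 50/50 split below are illustrative assumptions, not parameters from the paper:

```python
def split_task_processor(slot_length, demand, t):
    """Return which processor may run the split task at time t, or None.

    The per-slot demand is split into a tail reserve of size x on processor p
    and a head reserve of size (demand - x) on processor p+1; the two reserves
    lie at opposite ends of the slot, so they never overlap in time.
    """
    x = demand / 2.0                 # illustrative split of the per-slot demand
    offset = t % slot_length         # position inside the current slot
    if offset >= slot_length - x:    # tail reserve on processor p
        return "processor p"
    if offset < demand - x:          # head reserve on processor p+1
        return "processor p+1"
    return None                      # outside both reserves: other tasks run here

# Example: 1 ms slots, a split task needing 0.4 ms per slot.
for t in (0.0001, 0.0005, 0.00085):
    print(t, split_task_processor(0.001, 0.0004, t))
```

Implementation concerns of the kind the paper discusses arise precisely at the slot boundaries, where one processor must stop the split task exactly when the other may resume it.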