951 results for Semi-partitioned
Abstract:
13th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC 2015), 21-23 October 2015, Porto, Portugal. Session W1-A: Multiprocessing and Multicore Architectures.
Abstract:
Paper/Poster presented at the Work-in-Progress Session, 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015), 24-26 March 2015, Porto, Portugal.
Abstract:
Poster presented at the Work-in-Progress Session, 28th GI/ITG International Conference on Architecture of Computing Systems (ARCS 2015), 24-26 March 2015, Porto, Portugal.
Abstract:
Presented at the IEEE 21st International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2015), 19-21 August 2015.
Abstract:
Consider the problem of scheduling a set of sporadic tasks on a multiprocessor system to meet deadlines using a task-splitting scheduling algorithm. Task-splitting (also called semi-partitioning) scheduling algorithms assign most tasks to just one processor, but a few tasks are assigned to two or more processors and are dispatched in a way that ensures a task never executes on two or more processors simultaneously. One type of task-splitting algorithm, slot-based task-splitting dispatching, is of particular interest because of its ability to schedule tasks at high processor utilizations. Unfortunately, no slot-based task-splitting algorithm has been implemented in a real operating system so far. In this paper we discuss and propose modifications to slot-based task-splitting algorithms driven by implementation concerns, and we report the first implementation of this family of algorithms in a real operating system, running Linux kernel version 2.6.34. We have also conducted an extensive range of experiments on a 4-core multicore desktop PC running task sets with utilizations of up to 88%. The results show that the behavior of our implementation is in line with the theoretical framework behind it.
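To make the splitting idea concrete, here is a minimal Python sketch (not the paper's kernel implementation) of how one task split across two processors can be confined to non-overlapping reserves within each time slot, so it never runs on both processors at once. The slot length, reserve sizes and processor indices are illustrative assumptions.

# Hypothetical sketch of slot-based dispatching for one task split across
# two processors. Time is divided into slots of length S; the split task
# owns a reserve at the end of each slot on processor p and a reserve at
# the start of each slot on processor p + 1. Because the two reserves never
# overlap inside a slot, the task can never run on both processors at once.

S = 1.0          # slot length (assumed time unit)
x = 0.3          # reserve at the end of the slot on processor p
y = 0.2          # reserve at the start of the slot on processor p + 1
assert x + y <= S, "reserves of the same split task must not overlap in time"

def eligible_processor(t, p):
    """Return the processor on which the split task may run at time t, or None."""
    offset = t % S
    if offset >= S - x:      # end-of-slot reserve
        return p
    if offset < y:           # start-of-slot reserve
        return p + 1
    return None              # outside its reserves the task must wait

# Example: with S = 1.0 the task may run on p + 1 during [0.0, 0.2) and on p
# during [0.7, 1.0) of every slot.
for t in (0.1, 0.5, 0.8):
    print(t, eligible_processor(t, p=2))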
Abstract:
Hard real-time multiprocessor scheduling has, in recent years, seen the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each one was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analysis. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identify and model in the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
Abstract:
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F that allows us (i) to identify, at design time, which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
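As a rough illustration of the offline step described above, the following Python sketch bin-packs task utilisations into servers with a simple first-fit rule. The real NPS-F packing inflates server capacities and, in the amended scheme proposed here, also fixes which tasks migrate and when; both refinements are omitted, so this is only a simplified sketch under those assumptions.

# Minimal, illustrative first-fit packing of task utilisations into servers.
def pack_into_servers(utilisations, capacity=1.0):
    """Assign each task (by index) to the first server with room for it."""
    servers = []                       # each server: {"load": float, "tasks": [indices]}
    for i, u in enumerate(utilisations):
        for srv in servers:
            if srv["load"] + u <= capacity:
                srv["load"] += u
                srv["tasks"].append(i)
                break
        else:                          # no existing server fits: open a new one
            servers.append({"load": u, "tasks": [i]})
    return servers

# At run time each server would be mapped to at most one processor at any
# instant and would pick among its own tasks with EDF (earliest deadline first).
print(pack_into_servers([0.6, 0.3, 0.5, 0.2, 0.4]))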
Abstract:
Nowadays, real-time systems are growing in both importance and complexity. With the move from uniprocessor to multiprocessor environments, the work done for the former is not fully applicable to the latter, because the level of complexity differs, mainly due to the presence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity stands as a barrier to scientific progress in this area, which for now remains largely uncharted, and this is witnessed above all in task scheduling. The move to this new environment, whether for real-time systems or not, promises to create the opportunity to do work that would never be possible in the former, thereby providing new performance guarantees, lower monetary cost and lower energy consumption. This last factor appeared early on as perhaps the greatest barrier to the development of new uniprocessors, since, as new processors reached the market offering ever higher performance, they also ran into a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to keep increasing, and new techniques to exploit their inherent advantages obviously have to be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned. The global approach assumes a single global queue that is accessible by all available processors. This makes task migration possible, i.e. a task's execution can be stopped and resumed on a different processor. At any given instant, the m highest-priority tasks in the task set are selected for execution. This category promises high utilization bounds, at a high cost in task preemptions/migrations. In contrast, partitioned algorithms place the tasks into partitions, and each partition is assigned to one of the available processors, i.e. each processor receives one partition. For this reason task migration is not possible, which means the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous two: some tasks are partitioned (split) to be executed by a group of processors, while the others are assigned to just one processor. The result is a solution that can distribute the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a gap between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability, so that, where they fall short, the necessary changes can be made, both at the theoretical and at the practical level.
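As a small illustration of the global dispatching rule mentioned above (at any instant, the m highest-priority ready jobs execute), here is a minimal Python sketch; the priority values and job names are invented for the example, and a smaller number stands for a higher priority (e.g. an earlier absolute deadline under global EDF).

import heapq

def pick_running_jobs(ready_jobs, m):
    """ready_jobs: list of (priority, job_id); return the m jobs that run now."""
    return heapq.nsmallest(m, ready_jobs)

# Example with m = 4 processors and 6 ready jobs.
ready = [(12, "J1"), (5, "J2"), (9, "J3"), (20, "J4"), (7, "J5"), (3, "J6")]
print(pick_running_jobs(ready, m=4))   # J6, J2, J5, J3 run; J1 and J4 wait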
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency has become a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which the response time is as critical as the logical correctness of the computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint, with thermal awareness, into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption for a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
Abstract:
In the Nilo Coelho irrigation scheme, Brazil, the natural vegetation has been replaced by irrigated agriculture, making it important to quantify the effects on the energy exchanges between the mixed vegetated surfaces and the lower atmosphere. Landsat satellite images and agro-meteorological stations from 1992 to 2011 were used together to model these exchanges. Surface albedo (α0), NDVI and surface temperature (T0) were the basic remote sensing retrieval parameters needed to calculate the latent heat flux (λE) and the surface resistance to evapotranspiration (rs) on a large scale. The daily net radiation (Rn) was obtained from α0, air temperature (Ta) and short-wave transmissivity (τsw) through the Slob equation, allowing the quantification of the daily sensible heat flux (H) as the residual of the energy balance equation. With a threshold value for rs, it was possible to separate the energy fluxes of crops from those of natural vegetation. The fractions of Rn partitioned as H and λE averaged 39% and 67%, respectively. An increase was observed in the energy used for the evapotranspiration process inside irrigated areas, from 51% in 1992 to 80% in 2011, with the ratio λE/Rn increasing by 3% per year. The tools and models applied in the current research can support the monitoring of coupled climate and land-use change effects in irrigation perimeters, being valuable for the sustainability of irrigated agriculture in the future and for avoiding conflicts among different water users. © 2012 SPIE.
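The residual step described above can be illustrated with a schematic Python calculation. The numbers below are placeholders, not values from the study, and the daily soil heat flux is assumed negligible, a common simplification at the daily scale.

# Schematic residual of the daily energy balance: Rn = H + lambdaE (+ G ~ 0 per day).
Rn_daily = 14.0        # MJ m-2 day-1, daily net radiation (placeholder)
lambdaE_daily = 9.5    # MJ m-2 day-1, latent heat flux (placeholder)
G_daily = 0.0          # soil heat flux, assumed ~0 over a full day

H_daily = Rn_daily - lambdaE_daily - G_daily
print(f"H = {H_daily:.1f} MJ m-2 day-1, lambdaE/Rn = {lambdaE_daily / Rn_daily:.0%}")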
Abstract:
Recent experimental evidence has suggested a neuromodulatory deficit in Alzheimer's disease (AD). In this paper, we present a new electroencephalogram (EEG)-based metric to quantitatively characterize neuromodulatory activity. More specifically, the short-term EEG amplitude modulation rate-of-change (i.e., modulation frequency) is computed for five EEG subband signals. To test the performance of the proposed metric, a classification task was performed on a database of 32 participants partitioned into three groups of approximately equal size: healthy controls, patients diagnosed with mild AD, and those with moderate-to-severe AD. To gauge the benefits of the proposed metric, performance results were compared with those obtained using EEG spectral peak parameters, which were recently shown to outperform other conventional EEG measures. Using a simple feature selection algorithm based on area-under-the-curve maximization and a support vector machine classifier, the proposed parameters yielded accuracy gains, relative to spectral peak parameters, of 21.3% when discriminating between the three groups and of 50% when the mild and moderate-to-severe groups were merged into one. The preliminary findings reported herein provide promising insights that automated tools may be developed to assist physicians in very early diagnosis of AD, as well as provide researchers with a tool to automatically characterize cross-frequency interactions and their changes with disease.
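One common way to obtain an amplitude-modulation representation of an EEG subband (band-pass filtering, Hilbert envelope, then the spectrum of that envelope) is sketched below in Python. The sampling rate, subband edges and the summary statistic printed at the end are assumptions for illustration, not necessarily the exact metric used in the paper.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.random.randn(t.size)                # stand-in for a real EEG trace

def modulation_spectrum(x, fs, band=(8.0, 13.0)):
    """Band-pass x to `band` (Hz), extract its envelope, return (freqs, power)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    subband = filtfilt(b, a, x)
    envelope = np.abs(hilbert(subband))       # amplitude modulation signal
    envelope -= envelope.mean()
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    return freqs, power

freqs, power = modulation_spectrum(eeg, fs)   # alpha-band example
print("dominant modulation frequency:", freqs[np.argmax(power[1:]) + 1], "Hz")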
Abstract:
The aim of this study was to estimate barite mortar attenuation curves using X-ray spectra weighted by a workload distribution. A semi-empirical model was used to evaluate the transmission properties of this material. Since ambient dose equivalent, H*(10), is the radiation quantity adopted by the IAEA for dose assessment, the variation of H*(10) as a function of barite mortar thickness was calculated using primary experimental spectra. A CdTe detector was used for the measurement of these spectra. The resulting spectra were used to estimate the optimal thickness of the protective barrier needed to shield an area in an X-ray imaging facility.
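As a schematic illustration of how a spectrum-weighted transmission curve can be built, the Python sketch below attenuates each spectral bin by exp(-mu(E)·x) and takes a kerma-weighted ratio to the unattenuated spectrum. The energies, fluences and attenuation coefficients are placeholders rather than the barite-mortar data of the study, and the energy-dependent H*(10) conversion coefficients are left out.

import numpy as np

E = np.array([40.0, 60.0, 80.0, 100.0, 120.0])     # keV bins (placeholder)
N = np.array([1.0, 3.0, 4.0, 2.5, 1.0])            # relative fluence (placeholder)
mu = np.array([3.0, 1.2, 0.6, 0.35, 0.25])         # linear attenuation, cm^-1 (placeholder)
kerma_per_fluence = E                              # crude proxy: kerma grows with E

def transmission(x_cm):
    """Fraction of air kerma transmitted through x_cm of shielding material."""
    attenuated = N * np.exp(-mu * x_cm) * kerma_per_fluence
    unattenuated = N * kerma_per_fluence
    return attenuated.sum() / unattenuated.sum()

for x in (0.0, 0.5, 1.0, 2.0):
    print(f"{x:.1f} cm -> transmission {transmission(x):.3f}")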
Abstract:
Primary X-ray spectra were measured in the range of 80-150kV in order to validate a computer program based on a semiempirical model. The ratio between the characteristic and total air Kerma was considered to compare computed results and experimental data. Results show that the experimental spectra have higher first HVL and mean energy than the calculated ones. The ratios between the characteristic and total air Kerma for calculated spectra are in good agreement with experimental results for all filtrations used.
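The two spectrum descriptors compared above, mean energy and first half-value layer (HVL, the absorber thickness that halves the air kerma), can be computed from a spectrum as in this minimal Python sketch; all numerical values are placeholders, not measured data.

import numpy as np

E = np.array([50.0, 70.0, 90.0, 110.0, 130.0, 150.0])   # keV (placeholder)
N = np.array([0.5, 2.0, 3.0, 2.0, 1.0, 0.3])            # relative fluence (placeholder)
mu_al = np.array([1.0, 0.55, 0.40, 0.32, 0.28, 0.26])   # Al attenuation, cm^-1 (placeholder)

mean_energy = (E * N).sum() / N.sum()

def kerma_fraction(x_cm):
    """Air-kerma fraction left after x_cm of aluminium (kerma ~ E * fluence here)."""
    return (E * N * np.exp(-mu_al * x_cm)).sum() / (E * N).sum()

# First HVL by bisection: thinnest thickness with kerma_fraction <= 0.5.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if kerma_fraction(mid) > 0.5 else (lo, mid)

print(f"mean energy = {mean_energy:.1f} keV, first HVL = {hi:.2f} cm Al")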
Abstract:
Tomato (Solanum lycopersicum) shows three growth habits: determinate, indeterminate and semi-determinate. These are controlled mainly by allelic variation in the SELF-PRUNING (SP) gene family, which also includes the florigen gene SINGLE FLOWER TRUSS (SFT). Determinate cultivars have synchronized flower and fruit production, which allows mechanical harvesting in the tomato processing industry, whereas indeterminate ones have more vegetative growth with continuous flower and fruit formation and are thus preferred for fresh-market tomato production. The semi-determinate growth habit is poorly understood, although there are indications that it combines advantages of determinate and indeterminate growth. Here, we used near-isogenic lines (NILs) in the cultivar Micro-Tom (MT) with different growth habits to characterize semi-determinate growth and to determine its impact on developmental and productivity traits. We show that semi-determinate genotypes are equivalent to determinate ones with extended vegetative growth, which in turn impacts shoot height, number of leaves and either stem diameter or internode length. Semi-determinate plants also tend to increase the highly relevant agronomic parameter Brix×ripe yield (BRY). Water-use efficiency (WUE), evaluated either directly as dry mass produced per amount of water transpired or indirectly through C isotope discrimination, was higher in semi-determinate genotypes. We also provide evidence that the increases in BRY in semi-determinate genotypes are a consequence of an improved balance between vegetative and reproductive growth, a mechanism analogous to the conversion of the overly vegetative tall cereal varieties into the well-balanced semi-dwarf ones used in the Green Revolution.
Abstract:
Thermosensitive hydrogels were synthesized using alginate-Ca2+ in association with a thermosensitive polymer, PNIPAAm. The mechanical properties of the hydrogels were determined by measuring the maximum deformation stress. When the temperature was increased from 25 to 40 ºC, above the LCST, the PNIPAAm chains collapsed, dragging the alginate network and reducing the pore size. The decrease in the pore size of the hydrogel was accompanied by an increase in the mechanical resistance of the material.