179 results for embedded Linux


Relevance:

10.00%

Publisher:

Abstract:

This study addresses the optimization of the pultrusion manufacturing process from the energy-consumption point of view. The die heating system of external platen heaters commonly used in pultrusion machines is one of the components that contributes the most to the high energy consumption of the pultrusion process. Hence, instead of the conventional multi-planar heaters, a new internal die heating system that leads to lower heat losses is proposed. The effect of the number and relative position of the embedded heaters along the die is also analysed in order to establish the optimum arrangement that minimizes both the heating power and the energy consumption. The simulation and optimization processes were supported by Finite Element Analysis (FEA) and calibrated against the temperature profile obtained through thermographic imaging. The main outcomes of this study show that the use of embedded cylindrical resistances instead of external planar heaters drastically reduces both the power consumption and the warm-up period of the die heating system. For the analysed die tool and process, savings in energy consumption of up to 60% and warm-up periods of less than half an hour were attained with the new internal heating system. These improvements reduce the power requirements of the pultrusion process, thereby lowering industrial costs and contributing to a more sustainable pultrusion manufacturing industry.

Relevance:

10.00%

Publisher:

Abstract:

TICEduca. III Congresso Internacional TIC e Educação. 14–16 November, Lisbon

Relevance:

10.00%

Publisher:

Abstract:

This work aims to design a synthetic construct that mimics the natural bone extracellular matrix through an innovative approach based on simultaneous type I collagen electrospinning and nanophased hydroxyapatite (nanoHA) electrospraying, using non-denaturing conditions and non-toxic reagents. The morphological results, assessed by scanning electron microscopy and atomic force microscopy (AFM), showed a mesh of collagen nanofibers embedded with HA crystals, with fiber diameters in the nanometer range (30 nm), significantly lower than the values of over 200 nm reported in the literature. The mechanical properties, assessed by AFM nanoindentation, exhibited elastic moduli between 0.3 and 2 GPa. Fourier transform infrared spectroscopy confirmed the integrity of the collagen as well as the presence of nanoHA in the composite. The network architecture allows cells access to both collagen nanofibers and HA crystals, as in the natural bone environment. The inclusion of nanoHA agglomerates by electrospraying in type I collagen nanofibers improved the adhesion and metabolic activity of MC3T3-E1 osteoblasts. This new nanostructured collagen–nanoHA composite holds great potential for healing bone defects, as a functional membrane for guided bone tissue regeneration, and for treating bone diseases.

Relevance:

10.00%

Publisher:

Abstract:

The container loading problem (CLP) is a combinatorial optimization problem concerned with the spatial arrangement of cargo inside containers so as to maximize the use of space. Algorithms for this problem are of limited practical applicability if real-world constraints are not considered, and stability is deemed one of the most important of these. This paper addresses static stability, as opposed to dynamic stability, i.e. the stability of the cargo during container loading. Two algorithms are proposed. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, taking into account static stability and loading operation efficiency constraints.
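
As an illustration of how a stability evaluation function can be embedded in a CLP constructive heuristic, the sketch below checks a much simpler criterion than the paper's mechanical-equilibrium conditions: full base support, i.e. each box must rest entirely on the container floor or on the top faces of boxes below it. The `Box` representation is an assumption made for this example.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float   # position of the rear-bottom-left corner
    w: float; d: float; h: float   # width, depth, height

def fully_supported(box: Box, placed: list, eps: float = 1e-6) -> bool:
    """Full-base-support check: a simpler stand-in for an equilibrium-based
    static stability evaluation function used inside CLP heuristics."""
    if box.z <= eps:                      # resting on the container floor
        return True
    supported_area = 0.0
    for other in placed:
        if abs(other.z + other.h - box.z) > eps:
            continue                      # top face not at the box's base level
        # overlap of the two base rectangles in the horizontal plane
        ox = max(0.0, min(box.x + box.w, other.x + other.w) - max(box.x, other.x))
        oy = max(0.0, min(box.y + box.d, other.y + other.d) - max(box.y, other.y))
        supported_area += ox * oy
    return supported_area >= box.w * box.d - eps

# usage: a constructive heuristic would call fully_supported() before
# accepting a candidate placement
placed = [Box(0, 0, 0, 2, 2, 1)]
print(fully_supported(Box(0, 0, 1, 1, 1, 1), placed))      # True
print(fully_supported(Box(1.5, 1.5, 1, 1, 1, 1), placed))  # False
```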

Relevance:

10.00%

Publisher:

Abstract:

Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising a constant number (denoted by t) of distinct types of processors; such a platform is referred to as a t-type platform. We present two algorithms, LPGIM and LPGNM, each providing the following guarantee. For a given t-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet their deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then: (i) LPGIM succeeds in finding such an assignment, with the same restriction on task migration (intra-migrative), but on a platform in which only one processor of each type is 1 + α(t-1)/t times faster, and (ii) LPGNM succeeds in finding a task assignment in which tasks are not allowed to migrate between processors (non-migrative), but on a platform in which every processor is 1 + α times faster. The parameter α is a property of the task set; it is the maximum of all task utilizations that are no greater than one. To the best of our knowledge, for t-type heterogeneous multiprocessors: (i) for the problem of intra-migrative task assignment, no previous algorithm exists with a proven bound, and hence our algorithm, LPGIM, is the first of its kind, and (ii) for the problem of non-migrative task assignment, our algorithm, LPGNM, has superior performance compared to the state of the art.
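
To make the stated bounds concrete, the sketch below computes α and the resulting speed-up factors for a given task set; it only illustrates the guarantee, not the LPGIM/LPGNM assignment algorithms themselves.

```python
def speedup_factors(utilizations, t):
    """Illustrative computation of the speed-up factors quoted above.

    alpha is the maximum task utilization among tasks with utilization <= 1.
    Returns (factor for intra-migrative LPGIM, factor for non-migrative LPGNM).
    """
    alpha = max(u for u in utilizations if u <= 1.0)
    return 1 + alpha * (t - 1) / t, 1 + alpha

# example: a task set with utilizations up to 0.8 on a platform with t = 2 types
print(speedup_factors([0.2, 0.5, 0.8], t=2))   # (1.4, 1.8)
```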

Relevance:

10.00%

Publisher:

Abstract:

Energy consumption is one of the major issues for modern embedded systems. Early power-saving approaches mainly focused on dynamic power dissipation, while neglecting the static (leakage) energy consumption. However, with technology improvements, static power dissipation increasingly dominates. To address this issue, hardware vendors have equipped modern processors with several sleep states. We propose a set of leakage-aware energy management approaches that reduce the energy consumption of embedded real-time systems while respecting their real-time constraints. Our algorithms are based on the race-to-halt strategy, which tends to run the system at top speed in order to create long idle intervals, which are then used to deploy a sleep state. The effectiveness of our algorithms is illustrated with an extensive set of simulations that show a reduction in energy consumption of up to 8% over existing work at high utilization. The complexity of our algorithms is lower than that of state-of-the-art algorithms. We also eliminate assumptions made in related work that restrict the practical applicability of the respective algorithms. Moreover, a novel study of the relation between the use of sleep intervals and the number of pre-emptions is presented, based on a large set of simulation results, in which our algorithms reduce the observed number of pre-emptions in all cases. Our results show that sleep states in general can eliminate up to 30% of the overall number of pre-emptions when compared to the sleep-agnostic earliest-deadline-first algorithm.
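
As a minimal sketch of the race-to-halt idea described above (the sleep-state parameters and the selection rule are simplified assumptions, not the paper's algorithms), the processor runs the pending workload at top speed and, once an idle interval opens up, picks the deepest sleep state whose break-even time fits inside it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SleepState:
    name: str
    power: float            # power drawn while in this state (W)
    break_even_time: float  # minimum idle length for which entering pays off (s)

def pick_sleep_state(idle_interval: float, states: list,
                     idle_power: float) -> Optional[SleepState]:
    """Race-to-halt style decision (simplified): after running at top speed,
    the processor is left with an idle interval; enter the deepest sleep state
    whose break-even time fits inside that interval, otherwise stay idle."""
    candidates = [s for s in states if s.break_even_time <= idle_interval
                  and s.power < idle_power]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s.power)

# example with hypothetical sleep states
states = [SleepState("light", power=0.3, break_even_time=0.002),
          SleepState("deep", power=0.05, break_even_time=0.050)]
print(pick_sleep_state(0.010, states, idle_power=1.0).name)  # light
print(pick_sleep_state(0.200, states, idle_power=1.0).name)  # deep
```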

Relevance:

10.00%

Publisher:

Abstract:

Nowadays, many real-time operating systems discretize time based on a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient “boundary fair” algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its number of preemptions, migrations and the time spent making scheduling decisions increase linearly as the time resolution of the system improves.
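
As a rough sketch of the boundary-fair principle (heavily simplified; the fair distribution of the leftover fractional units, which is the core of BF/BF², is omitted), the scheduler only acts at boundaries, i.e. job arrival times and expected deadlines, and between two consecutive boundaries each task receives an integer number of time units close to its fluid share:

```python
from math import floor

def boundaries(release_times, deadlines, horizon):
    """Boundary-fair schedulers make decisions only at 'boundaries':
    job arrival times and (expected) deadlines within the horizon."""
    pts = {0, horizon}
    pts.update(t for t in release_times if t <= horizon)
    pts.update(d for d in deadlines if d <= horizon)
    return sorted(pts)

def mandatory_units(utilization, b_start, b_end):
    """Integer number of time units a task must receive between two
    consecutive boundaries to stay close to its fluid (proportional)
    allocation. The real BF/BF² algorithms additionally distribute the
    leftover fractional units fairly; that step is omitted in this sketch."""
    return floor(utilization * (b_end - b_start))

bnds = boundaries(release_times=[0, 3, 5], deadlines=[5, 8, 10], horizon=10)
print(bnds)                          # [0, 3, 5, 8, 10]
print(mandatory_units(0.6, 5, 8))    # 1
```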

Relevance:

10.00%

Publisher:

Abstract:

Consider the scheduling of real-time tasks on a multiprocessor where migration is forbidden. Specifically, consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform with two distinct types of processors. For this problem, we propose a new algorithm, LPC (task assignment based on solving a Linear Program with Cutting planes). The algorithm offers the following guarantee: for a given task set and platform, if a feasible task-to-processor assignment exists, then LPC also succeeds in finding such a feasible task-to-processor assignment, but on a platform in which each processor is 1.5× faster and which has three additional processors. For systems with a large number of processors, LPC has a better approximation ratio than state-of-the-art algorithms. To the best of our knowledge, this is the first work to develop a provably good real-time task assignment algorithm using cutting planes.
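
The sketch below sets up the LP relaxation of the two-type assignment problem (a fractional assignment). It is only an illustration of the kind of linear program an LPC-style approach starts from; the cutting-plane and rounding steps of the paper's algorithm are not shown, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fractional_assignment(util, m):
    """LP relaxation of two-type task assignment: util[i][k] is task i's
    utilization on a type-k processor, m[k] is the number of type-k
    processors. x[i][k] gives the (fractional) share of task i assigned to
    type k; an integral assignment would be obtained by rounding."""
    n, t = util.shape                       # t = 2 for a two-type platform
    nvar = n * t
    # each task is fully assigned: sum_k x[i][k] = 1
    A_eq = np.zeros((n, nvar))
    for i in range(n):
        A_eq[i, i * t:(i + 1) * t] = 1.0
    b_eq = np.ones(n)
    # capacity of each processor type: sum_i util[i][k] * x[i][k] <= m[k]
    A_ub = np.zeros((t, nvar))
    for k in range(t):
        for i in range(n):
            A_ub[k, i * t + k] = util[i, k]
    res = linprog(c=np.zeros(nvar), A_ub=A_ub, b_ub=m, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    return res.x.reshape(n, t) if res.success else None

util = np.array([[0.6, 0.9], [0.8, 0.3], [0.5, 0.5]])
print(fractional_assignment(util, m=[1, 1]))
```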

Relevance:

10.00%

Publisher:

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations of the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, corresponding to a trade-off parameter of 1 and ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
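
As a flavour of what a simple, safe WCTT bound looks like (a deliberately pessimistic simplification; the flow representation is an assumption, and the interference pruning performed by BP/BPC is not modelled), a packet's traversal time can be bounded by its no-load latency plus one blocking term per interfering flow on each shared link:

```python
def wctt_upper_bound(flow, flows):
    """Very pessimistic upper bound on the worst-case traversal time of a
    packet: no-load latency plus one maximal blocking term per interfering
    flow per shared link. BP/BPC prune interference scenarios that cannot
    occur, yielding tighter (but still safe) estimates."""
    bound = flow["no_load_latency"]
    for link in flow["route"]:
        for other in flows:
            if other is not flow and link in other["route"]:
                bound += other["transmission_time"]
    return bound

flows = [
    {"name": "f1", "route": ["A-B", "B-C"], "no_load_latency": 5, "transmission_time": 4},
    {"name": "f2", "route": ["B-C", "C-D"], "no_load_latency": 6, "transmission_time": 3},
]
print(wctt_upper_bound(flows[0], flows))   # 5 + 3 = 8
```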

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Bladder cancer is a significant health problem in rural areas of Africa and the Middle East where Schistosoma haematobium is prevalent, supporting an association between malignant transformation and infection by this blood fluke. Nevertheless, the molecular mechanisms linking these events are poorly understood. Bladder cancers in infected populations are generally diagnosed at a late stage since there is a lack of non-invasive diagnostic tools, reinforcing the need for early carcinogenesis markers. METHODOLOGY/PRINCIPAL FINDINGS: Forty-three formalin-fixed paraffin-embedded bladder biopsies of S. haematobium-infected patients, consisting of bladder tumours, tumour-adjacent mucosa and pre-malignant/malignant urothelial lesions, were screened for bladder cancer biomarkers. These included the oncoprotein p53, the tumour proliferation rate (Ki-67 > 17%), and the cell-surface cancer-associated glycans sialyl-Tn (sTn) and sialyl-Lewisa/x (sLea/sLex), involved in immune escape and metastasis. Bladder tumours of non-S. haematobium etiology and normal urothelium were used as controls. S. haematobium-associated benign/pre-malignant lesions present alterations in p53 and sLex that were also found in bladder tumours. Similar results were observed in non-S. haematobium-associated tumours, irrespective of their histological nature, denoting some common molecular pathways. In addition, most benign/pre-malignant lesions also expressed sLea. However, proliferative phenotypes were more prevalent in lesions adjacent to bladder tumours, while sLea was characteristic of isolated benign/pre-malignant lesions, suggesting it may be a biomarker of early carcinogenesis associated with the parasite. A correlation was observed between the frequency of the biomarkers in the tumour and the adjacent mucosa, with the exception of Ki-67. Most S. haematobium eggs embedded in the urothelium were also positive for sLea and sLex. Reinforcing the pathologic nature of the studied biomarkers, none was observed in the healthy urothelium. CONCLUSION/SIGNIFICANCE: This preliminary study suggests that p53 and sialylated glycans are surrogate biomarkers of bladder cancerization associated with S. haematobium, highlighting a missing link between infection and cancer development. Eggs of S. haematobium express sLea and sLex antigens in mimicry of human leukocyte glycosylation, which may play a role in colonization and disease dissemination. These observations may help the early identification of infected patients at a higher risk of developing bladder cancer and guide the future development of non-invasive diagnostic tests.

Relevance:

10.00%

Publisher:

Abstract:

The application of mathematical methods and computer algorithms to the analysis of economic and financial data series aims to give empirical descriptions of the hidden relations between many complex or unknown variables and systems. This strategy overcomes the requirement of building models based on a set of ‘fundamental laws’, which is the usual paradigm for studying phenomena in physics and engineering. In spite of this shortcut, the fact is that financial series prove hard to tackle, involving complex memory effects and an apparently chaotic behaviour. Several measures for describing these objects have been adopted by market agents but, due to their simplicity, they are not able to cope with the diversity and complexity embedded in the data. It is therefore important to propose new measures that, on the one hand, are highly interpretable by standard personnel and, on the other hand, are capable of capturing a significant part of the dynamical effects.
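
One classical example of such a measure (not one of the measures proposed in this work) is the Hurst exponent estimated by rescaled-range (R/S) analysis, which quantifies long-range memory in a series:

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent:
    H ~ 0.5 indicates no memory, H > 0.5 persistence, H < 0.5 anti-persistence."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            chunk = series[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # mean-adjusted cumulative sum
            r = dev.max() - dev.min()               # range of the cumulative sum
            s = chunk.std()                         # standard deviation of the chunk
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    # slope of log(R/S) against log(n) estimates H
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)   # i.i.d. returns carry no memory,
print(round(hurst_rs(returns), 2))    # so the estimate should be near 0.5
```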

Relevance:

10.00%

Publisher:

Abstract:

Mobile devices are embedded systems with very limited capacities that need to be considered when developing a client-server application, mainly due to the technical, ergonomic and economic implications for the mobile user. With the increasing popularity of mobile computing, many developers have faced problems caused by the low performance of these devices. In this paper, we discuss how to design and optimize client-server applications for wireless/mobile environments, presenting techniques to improve overall performance.

Relevance:

10.00%

Publisher:

Abstract:

The use of technology has grown over the last decades in the most diverse areas, whether in industry or in everyday life, and its benefits are increasingly evident. Sport is no different. Every day new developments appear that aim to improve the performance of those who practise physical activities, making it possible to reach results never thought of before. Furthermore, the use of technology in sport allows biomechanical data to be obtained that can be used both in training and in improving the athletes' quality of life, helping to prevent injuries, for example. This project is therefore applied to sport, namely to surfing, where the scarcity of scientific work is still considerable, combining electronic technology with the sport in order to quantify information that was so far unknown. Three basic performance factors were identified: balance, foot positioning and movement of the surfboard. These factors led to the development of a system capable of measuring them dynamically through the measurement of the plantar forces and of the rotation of the surfboard. Besides measuring these factors, the system can store the acquired data locally on a memory card for later analysis, and also transmit them over a wireless link, allowing the centre of plantar pressure, the rotation angles of the surfboard and the activation of the sensors to be visualized in real time. The device consists of an embedded electronic system composed of an ATMEGA1280 microcontroller, an analogue signal acquisition and conditioning circuit, an inertial measurement unit, an RN131 wireless communication module and a set of Flexiforce force sensors. The embedded firmware was developed in the C language. The Matlab software was used for data reception and real-time visualization. The tests performed showed that the system meets the proposed requirements, providing information about balance, through the centre of pressure; about foot positioning, through the distribution of the plantar pressures; and about the movement of the board in the pitch and roll axes, through the inertial unit. The mean force measurement error was -0.0012 ± 0.0064 N, while the minimum distance achieved in the wireless transmission was 100 m. The measured power consumption of the system was 330 mW.
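
One of the quantities the system reports in real time is the centre of plantar pressure. As a minimal sketch of that computation (the sensor coordinates and readings below are hypothetical, not the board's actual Flexiforce layout), the centre of pressure is the force-weighted average of the sensor positions:

```python
def centre_of_pressure(positions, forces):
    """Centre of plantar pressure as the force-weighted average of the
    sensor positions (a standard formulation)."""
    total = sum(forces)
    if total == 0:
        return None                       # no load on the board
    x = sum(p[0] * f for p, f in zip(positions, forces)) / total
    y = sum(p[1] * f for p, f in zip(positions, forces)) / total
    return x, y

# hypothetical sensor coordinates on the deck (metres) and instantaneous forces (N)
sensor_xy = [(0.30, 0.10), (0.30, -0.10), (-0.45, 0.08), (-0.45, -0.08)]
readings = [120.0, 80.0, 140.0, 60.0]
print(centre_of_pressure(sensor_xy, readings))
```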

Relevance:

10.00%

Publisher:

Abstract:

Digital oscilloscopes are used in many areas of knowledge and are indispensable instruments in electronic engineering. Thanks to the advent of Field Programmable Gate Arrays (FPGAs), reconfigurable measurement instruments, given their advantages, i.e. high performance, low cost and high flexibility, are increasingly an alternative to the instruments traditionally used in laboratories. With the goal of standardizing the access to and control of this type of instrument, this thesis describes the design and implementation of a reconfigurable digital oscilloscope based on the IEEE 1451.0 standard. Following an architecture based on this standard, the characteristics of the oscilloscope are described in a data structure called Transducer Electronic Data Sheet (TEDS), and its control is performed using a set of standardized commands. The oscilloscope implements a set of basic features and functionalities, all verified experimentally. These include a bandwidth of 575 kHz, a measurement range from 0.4 V to 2.9 V, and the possibility of defining a set of horizontal scales, the trigger level and slope, and the coupling mode with the circuit under analysis. Architecturally, the oscilloscope consists of a module specified in the Verilog hardware description language (HDL) and an interface developed in the Java® programming language. The module is embedded in an FPGA and implements all of the oscilloscope's processing, while the interface enables its control and the display of the measured signal. An Analogue-to-Digital (A/D) converter with a maximum sampling frequency of 1.5 MHz and 14 bits of resolution was used in the project; due to its limitations, a multi-stage interpolation system with digital filters had to be implemented.
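
As an illustration of the trigger level and slope feature, the sketch below finds the first sample at which a digitized waveform crosses a given level with the selected slope. This is only an illustrative software model; in the actual design this logic belongs to the Verilog module running on the FPGA.

```python
def find_trigger(samples, level, rising=True):
    """Index of the first sample where the signal crosses the trigger level
    with the selected slope (rising or falling)."""
    for i in range(1, len(samples)):
        prev, curr = samples[i - 1], samples[i]
        if rising and prev < level <= curr:
            return i
        if not rising and prev > level >= curr:
            return i
    return None                            # no trigger event in this record

wave = [0.5, 0.9, 1.4, 2.1, 2.6, 2.2, 1.3, 0.7]
print(find_trigger(wave, level=2.0, rising=True))   # 3
print(find_trigger(wave, level=2.0, rising=False))  # 6
```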

Relevance:

10.00%

Publisher:

Abstract:

Companies constantly need to improve their performance and results in terms of quality and efficiency. In the industrial context, there seems to be no doubt that the methodology applied, innovation, and medium and long-term capital investment are decisive for the growth of a company. Among the various strategies and methodologies applied in companies, the Lean philosophy emerges as a powerful process-improvement methodology. The objective of this work is the selection and application of one of the tools of the Lean methodology in an industrial unit. The 5S tool was selected for implementation in a company that processes and sells flat glass. Through the implementation of the 5S tool, waste was tackled by identifying and implementing several improvements in the process. Given the clear improvements observed, the aim is to extend the process to all areas of the company with a view to continuous improvement.