17 results for Intra prediction
at Instituto Politécnico do Porto, Portugal
Abstract:
Professional work submitted for the award of the Title of Specialist of the Instituto Politécnico do Porto, in the area of Human Resources, defended on 09-04-2012
Abstract:
The present paper reports the amount and estimated daily mineral intake of nine elements (Ca, Mg, K, Na, P, Fe, Mn, Cr and Ni) in commercial instant coffees and coffee substitutes (n = 49). Elements were quantified by high-resolution continuum source flame (HR-CS-FAAS) and graphite furnace (HR-CS-GFAAS) atomic absorption spectrometry, while phosphorus was evaluated by a standard vanadomolybdophosphoric acid colorimetric method. Instant coffees and coffee substitutes are rich in K, Mg and P (>100 mg/100 g dw), contain Na, Ca and Fe in moderate amounts (>1 mg/100 g), and trace levels of Cr and Ni. Among the samples analysed, plain instant coffees are richer in minerals (p < 0.001), except for Na and Cr. Blends of coffee substitutes (barley, malt, chicory and rye) with coffee (20–66%) present intermediate amounts, while lower quantities are found in substitutes without coffee, particularly in barley. From a nutritional point of view, the results indicate that the mean ingestion of two instant beverages per day (a total of 4 g of instant powder), either with or without coffee, cannot be regarded as an important source of minerals in the human diet, although it provides supplementation of some minerals, particularly Mg and Mn in instant coffees. Additionally, for authentication purposes, the correlations observed between some elements and the coffee percentage in the blends, with particular significance for Mg amounts, provide a potential tool for estimating the coffee content of substitute blends.
Abstract:
Modern real-time systems increasingly generate heavy, dynamic computational workloads, and it is becoming unrealistic to expect them to be implemented on uniprocessor systems. Indeed, the shift from single-processor to multiprocessor systems can be seen, both in the general-purpose and in the embedded domain, as an energy-efficient way of improving application performance. At the same time, the proliferation of multiprocessor platforms has turned parallel programming into a topic of great interest, with dynamic parallelism rapidly gaining popularity as a programming model. The idea behind this model is to encourage programmers to expose all opportunities for parallelism by simply indicating potentially parallel regions within their applications. All such annotations are treated by the system merely as hints, which may be ignored and replaced with equivalent sequential constructs by the language itself. Thus, how the computation is actually subdivided and mapped onto the various processors is the responsibility of the compiler and of the underlying computing system. By removing this burden from the programmer, programming complexity is considerably reduced, which usually translates into higher productivity. However, unless the underlying scheduling mechanism is simple and fast, keeping the overall overhead low, the benefits of generating such fine-grained parallelism are merely hypothetical. From this scheduling perspective, algorithms employing a work-stealing policy are increasingly popular, with proven efficiency in terms of time, space and communication requirements.
However, these algorithms handle neither timing constraints nor any other form of task prioritisation, which prevents them from being applied directly to real-time systems. Moreover, they are traditionally implemented in the language runtime, creating a two-level scheduling system in which the predictability essential to a real-time system cannot be guaranteed. This thesis describes how the work-stealing approach can be redesigned to meet real-time requirements while retaining the fundamental principles that have yielded such good results. Very briefly, the single conventional task-management queue (deque) is replaced by a queue of deques, ordered by increasing task priority. We then apply the well-known G-EDF dynamic scheduling algorithm on top, blend the rules of both, and thus our proposal is born: the RTWS scheduling algorithm. Taking advantage of the modularity offered by the Linux scheduler, RTWS is added as a new scheduling class, in order to assess in practice whether the proposed algorithm is viable, that is, whether it guarantees the desired efficiency and schedulability. Modifying the Linux kernel is a difficult task, owing to the complexity of its internal functions and the strong interdependencies among its subsystems. Nevertheless, one of the goals of this thesis was to make sure that RTWS is more than an interesting concept. Thus, a significant part of this document is devoted to discussing the implementation of RTWS and to exposing problematic situations, many of them not considered in theory, such as the mismatch between several synchronisation mechanisms.
The experimental results show that, compared with other practical work on dynamic scheduling of tasks with timing constraints, RTWS significantly reduces scheduling overhead through efficient and scalable control of migrations and context switches (at least up to 8 CPUs), while achieving good dynamic balancing of the system load, and doing so cheaply. However, during the evaluation a flaw was detected in the RTWS implementation: it gives up stealing work too easily, which causes idle periods on the CPU in question when overall system utilisation is low. Although the work focused on keeping scheduling cost low and achieving good data locality, system schedulability was never neglected. In fact, the proposed scheduling algorithm proved quite robust, missing no deadline in the experiments performed. We can therefore state that some priority inversion, caused by the BAS stealing sub-policy, does not compromise the schedulability goals, and even helps to reduce contention on the data structures. Even so, RTWS also supports a deterministic stealing sub-policy: PAS. The experimental evaluation, however, did not give a clear picture of the impact of one versus the other. Overall, we can nevertheless conclude that RTWS is a promising solution for the efficient scheduling of parallel tasks with timing constraints.
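The core data structure described above, a priority-ordered queue of deques with EDF-style dispatch and stealing at opposite ends, can be sketched as follows. This is a minimal illustration, not the kernel code: the class name `RTWSQueue` and the dictionary-of-deques layout are assumptions for exposition.

```python
from collections import deque

class RTWSQueue:
    """Illustrative two-level run-queue: one deque per absolute
    deadline (earlier deadline = higher priority under EDF)."""

    def __init__(self):
        self.levels = {}  # deadline -> deque of tasks

    def push(self, deadline, task):
        self.levels.setdefault(deadline, deque()).append(task)

    def pop_local(self):
        """The owner CPU works LIFO on the earliest-deadline deque."""
        for d in sorted(self.levels):
            if self.levels[d]:
                return self.levels[d].pop()
        return None

    def steal(self):
        """A thief CPU takes the oldest task (FIFO end) of the
        earliest-deadline deque, preserving the EDF ordering."""
        for d in sorted(self.levels):
            if self.levels[d]:
                return self.levels[d].popleft()
        return None

q = RTWSQueue()
q.push(10, "t1"); q.push(5, "t2"); q.push(5, "t3")
assert q.pop_local() == "t3"   # earliest deadline, LIFO end
assert q.steal() == "t2"       # earliest deadline, FIFO end
assert q.pop_local() == "t1"
```

The LIFO/FIFO asymmetry is the classic work-stealing trade-off: the owner keeps cache-hot work while thieves take the oldest, typically largest, pieces.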
Abstract:
Three commonly consumed and commercially valuable fish species (sardine, chub mackerel and horse mackerel) were collected from the Northeast and Eastern Central Atlantic Ocean in Portuguese waters over one year. Mercury, cadmium, lead and arsenic amounts were determined in muscle using graphite furnace and cold vapour atomic absorption spectrometry. Maximum mean levels of mercury (0.1715 ± 0.0857 mg/kg, ww) and arsenic (1.139 ± 0.350 mg/kg, ww) were detected in horse mackerel. The highest mean amounts of cadmium (0.0084 ± 0.0036 mg/kg, ww) and lead (0.0379 ± 0.0303 mg/kg, ww) were determined in chub mackerel and in sardine, respectively. Intra- and inter-specific variability of metal bioaccumulation was statistically assessed, and species and length proved to be the major influencing biometric factors, in particular for mercury and arsenic. Muscle metal concentrations are below the tolerable limits set by European Commission regulation and by the Food and Agriculture Organization of the United Nations/World Health Organization (FAO/WHO). However, estimation of non-carcinogenic and carcinogenic health risks by the target hazard quotient and target carcinogenic risk, established by the US Environmental Protection Agency, suggests that these species should be eaten in moderation, owing to possible hazards and carcinogenic risks derived from arsenic (in all analysed species) and mercury ingestion (in horse and chub mackerel).
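The target hazard quotient mentioned above follows the US EPA formulation, THQ = (EF · ED · FIR · C) / (RfD · BW · AT) · 10⁻³. A minimal sketch; all exposure parameters below (ingestion rate, body weight, reference dose) are invented for illustration and are not the study's values:

```python
def target_hazard_quotient(c_mg_per_kg, fir_g_day, rfd_mg_kg_day,
                           ef_days_yr=365, ed_years=70, bw_kg=70,
                           at_days=365 * 70):
    """US EPA target hazard quotient (THQ) for non-carcinogenic risk.
    c: contaminant concentration in fish muscle (mg/kg ww)
    fir: fish ingestion rate (g/day); the 1e-3 converts g to kg
    rfd: oral reference dose (mg/kg body weight/day)."""
    return (ef_days_yr * ed_years * fir_g_day * c_mg_per_kg * 1e-3) / \
           (rfd_mg_kg_day * bw_kg * at_days)

# Illustrative numbers only: mercury at the reported horse-mackerel mean
# (0.1715 mg/kg ww), a hypothetical 50 g/day intake, RfD = 1e-4 mg/kg/day.
thq = target_hazard_quotient(0.1715, 50, 1e-4)
# A THQ above 1 flags a potential non-carcinogenic health risk.
```

With the default lifetime exposure (EF · ED equals AT), the expression reduces to FIR · C · 10⁻³ / (RfD · BW).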
Abstract:
Geostatistics has been successfully used to analyse and characterise the spatial variability of environmental properties. Besides giving estimated values at unsampled locations, it provides a measure of the accuracy of the estimate, which is a significant advantage over traditional methods used to assess pollution. In this work, universal block kriging is used, for the first time, to model and map the spatial distribution of salinity measurements gathered by an Autonomous Underwater Vehicle in a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters, characterising its spatial variability in the vicinity of the discharge and estimating dilution. The results demonstrate that geostatistical methodology can provide good estimates of the dispersion of effluents, which are very valuable in assessing environmental impact and managing sea outfalls. Moreover, since accurate measurements of the plume’s dilution are rare, these studies may be very helpful in the future for validating dispersion models.
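The study uses universal block kriging; as a minimal illustration of the kriging idea (an estimate plus an error variance at an unsampled point), here is ordinary point kriging with an assumed exponential covariance model and made-up salinity samples:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng=50.0):
    """Ordinary point kriging with an exponential covariance model.
    Returns the estimate at x0 and the kriging (error) variance."""
    def cov(h):
        return sill * np.exp(-h / rng)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Kriging system: [C 1; 1' 0] [w; mu] = [c0; 1]
    # (weights sum to 1; mu is the Lagrange multiplier)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z
    variance = sill - w @ b[:n] - mu  # accuracy measure at x0
    return estimate, variance

# Made-up salinity samples (x, y in metres; z in psu)
xy = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
z = np.array([35.2, 34.8, 35.0, 33.9])
est, var = ordinary_kriging(xy, z, np.array([15.0, 15.0]))
```

The kriging variance is what distinguishes this from plain interpolation: it quantifies how trustworthy the map is away from the AUV track.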
Abstract:
A large part of the power dissipated in a system is generated by I/O devices. Increasingly, these devices provide power-saving mechanisms, inter alia to enhance battery life. While I/O device scheduling has been studied in the past for real-time systems, the use of energy resources by these scheduling algorithms can be improved. Existing approaches were crafted assuming a very large overhead for device transitions. Technology enhancements have allowed hardware vendors to reduce device transition overhead and energy consumption. We propose an intra-task device-scheduling algorithm for real-time systems that allows devices to be shut down while ensuring system schedulability. Our results show an energy gain of up to 90% when compared with the techniques proposed in the state of the art.
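Whether shutting a device down pays off during an idle interval is governed by its break-even time: sleeping saves energy only if the transition cost is amortised. A minimal sketch of that check (all device numbers are invented):

```python
def worth_shutting_down(idle_s, p_active_w, p_sleep_w,
                        trans_s, trans_energy_j):
    """Shut the device down only if sleeping (including the
    transition cost) uses less energy than staying active
    for the whole idle interval."""
    e_stay = p_active_w * idle_s
    e_sleep = trans_energy_j + p_sleep_w * max(0.0, idle_s - trans_s)
    return e_sleep < e_stay

# Invented device: 2 W active, 0.1 W asleep, 50 ms / 0.5 J to cycle.
assert worth_shutting_down(1.0, 2.0, 0.1, 0.05, 0.5)       # long idle: sleep
assert not worth_shutting_down(0.2, 2.0, 0.1, 0.05, 0.5)   # short idle: stay on
```

The smaller the transition overhead, the shorter the break-even time, which is exactly why the hardware trend cited above makes intra-task shut-down attractive.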
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform. Each processor is either of type-1 or type-2, and each task has a different execution time on each processor type. Jobs can migrate between processors of the same type (referred to as intra-type migration) but cannot migrate between processors of different types. We present a new scheduling algorithm, LP-Relax(THR), which offers the following guarantee: if a task set can be scheduled to meet deadlines by an optimal task-assignment scheme that allows intra-type migration, then LP-Relax(THR) also meets deadlines with intra-type migration if given processors 1/THR times as fast (referred to as its speed competitive ratio), where THR <= 2/3.
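The shape of the underlying assignment problem can be illustrated with a simple utilisation-based heuristic. To be clear, this is not LP-Relax(THR), whose guarantee comes from rounding a linear-programming relaxation; it is only a sketch showing that each task costs different utilisation on each processor type and must be packed within the capacity of each type:

```python
def greedy_two_type_assign(tasks, cap1, cap2):
    """tasks: list of (u1, u2) utilisations of each task on a
    type-1 / type-2 processor. cap1, cap2: total capacity of each
    type (number of processors, assuming migratory scheduling can
    use a type's processors as a shared capacity pool).
    Returns a per-task type assignment, or None if the heuristic
    fails (an optimal LP-based scheme might still succeed)."""
    load1 = load2 = 0.0
    assign = []
    for u1, u2 in tasks:
        # Prefer the type on which the task costs less utilisation.
        order = [(1, u1), (2, u2)] if u1 <= u2 else [(2, u2), (1, u1)]
        for typ, u in order:
            if typ == 1 and load1 + u <= cap1:
                load1 += u; assign.append(1); break
            if typ == 2 and load2 + u <= cap2:
                load2 += u; assign.append(2); break
        else:
            return None
    return assign

# Two processors of each type; per-type utilisations differ per task.
tasks = [(0.5, 1.2), (1.1, 0.4), (0.6, 0.7), (0.9, 0.3)]
print(greedy_two_type_assign(tasks, cap1=2.0, cap2=2.0))
```

Greedy packing like this can fail on instances a fractional (LP) assignment handles, which is precisely the gap the speed-up factor 1/THR compensates for.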
Abstract:
Adhesive joints are widely employed nowadays as a fast and effective joining process, and the respective strength-prediction techniques have also improved over the years. Cohesive Zone Models (CZMs) coupled with Finite Element Method (FEM) analyses surpass the limitations of stress and fracture criteria and allow damage to be modelled. CZMs require the energy release rates in tension (Gn) and shear (Gs) and the respective fracture energies in tension (Gnc) and shear (Gsc). Additionally, the cohesive strengths (tn0 for tension and ts0 for shear) must also be defined. In this work, the influence of the parameters of a triangular CZM used to model a thin adhesive layer is studied, to estimate their effect on the predictions. Conclusions were drawn regarding the accuracy of the simulation results under variations of each of these parameters.
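In pure tension, a triangular CZM is fully defined by three of the parameters listed above: an initial stiffness, the cohesive strength tn0 and the fracture energy Gnc. A sketch of the resulting traction-separation law (the numeric values are illustrative, not the study's calibration):

```python
def triangular_traction(delta, K, t0, Gc):
    """Traction for a triangular (bilinear) cohesive law.
    Rises with stiffness K to the peak t0, then softens linearly
    to zero at delta_f = 2*Gc / t0, so the area under the whole
    curve equals the fracture energy Gc."""
    delta0 = t0 / K            # separation at damage onset
    delta_f = 2.0 * Gc / t0    # separation at complete failure
    if delta <= delta0:
        return K * delta       # undamaged, linear-elastic branch
    if delta >= delta_f:
        return 0.0             # crack fully open
    return t0 * (delta_f - delta) / (delta_f - delta0)  # softening

# Illustrative values: K = 1e6 N/mm^3, tn0 = 20 MPa, Gnc = 0.4 N/mm
assert triangular_traction(0.05, 1e6, 20.0, 0.4) == 0.0  # past failure
```

Varying t0 at fixed Gc stretches or shortens the softening branch, which is why the two parameters affect the predictions differently.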
Abstract:
Although their physical properties make composites an attractive family of materials, machining them can cause several damage modes, such as delamination, fibre pull-out and thermal degradation. Minimising the axial thrust force during drilling reduces the probability of delamination onset, as demonstrated by analytical models based on linear elastic fracture mechanics (LEFM). A finite element model using solid elements from the ABAQUS® software library and interface elements incorporating a cohesive damage model was developed in order to simulate thrust forces and delamination onset during drilling. Thrust-force results for delamination onset are compared with existing analytical models.
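The classical LEFM result of the kind referred to above is the Hocheng-Dharan circular-plate model, which gives the critical thrust force for delamination onset as F = π · sqrt(8 · G_Ic · E · h³ / (3 · (1 − ν²))). A quick sketch with made-up laminate properties:

```python
import math

def critical_thrust_force(g_ic, e_mod, h, nu):
    """Hocheng-Dharan LEFM estimate of the thrust force at which
    drilling-induced delamination starts.
    g_ic: mode I fracture toughness (N/mm)
    e_mod: elastic modulus (MPa)
    h: uncut plate thickness under the drill (mm)
    nu: Poisson's ratio."""
    return math.pi * math.sqrt(8.0 * g_ic * e_mod * h ** 3 /
                               (3.0 * (1.0 - nu ** 2)))

# Made-up carbon/epoxy values: G_Ic = 0.2 N/mm, E = 60 GPa, nu = 0.3
f_crit = critical_thrust_force(0.2, 60e3, 0.2, 0.3)  # 0.2 mm left uncut
```

The h³ dependence is the practical message: the last plies before exit tolerate very little thrust, which is when delamination typically occurs.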
Abstract:
This work reports on the experimental and numerical study of the bending behaviour of two-dimensional adhesively bonded scarf repairs of carbon-epoxy laminates, bonded with the ductile adhesive Araldite 2015®. Scarf angles varying from 2° to 45° were tested. The experimental work was used to validate a numerical Finite Element analysis using ABAQUS® and a methodology developed by the authors to predict the strength of bonded assemblies. This methodology consists of replacing the adhesive layer with cohesive elements, including mixed-mode criteria to deal with the mixed-mode behaviour usually observed in structures. Trapezoidal laws in pure modes I and II were used to account for the ductility of the adhesive. The cohesive laws in pure modes I and II were determined with Double Cantilever Beam and End-Notched Flexure tests, respectively, using an inverse method. Since interlaminar and transverse intralaminar failures of the carbon-epoxy components also occurred in some regions of the experiments, cohesive laws to simulate these failure modes were also obtained experimentally with a similar procedure. A good correlation with the experiments was found for the elastic stiffness, maximum load and failure mode of the repairs, showing that this methodology accurately simulates the mechanical behaviour of bonded assemblies.
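A trapezoidal law differs from the triangular one by a stress plateau between damage onset and final softening, which is what captures the adhesive's ductility. A sketch in pure mode I; the separations d1 < d2 < d3 and the strength are invented values, not the inversely identified ones:

```python
def trapezoidal_traction(delta, t0, d1, d2, d3):
    """Trapezoidal cohesive law: linear rise to t0 at d1,
    plateau at t0 until d2, linear softening to zero at d3."""
    if delta < 0.0 or delta >= d3:
        return 0.0
    if delta <= d1:
        return t0 * delta / d1
    if delta <= d2:
        return t0                     # ductile plateau
    return t0 * (d3 - delta) / (d3 - d2)

def toughness(t0, d1, d2, d3):
    """Area under the trapezoid = fracture energy Gc."""
    return 0.5 * t0 * (d3 + d2 - d1)

# Invented mode I parameters: t0 = 20 MPa, d1/d2/d3 in mm
gc = toughness(20.0, 0.01, 0.05, 0.1)
```

In an inverse method, such as the one described above, parameters like these are iterated until the simulated load-displacement curve matches the DCB or ENF test data.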
Abstract:
Polyolefins are especially difficult to bond owing to their non-polar, non-porous and chemically inert surfaces. Acrylic adhesives used in industry are particularly suited to bonding these materials, including many grades of polypropylene (PP) and polyethylene (PE), without special surface preparation. In this work, the tensile strength of single-lap PE and mixed joints bonded with an acrylic adhesive was investigated. The mixed joints included PE with aluminium (AL) or carbon fibre reinforced plastic (CFRP) substrates. The PE substrates were only cleaned with isopropanol, which assured cohesive failures. For the PE CFRP joints, three different surface preparations were employed for the CFRP substrates: cleaning with acetone, abrasion with 100-grit sandpaper and peel-ply finishing. In the PE AL joints, the AL bonding surfaces were prepared by the following methods: cleaning with acetone, abrasion with 180- and 320-grit sandpapers, grit blasting and chemical etching with chromic acid. After abrasion of the CFRP and AL substrates, the surfaces were always cleaned with acetone. The tensile strengths were compared with numerical results from ABAQUS® and a mixed-mode (I+II) cohesive damage model. A good agreement was found between the experimental and numerical results, except for the PE AL joints, since the AL surface treatments proved not to be effective.
Abstract:
The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting, owing to the reduction of stress concentrations, the reduced weight penalty and easy manufacturing, amongst other advantages. In past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, through strength-of-materials or fracture-mechanics-based criteria. Cohesive zone models (CZMs) have already proved to be an effective tool for modelling damage growth, surpassing some limitations of the aforementioned techniques. Despite this, they still restrict damage growth to predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow discontinuities to grow within bulk solids along an arbitrary path by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested in the simulation of adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and limitations for this specific purpose.
Abstract:
The ever-increasing need for extra functionality in a single embedded system demands extra Input/Output (I/O) devices, which are usually connected externally and are expensive in terms of energy consumption. To reduce their energy consumption, these devices are equipped with power-saving mechanisms. While I/O device scheduling for real-time (RT) systems with such power-saving features has been studied in the past, the use of energy resources by these scheduling algorithms can be improved. Technology enhancements in the semiconductor industry have allowed hardware vendors to reduce device transition and energy overheads. The decrease in the overhead of sleep transitions has opened new opportunities to further reduce device energy consumption. In this research effort, we propose an intra-task device-scheduling algorithm for real-time systems that wakes up a device on demand and reduces its active time while ensuring system schedulability. This intra-task device-scheduling algorithm is extended to devices with multiple sleep states, to further minimise the overall device energy consumption of the system. The proposed algorithms have lower complexity than the conservative inter-task device-scheduling algorithms. The system model used relaxes some of the assumptions commonly made in the state of the art that restrict its practical relevance. Apart from the aforementioned advantages, the proposed algorithms are shown to achieve substantial energy savings.
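With multiple sleep states, the natural per-interval decision is to pick, among the states whose wake-up transition still fits in the predicted idle interval, the one with the lowest total energy. A sketch of that selection; the state table below is invented, not a real device's:

```python
def pick_sleep_state(idle_s, states):
    """states: list of (power_w, trans_s, trans_energy_j) tuples,
    one per sleep state. Return the (state, energy) pair minimising
    energy over the idle interval, considering only states whose
    transition time fits inside it; None if no state fits."""
    best = None
    for power, trans, energy in states:
        if trans > idle_s:
            continue  # waking up would overrun the idle interval
        cost = energy + power * (idle_s - trans)
        if best is None or cost < best[1]:
            best = ((power, trans, energy), cost)
    return best

# Invented device: a shallow state (0.5 W, 1 ms, 0.01 J) and a
# deep state (0.05 W, 20 ms, 0.2 J).
states = [(0.5, 0.001, 0.01), (0.05, 0.020, 0.2)]
state, cost = pick_sleep_state(0.5, states)  # long idle: deep state wins
```

Short intervals select shallow states and long ones select deep states, which is how multiple sleep states beat a single on/off choice.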
Abstract:
This article investigates limit cycle (LC) prediction in systems with backlash by means of the describing function (DF) when discrete fractional-order (FO) algorithms are used. The DF is an approximate method that gives good estimates of LCs. The implementation of FO controllers requires rational approximations, but such realisations produce distinct types of dynamic behaviour. This study analyses the accuracy of LC prediction, namely of amplitude and frequency, for several different algorithms. To illustrate the problem, we use FO-PID algorithms in the control of systems with backlash.
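The describing function of a backlash element can be obtained numerically by driving the nonlinearity with a sinusoid and taking the first harmonic of the steady-state output, which sidesteps the closed-form formula. A sketch for a unit-slope backlash modelled as a play operator; the amplitude and half-play values are illustrative:

```python
import numpy as np

def backlash_df(amplitude, b, n=4096):
    """Numerical describing function N(A) of a unit-slope backlash
    (play operator with half-width b): ratio of the first Fourier
    harmonic of the steady-state output to that of the input
    A*sin(t)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    x = amplitude * np.sin(t)
    y = np.empty(n)
    prev = 0.0
    for _ in range(3):                 # repeat cycles to reach steady state
        for i, xi in enumerate(x):
            # Play operator: output is dragged along once the input
            # has taken up the slack b on either side.
            prev = min(max(prev, xi - b), xi + b)
            y[i] = prev
    c1 = (2.0 / n) * np.sum(y * np.exp(-1j * t))
    return c1 / (-1j * amplitude)      # first harmonic of A*sin(t) is -1j*A

# Sanity check: with no play, the element is a unit gain.
assert abs(backlash_df(1.0, 0.0) - 1.0) < 1e-9
N = backlash_df(1.0, 0.2)  # with play: gain below 1 plus a phase lag
```

In a DF-based LC prediction, such an N(A) is intersected with -1/G(jω) of the linear part, here the FO-PID loop, to estimate the limit cycle's amplitude and frequency.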