910 results for planar graph


Relevance: 20.00%

Abstract:

This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present several novel results relating to its spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues, and we prove a similar result when only an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames, first showing that scaling a frame is equivalent to an optimization problem with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented: linear objectives encourage sparse scalings, while barrier objective functions force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case; after a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, give some background on Electron Energy-Loss Spectroscopy (EELS), design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem, present an algorithm for its solution, and discuss the differences from RPCA that make theoretical guarantees difficult.
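The scalability property above reduces to a nonnegative linear feasibility problem: a frame {f_i} in R^d is scalable exactly when there exist c_i = w_i^2 >= 0 with sum_i c_i f_i f_i^T = I. A minimal numerical sketch of this test (the function name and the example frame are ours, purely illustrative, not taken from the dissertation):

```python
import numpy as np
from scipy.optimize import nnls

def scaling_weights(F):
    """Columns of F are frame vectors f_i in R^d. Solve
    sum_i c_i f_i f_i^T = I with c_i >= 0 (then w_i = sqrt(c_i))."""
    d, m = F.shape
    # Column i of A is vec(f_i f_i^T); the target b is vec(I).
    A = np.column_stack([np.outer(F[:, i], F[:, i]).ravel() for i in range(m)])
    b = np.eye(d).ravel()
    c, resid = nnls(A, b)      # nonnegative least squares
    return c, resid            # resid ~ 0  <=>  the frame is scalable

# Example: the unit-norm Mercedes-Benz frame in R^2 is scalable (c_i = 2/3).
F = np.array([[1.0, -0.5,          -0.5],
              [0.0,  np.sqrt(3)/2, -np.sqrt(3)/2]])
print(scaling_weights(F))
```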

Relevance: 20.00%

Abstract:

Graphs are powerful tools to describe social, technological and biological networks, with nodes representing agents (people, websites, genes, etc.) and edges (or links) representing relations (or interactions) between agents. Examples of real-world networks include social networks, the World Wide Web, collaboration networks and protein networks. Researchers often model these networks as random graphs. In this dissertation, we study a recently introduced social network model, the Multiplicative Attribute Graph (MAG) model, which takes into account the randomness of nodal attributes in the process of link formation (i.e., the probability of a link existing between two nodes depends on their attributes). Kim and Leskovec, who defined the model, have claimed that it exhibits some of the properties a real-world social network is expected to have. Focusing on a homogeneous version of this model, we investigate the existence of zero-one laws for graph properties, e.g., the absence of isolated nodes, graph connectivity and the emergence of triangles. We obtain conditions on the parameters of the model under which these properties occur with high or vanishing probability as the number of nodes grows unboundedly large. In that regime, we also investigate the property of triadic closure and the nodal degree distribution.
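For readers unfamiliar with the model, a homogeneous MAG is straightforward to sample from its definition: every node draws i.i.d. binary attributes, and a link appears with probability equal to the product of per-attribute affinities. A hedged sketch (parameter names are ours, and this brute-force sampler is only practical for small graphs):

```python
import numpy as np

def mag_sample(n, k, mu, theta, seed=0):
    """Homogeneous MAG: each node draws k i.i.d. Bernoulli(mu) attributes;
    edge (u, v) appears with probability prod_j theta[a_u[j], a_v[j]].
    Parameter names are illustrative."""
    rng = np.random.default_rng(seed)
    attrs = (rng.random((n, k)) < mu).astype(int)
    adj = np.zeros((n, n), dtype=bool)
    for u in range(n):
        for v in range(u + 1, n):
            p = theta[attrs[u], attrs[v]].prod()   # product over attributes
            adj[u, v] = adj[v, u] = rng.random() < p
    return adj

theta = np.array([[0.8, 0.4],
                  [0.4, 0.6]])                 # per-attribute affinity matrix
adj = mag_sample(n=200, k=6, mu=0.5, theta=theta)
print("isolated nodes:", int((adj.sum(axis=1) == 0).sum()))
```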

Relevance: 20.00%

Abstract:

In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as "stoquastic" or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, i.e., the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I provide an instance of an optimization problem that is easy to solve classically but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical, and we focus on bounding the spectral gap. Our primary tool for doing so is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used also adapt methods recently employed by Andrews and Clutterbuck to prove the long-standing "Fundamental Gap Conjecture".
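As a toy instance of the Dirichlet-eigenvalue computations mentioned above, the gap of the Dirichlet Laplacian on a path subgraph can be computed numerically and checked against the standard closed form lambda_k = 2 - 2cos(k*pi/(n+1)); this sketch is ours, not the dissertation's:

```python
import numpy as np

def dirichlet_gap(n):
    """Gap between the two lowest Dirichlet eigenvalues of the graph
    Laplacian on a path with n interior vertices (endpoints removed)."""
    L = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    evals = np.linalg.eigvalsh(L)          # ascending eigenvalues
    return evals[1] - evals[0]

n = 100
# closed form: lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1, 2
exact = (2 - 2*np.cos(2*np.pi/(n+1))) - (2 - 2*np.cos(np.pi/(n+1)))
print(dirichlet_gap(n), exact)             # the two values agree
```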

Relevance: 20.00%

Abstract:

Verbal fluency is the ability to produce a satisfying sequence of spoken words during a given time interval. The core of verbal fluency lies in the capacity to manage the executive aspects of language. The standard scores of the semantic verbal fluency test are broadly used in the neuropsychological assessment of the elderly, and different analytical methods are likely to extract even more information from the data generated by this test. Graph theory, a mathematical approach to analyzing relations between items, represents a promising tool to understand a variety of neuropsychological states. This study reports a graph analysis of data generated by the semantic verbal fluency test in cognitively healthy elderly (NC), patients with Mild Cognitive Impairment of the amnestic (aMCI) and amnestic multiple domain (a+mdMCI) subtypes, and patients with Alzheimer's disease (AD). Each sequence of words was represented as a speech graph in which every word corresponded to a node and temporal links between consecutive words were represented by directed edges. To characterize the structure of the data we calculated 13 speech graph attributes (SGAs). The individuals were compared when divided into three (NC, MCI, AD) and four (NC, aMCI, a+mdMCI, AD) groups. When the three groups were compared, significant differences were found in the standard measure of correct words produced and in three SGAs: diameter, average shortest path, and network density. The SGAs sorted the elderly groups with good specificity and sensitivity. When the four groups were compared, they differed significantly in network density, except between the two MCI subtypes and between NC and aMCI. The diameter of the network and the average shortest path differed significantly between NC and AD, and between aMCI and AD. The SGAs sorted the elderly into their groups with good specificity and sensitivity, performing better than the standard score of the task. These findings support a new methodological framework to assess the strength of semantic memory through the verbal fluency task, with the potential to amplify the predictive power of this test. Graph analysis is likely to become clinically relevant in neurology and psychiatry, and may be particularly useful for the differential diagnosis of the elderly.
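The graph construction itself is easy to reproduce. Below is a minimal sketch with networkx that builds a speech graph from a word sequence and computes three of the attributes named above; the full set of 13 SGAs and the study's exact conventions (e.g. how repeated transitions are handled) are not reproduced here, and the diameter and average shortest path are taken on the undirected view for simplicity:

```python
import networkx as nx

def speech_graph(words):
    """One node per distinct word; a directed edge for every temporal
    transition between consecutive words in the response."""
    G = nx.DiGraph()
    G.add_edges_from(zip(words, words[1:]))
    return G

words = "dog cat horse cow dog pig cat dog".split()
G = speech_graph(words)
U = G.to_undirected()
print("density:", nx.density(G))
print("diameter:", nx.diameter(U))                    # needs a connected graph
print("avg shortest path:", nx.average_shortest_path_length(U))
```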

Relevance: 20.00%

Abstract:

Conventional Si complementary-metal-oxide-semiconductor (CMOS) scaling is fast approaching its limits. Extending the logic device roadmap for future enhancements in transistor performance requires non-Si materials and new device architectures. III-V materials, owing to their superior electron transport properties, are well poised to replace Si as the channel material beyond the 10 nm technology node, mitigating the performance loss that Si transistors suffer from further reductions in supply voltage to minimise power dissipation in logic circuits. However, several key challenges need to be addressed before III-Vs can be employed in CMOS: a high-quality dielectric/III-V gate stack, a low-resistance source/drain (S/D) technology, heterointegration onto a Si platform, and a viable III-V p-metal-oxide-semiconductor field-effect transistor (MOSFET). This thesis specifically addressed the development and demonstration of planar III-V p-MOSFETs, to complement the n-MOSFET and thereby enable an all-III-V CMOS technology. This work explored the application of the InGaAs and InGaSb material systems as the channel, in conjunction with Al2O3/metal gate stacks, for p-MOSFET development based on the buried-channel flatband device architecture. The body of work undertaken comprised material development, process module development, and integration into a robust fabrication flow for the demonstration of p-channel devices. The parameter space in the design of the device layer structure, based around the III-V channel/barrier material options of Inx≥0.53Ga1-xAs/In0.52Al0.48As and Inx≥0.1Ga1-xSb/AlSb, was systematically examined to improve hole channel transport. A mobility of 433 cm2/Vs, the highest room-temperature hole mobility of any InGaAs quantum-well channel reported to date, was obtained for the In0.85Ga0.15As (2.1% strain) structure. S/D ohmic contacts were developed based on thermally annealed Au/Zn/Au metallisation and validated using transmission-line-model test structures. The effects of metallisation thickness, diffusion barriers and de-oxidation conditions were examined; contacts to InGaSb-channel structures were found to be sensitive to de-oxidation conditions. A fabrication process, based on a lithographically aligned double ohmic patterning approach, was realised for deep-submicron scaling of the gate-to-source/drain gap (Lside) to minimise the access resistance, thereby mitigating the effects of parasitic S/D series resistance on transistor performance. The developed process yielded gaps as small as 20 nm. For high-k integration on GaSb, ex-situ ammonium sulphide ((NH4)2S) treatments, in the range 1%-22%, for 10 min at 295 K were systematically explored for improving the electrical properties of the Al2O3/GaSb interface. Electrical and physical characterisation indicated the 1% treatment to be most effective, with interface trap densities in the range of 4-10×10^12 cm^-2 eV^-1 in the lower half of the bandgap. An extended study, comprising additional immersion times at each sulphide concentration, was further undertaken to determine the surface roughness and the etching nature of the treatments on GaSb. A number of p-MOSFETs based on the III-V channels with the most promising hole transport, integrating the developed process modules, were successfully demonstrated in this work.
Although the non-inverted InGaAs-channel devices showed good current modulation and switch-off characteristics, several aspects of performance were non-ideal: depletion-mode operation, modest drive current (Id,sat = 1.14 mA/mm), double-peaked transconductance (gm = 1.06 mS/mm), high subthreshold swing (SS = 301 mV/dec) and high on-resistance (Ron = 845 kΩ·μm). Despite demonstrating substantial improvement in the on-state metrics of Id,sat (11×), gm (5.5×) and Ron (5.6×), inverted devices did not switch off. Scaling the gate-to-source/drain gap (Lside) from 1 μm down to 70 nm improved Id,sat (72.4 mA/mm) by a factor of 3.6 and gm (25.8 mS/mm) by a factor of 4.1 in inverted InGaAs-channel devices. Well-controlled current modulation and good saturation behaviour were observed for InGaSb-channel devices. In the on-state, In0.3Ga0.7Sb-channel (Id,sat = 49.4 mA/mm, gm = 12.3 mS/mm, Ron = 31.7 kΩ·μm) and In0.4Ga0.6Sb-channel (Id,sat = 38 mA/mm, gm = 11.9 mS/mm, Ron = 73.5 kΩ·μm) devices outperformed the InGaAs-channel devices; however, these devices could not be switched off. These findings indicate that III-V p-MOSFETs based on InGaSb rather than InGaAs channels are better suited as the p-channel option for post-Si CMOS.

Relevance: 20.00%

Abstract:

In this paper we present a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smooth filter based on the covariance of the estimate is employed to handle uncertainty in unconstrained scenarios (e.g. corridors). Simulated and real experiments have been performed to compare our approach with two prominent scan matchers and with wheel odometry. Quantitative and qualitative results demonstrate the superior performance of our approach, which, together with its very low computational cost (0.9 milliseconds on a single CPU core), makes it suitable for robotic applications that require planar odometry. To this end, we also provide the code so that the robotics community can benefit from it.
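The robust minimization can be pictured as iteratively reweighted least squares over the stacked per-point constraints. The sketch below is generic IRLS with a Cauchy-type weight, where v = (vx, vy, omega) is the planar sensor velocity; the exact form of each linearized range-flow constraint (the rows of A and entries of b) follows the paper and is assumed given:

```python
import numpy as np

def robust_velocity(A, b, c=0.2, iters=15):
    """IRLS with Cauchy weights: approximately minimize sum_i rho(a_i.v - b_i)
    over v = (vx, vy, omega). Each row of A and entry of b is one
    linearized range-flow constraint (assumed given)."""
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ v - b
        w = 1.0 / (1.0 + (r / c)**2)         # Cauchy-type weights
        Aw = A * w[:, None]
        v = np.linalg.solve(Aw.T @ A, Aw.T @ b)
    return v

# synthetic demo: 200 constraints around a true velocity, 20% outliers
rng = np.random.default_rng(0)
v_true = np.array([0.3, -0.1, 0.05])
A = rng.normal(size=(200, 3))
b = A @ v_true + 0.01*rng.normal(size=200)
b[:40] += rng.normal(scale=2.0, size=40)     # gross outliers
print(robust_velocity(A, b))                 # close to v_true
```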

Relevance: 20.00%

Abstract:

A weighted Bethe graph $B$ is obtained from a weighted generalized Bethe tree by identifying each set of children with the vertices of a graph belonging to a family $F$ of graphs. The operation of identifying the root vertex of each of $r$ weighted Bethe graphs with the vertices of a connected graph $\mathcal{R}$ of order $r$ is introduced as the $\mathcal{R}$-concatenation of a family of $r$ weighted Bethe graphs. It is shown that the Laplacian eigenvalues (when $F$ contains arbitrary graphs), as well as the signless Laplacian and adjacency eigenvalues (when the graphs in $F$ are all regular), of the $\mathcal{R}$-concatenation of a family of weighted Bethe graphs can be computed, in a unified way, using the stable and low-cost numerical methods available for the eigenvalues of symmetric tridiagonal matrices. Unlike previous results on this topic, the more general context of families of distinct weighted Bethe graphs is considered here.
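The computational payoff is that everything reduces to symmetric tridiagonal eigenproblems, for which stable, low-cost solvers are readily available, e.g. in SciPy (the matrix entries below are illustrative placeholders, not values from the paper):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# diagonal and off-diagonal of a symmetric tridiagonal matrix
d = np.array([2.0, 3.0, 2.5, 4.0, 3.5])
e = np.array([1.0, 0.5, 1.5, 0.8])
evals = eigh_tridiagonal(d, e, eigvals_only=True)   # stable O(n^2) or better
print(evals)
```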

Relevance: 20.00%

Abstract:

The presence of gap junction coupling among neurons of the central nervous system has been appreciated for some time now. In recent years there has been an upsurge of interest from the mathematical community in understanding the contribution of these direct electrical connections between cells to large-scale brain rhythms. Here we analyze a class of exactly soluble single-neuron models, capable of producing realistic action potential shapes, that can be used as the basis for understanding dynamics at the network level. This work focuses on planar piecewise-linear models that can mimic the firing response of several different cell types. Under constant current injection, the periodic response and the phase response curve (PRC) are calculated in closed form. A simple formula for the stability of a periodic orbit is found using Floquet theory. From the calculated PRC and the periodic orbit, a phase interaction function is constructed that allows the investigation of phase-locked network states using the theory of weakly coupled oscillators. For large networks with global gap junction connectivity we develop a theory of strong-coupling instabilities of the homogeneous, synchronous and splay states. For a piecewise-linear caricature of the Morris-Lecar model, with oscillations arising from a homoclinic bifurcation, we show that large-amplitude oscillations in the mean membrane potential are organized around such unstable orbits.
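For the piecewise-linear models of the paper the PRC is available in closed form; for intuition, the generic direct-perturbation estimate of a PRC for a planar oscillator is sketched below, with the FitzHugh-Nagumo model standing in for the piecewise-linear caricature (all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, I=0.5):
    v, w = x
    return [v - v**3/3 - w + I, 0.08*(v + 0.7 - 0.8*w)]

def spike(t, x):
    return x[0] - 1.0
spike.direction = 1            # record upward crossings of v = 1 only

# settle onto the limit cycle; estimate the period and an on-cycle state
s = solve_ivp(f, (0, 400), [0.0, 0.0], events=spike, max_step=0.05)
T = np.diff(s.t_events[0])[-3:].mean()
x_spk = s.y_events[0][-1]      # state at the last recorded spike

def prc(theta, eps=1e-3):
    a = solve_ivp(f, (0, theta*T), x_spk, max_step=0.05)      # reach phase theta
    x = a.y[:, -1] + np.array([eps, 0.0])                     # small kick to v
    b = solve_ivp(f, (0, 2*T), x, events=spike, max_step=0.05)
    return ((1 - theta)*T - b.t_events[0][0]) / eps           # phase advance per kick

print([round(prc(th), 2) for th in (0.1, 0.3, 0.5, 0.7, 0.9)])
```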

Relevance: 20.00%

Abstract:

Planar cell polarity (PCP) occurs in the epithelia of many animals and can lead to the alignment of hairs, bristles and feathers; physiologically, it can organise ciliary beating. Here we present two approaches to modelling this phenomenon. The aim is to discover the basic mechanisms that drive PCP while keeping the models mathematically tractable. We present a feedback-and-diffusion model, in which adjacent cell sides of neighbouring cells are coupled by a negative feedback loop and diffusion acts within the cell. This approach can give rise to polarity, but also to period-two patterns. Polarisation arises via an instability, provided the feedback is sufficiently strong and the diffusion sufficiently weak. We also discuss a conservative model in which proteins within a cell are redistributed depending on the amount of protein in the neighbouring cells, coupled with intracellular diffusion; in this case polarity can arise from weakly polarised initial conditions, or via a wave, provided the diffusion is weak enough. Both models can overcome small anomalies in the initial conditions. Furthermore, the range of the effects of groups of cells with properties different from the surrounding cells depends on the strength of the initial global cue and on the intracellular diffusion.
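One plausible minimal instantiation of the feedback-and-diffusion mechanism (our toy equations, not the paper's): each cell i carries protein levels L[i] and R[i] on its two sides, the level on one side of a junction inhibits production on the facing side of the neighbouring cell, and intracellular diffusion D exchanges L[i] and R[i]. Whether a polarised or period-two pattern emerges depends, as stated above, on the feedback strength and on D:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, D, beta = 30, 0.05, 5.0             # cells, diffusion, feedback strength
inhib = lambda u: beta/(1.0 + u**2)    # negative feedback across a junction

def rhs(t, y):
    L, R = y[:N], y[N:]
    dL = inhib(np.roll(R, 1)) - L + D*(R - L)    # R of left neighbour inhibits L
    dR = inhib(np.roll(L, -1)) - R + D*(L - R)   # L of right neighbour inhibits R
    return np.concatenate([dL, dR])

# nearly uniform initial conditions with a small random perturbation
y0 = 0.5 + 0.01*np.random.default_rng(1).standard_normal(2*N)
sol = solve_ivp(rhs, (0, 500), y0, max_step=0.5)
polarity = sol.y[N:, -1] - sol.y[:N, -1]         # R - L in each cell
print(np.sign(polarity))
```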

Relevance: 20.00%

Abstract:

Introduction: Bone scintigraphy is one of the most frequent examinations in Nuclear Medicine. This medical imaging modality requires an appropriate balance between image quality and radiation dose: the acquired images must contain the minimum number of counts necessary to reach a quality considered sufficient for diagnostic purposes. Objective: The main objective of this study is to apply the Enhanced Planar Processing (EPP) software to bone scintigraphy examinations of patients with breast and prostate carcinoma presenting bone metastases, and thereby to evaluate the performance of the EPP algorithm in clinical practice, in terms of image quality and diagnostic confidence, when the acquisition time is reduced by 50%. Material and Methods: This investigation took place in the department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre. Fifty-one patients with suspected bone metastases were administered 500 MBq of technetium-99m-labelled methylene diphosphonate. Each patient underwent two image acquisitions: the first followed the department's standard protocol (scan speed = 8 cm/min), and in the second the acquisition time was halved (scan speed = 16 cm/min). The images acquired with the second protocol were processed with the EPP algorithm. All images underwent both objective and subjective evaluation. For the subjective analysis, three Nuclear Medicine physicians evaluated the images in terms of lesion detectability, image quality, diagnostic acceptability, lesion localization and diagnostic confidence. For the objective evaluation, two regions of interest were selected, one over the middle third of the femur and the other over the adjacent soft tissue, to obtain the signal-to-noise ratio, contrast-to-noise ratio and coefficient of variation. Results: The results show that the images processed with the EPP software give physicians sufficient diagnostic information for the detection of metastases, since no statistically significant differences were found (p>0.05). Moreover, inter-observer agreement between these images and the images acquired with the standard protocol was 95% (k=0.88). Regarding image quality, on the other hand, statistically significant differences were found when the image modalities were compared with each other (p≤0.05). Regarding diagnostic acceptability, no statistically significant differences were found between the images acquired with the standard protocol and the images processed with the EPP software (p>0.05), with an inter-observer agreement of 70.6%. However, statistically significant differences were found between the images acquired with the standard protocol and those acquired with the second protocol but not processed with the EPP software (p≤0.05). Furthermore, no statistically significant differences (p>0.05) were found in terms of signal-to-noise ratio, contrast-to-noise ratio and coefficient of variation between the images acquired with the standard protocol and the images processed with EPP.
Conclusion: From the results obtained in this study, it can be concluded that the EPP algorithm, developed by Siemens, offers the possibility of reducing the acquisition time by 50% while maintaining an image quality considered sufficient for diagnostic purposes. Besides increasing patient satisfaction, the use of this technology is highly advantageous for the department's workflow.
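The objective figures of merit described above are the usual ROI statistics; a minimal sketch of their computation (the study's exact conventions may differ):

```python
import numpy as np

def roi_metrics(signal, background):
    """signal: pixel counts in the mid-femur ROI; background: pixel counts
    in the adjacent soft-tissue ROI."""
    snr = signal.mean() / background.std()                        # signal-to-noise
    cnr = (signal.mean() - background.mean()) / background.std()  # contrast-to-noise
    cv = signal.std() / signal.mean()                             # coefficient of variation
    return snr, cnr, cv

# illustrative Poisson-count ROIs
rng = np.random.default_rng(0)
print(roi_metrics(rng.poisson(50, 500).astype(float),
                  rng.poisson(10, 500).astype(float)))
```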

Relevance: 20.00%

Abstract:

Whole-body bone scintigraphy is one of the most frequent imaging examinations performed in nuclear medicine. Among other applications, this procedure can provide the diagnosis of bone metastases. In oncology patients, the presence of bone metastases is a strong prognostic indicator of the patient's longevity. Moreover, the presence or absence of bone metastases influences treatment planning, which requires a precise interpretation of the imaging results. Problem: Given that bone metastasis is considered a severe complication associated with increased morbidity and decreased survival, the concept of patient care becomes even more imperative in these situations. Best imaging practices should therefore be implemented so as to obtain the best possible result from the procedure with minimal patient discomfort. One candidate technique to achieve this goal in the specific case of whole-body bone scintigraphy is the reduction of the acquisition time; however, the resulting images would, on their own, be of such reduced quality that the findings could be biased. New techniques have recently emerged, particularly in image processing, through which it is possible to generate low-count scintigraphic images of quality comparable to that obtained with the standard protocol. Even so, some of these methods remain associated with uncertainties, particularly regarding whether diagnostic confidence is sustained after the routine protocols are modified. Objectives: The present work evaluates the performance of the Pixon image-processing algorithm by means of a phantom study. The aim is to compare the image quality and detectability of unprocessed images with those submitted to this processing technique, and also to evaluate the effect of this algorithm on the reduction of acquisition time. To this end, images obtained with the standard protocol are compared with those acquired using faster protocols and subsequently processed with the aforementioned method. Material and Methods: This investigation was carried out in the department of Radiology and Nuclear Medicine of the Radboud University Nijmegen Medical Centre, in the Netherlands. A cylindrical phantom containing a set of six spheres of different sizes, suitable for planar imaging, was used. The phantom was prepared with different sphere-to-background activity ratios (4:1, 8:1, 17:1, 22:1, 32:1 and 71:1). For each experiment the phantom was then submitted to several image acquisition protocols with different scan speeds: 8 cm/min, 12 cm/min, 16 cm/min and 20 cm/min. All images were acquired on the same gamma camera, an e.cam Signature Dual Detector System (Siemens Medical Solutions USA, Inc.), using the same technical acquisition parameters except for the speed. Twenty-four images were acquired, all post-processed with Siemens software (Siemens Medical Solutions USA, Inc.) that includes the tool needed to process whole-body scintigraphic images. The reconstruction parameters, set to automatic mode, were the same for each image series.
The collected data were analysed by means of an objective evaluation (using physical image-quality parameters) and a subjective one (by two observers). Statistical analysis was performed with SPSS version 22 for Windows. Results: The subjective analysis of each activity ratio showed that, in general, sphere detectability increased after the images were processed. Inter-observer agreement for this analysis was substantial, for both unprocessed and processed images. It was also shown that the physical image-quality parameters improved after the processing algorithm was applied. Furthermore, comparing the standard images (acquired at 8 cm/min) with the processed images acquired with faster protocols showed that: images acquired at 12 cm/min can provide improved results, with superior image-quality parameters and detectability; images acquired at 16 cm/min provide results comparable to the standard, with similar image-quality and detectability values; and images acquired at 20 cm/min result in decreased image-quality values as well as reduced detectability. Discussion: The results obtained were also corroborated by a clinical study in an independent investigation in the same department. Fifty-one patients referred with breast and prostate carcinomas were included, with the aim of studying the impact of this technique in clinical practice. The patients underwent the standard protocol followed by an additional acquisition at 16 cm/min. After the images were blindly evaluated by three specialist physicians, it was concluded that image quality and detectability were comparable between images, corroborating the results of this investigation. Conclusion: With the goal of reducing acquisition time by applying an image-processing algorithm, it was shown that the 16 cm/min protocol is the limit for increasing the scan speed. After processing, this protocol provides the results most equivalent to those obtained with the standard protocol. Given that this technique was successfully established in clinical practice, it can be concluded that, at least in patients referred with breast and prostate carcinomas, the acquisition time can be halved by doubling the scan speed from 8 to 16 cm/min.

Relevance: 20.00%

Abstract:

Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. In order to improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that can greatly reduce the reconfiguration latencies. For an embedded processor all these computations represent a heavy computational load that can significantly reduce the system performance. To overcome this problem we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented both prefetch and replacement techniques that obtain results as good as previous complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider that the HW cost of the system (in our experiments 3% of a Virtex-II PRO xc2vp30 FPGA) is affordable given the great efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
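The benefit of configuration reuse is easy to see in a toy sequential executor (an illustrative software sketch, not the paper's hardware scheduler): a reconfiguration is skipped whenever the target region already holds the required bitstream.

```python
def run(order, region_of, bitstream_of, t_reconf=100, t_exec=40):
    """Total execution time of a task sequence on a reconfigurable fabric
    with configuration reuse. All names and costs are illustrative."""
    loaded, t = {}, 0                              # region -> loaded bitstream
    for task in order:
        r = region_of[task]
        if loaded.get(r) != bitstream_of[task]:    # reuse check
            t += t_reconf                          # pay reconfiguration latency
            loaded[r] = bitstream_of[task]
        t += t_exec
    return t

order = ["A", "B", "C", "D"]                   # a topological order of the DAG
region_of = {"A": 0, "B": 1, "C": 0, "D": 1}
bitstream_of = {"A": "f1", "B": "f2", "C": "f1", "D": "f2"}
print(run(order, region_of, bitstream_of))     # C and D reuse: only 2 reconfigs
```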

Relevance: 20.00%

Abstract:

Reconfigurable hardware can be used to build multitasking systems that dynamically adapt themselves to the requirements of the running applications. This is especially useful in embedded systems, since the available resources are very limited and the reconfigurable hardware can be reused for different applications. In these systems, computations are frequently represented as task graphs that are executed taking into account their internal dependencies and the task schedule. The management of the task graph execution is critical for the system performance. In this regard, we have developed two different versions, a software module and a hardware architecture, of a generic task-graph execution manager for reconfigurable multitasking systems. The second version reduces the run-time management overheads by almost two orders of magnitude. Hence, it is especially suitable for systems with exacting timing constraints. Both versions include specific support to optimize the reconfiguration process.