940 results for Graph Decomposition
Abstract:
Let G be a semi-simple algebraic group over a field k. Projective G-homogeneous varieties are projective varieties on which G acts transitively. The stabilizer, or isotropy subgroup, at a point of such a variety is a parabolic subgroup, which is always smooth when the characteristic of k is zero. However, when k has positive characteristic, we encounter projective varieties with transitive G-action whose isotropy subgroups need not be smooth. We call these varieties projective pseudo-homogeneous varieties. To every such variety we can associate a corresponding projective homogeneous variety. In this thesis, we extensively study the Chow motives (with coefficients in a finite connected ring) of projective pseudo-homogeneous varieties for G of inner type over k and compare them to the Chow motives of the corresponding projective homogeneous varieties. This is done by proving a generic criterion for the motive of a variety to be isomorphic to the motive of a projective homogeneous variety, which works in any characteristic of k. As a corollary, we give some applications and examples of Chow motives that exhibit an interesting phenomenon. We also show that the motives of projective pseudo-homogeneous varieties satisfy properties such as Rost nilpotence and the Krull-Schmidt property.
Abstract:
In this work, humic substances (HS) extracted from non-flooded (Araca) and flooded (Iara) soils were characterized through the calculation of stability and activation energies associated with the dehydration and thermal decomposition of HS, using TGA and DTA, electron paramagnetic resonance, and C/H, C/N and C/O atomic ratios. For HS extracted from flooded soils, there was evidence of the influence of humidity on the humification of organic matter. Observations of thermal behaviour, together with elemental analysis, indicated the presence of fossilized organic carbon within clay particles, which only decomposed above 800 °C. This characteristic could explain the different thermal stabilities and pyrolysis activation energies of Iara HS compared with Araca HS.
Abstract:
The kinematic structure of planar mechanisms concerns the attributes determined exclusively by the joining pattern among the links forming a mechanism. System group classification is central to the kinematic structure and consists of determining a sequence of kinematically and statically independent simple chains which form a modular basis for the kinematic and force analysis of the mechanism. This article presents a novel graph-based algorithm for the structural analysis of planar mechanisms with closed-loop kinematic structure; the algorithm determines a sequence of modules (Assur groups) representing the topology of the mechanism. A computational complexity analysis and a proof of correctness of the implemented algorithm are provided, and a case study illustrates the results of the devised method.
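The dyad-peeling idea behind Assur-group decomposition can be illustrated with a toy sketch (a deliberate simplification, not the article's algorithm; the function name and the four-bar example are assumptions): links are graph nodes, joints are edges, and a dyad is peeled off whenever two joined links each attach to the already-determined part of the chain.

```python
# Toy sketch of graph-based structural decomposition of a planar linkage
# into dyads (the simplest Assur groups). Links are nodes, joints are edges.
# This is an illustrative simplification, not the article's full algorithm.

def peel_dyads(edges, determined):
    """Return the sequence of dyads peeled from the linkage graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    determined = set(determined)
    remaining = set(adj) - determined
    modules = []
    progress = True
    while remaining and progress:
        progress = False
        for u in sorted(remaining):
            for v in sorted(adj[u] & remaining):
                # u-v is a dyad if both links also attach to the solved chain
                if adj[u] & determined and adj[v] & determined:
                    modules.append((u, v))
                    determined |= {u, v}
                    remaining -= {u, v}
                    progress = True
                    break
            if progress:
                break
    return modules

# Four-bar linkage: ground g, driven crank a, coupler b, rocker c.
fourbar = [("g", "a"), ("a", "b"), ("b", "c"), ("c", "g")]
print(peel_dyads(fourbar, {"g", "a"}))  # the coupler-rocker pair is one dyad
```

Once the input link is chosen, the remaining coupler-rocker pair is identified as a single dyad, which is exactly the modular unit used in kinematic analysis of the four-bar mechanism.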
Abstract:
We study the chaos decomposition of self-intersection local times and their regularization, with a particular view towards Varadhan's renormalization for the planar Edwards model.
Abstract:
Reconfigurable hardware can be used to build a multitasking system where tasks are assigned to HW resources at run-time according to the requirements of the running applications. These tasks are frequently represented as directed acyclic graphs, and their execution is typically controlled by an embedded processor that schedules the graph execution. To improve the efficiency of the system, the scheduler can apply prefetch and reuse techniques that greatly reduce the reconfiguration latencies. For an embedded processor, all these computations represent a heavy computational load that can significantly reduce system performance. To overcome this problem, we have implemented a HW scheduler using reconfigurable resources. In addition, we have implemented prefetch and replacement techniques that obtain results as good as previous, more complex SW approaches, while demanding just a few clock cycles to carry out the computations. We consider the HW cost of the system (in our experiments, 3% of a Virtex-II Pro xc2vp30 FPGA) affordable given the efficiency of the techniques applied to hide the reconfiguration latency and the negligible run-time penalty introduced by the scheduler computations.
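A minimal sketch of the reuse-plus-replacement idea (a software analogy, not the article's HW scheduler; the LRU policy and the example task sequence are assumptions): a task whose configuration is already loaded in one of the reconfigurable slots skips the reconfiguration entirely, and evictions target the least-recently-used slot.

```python
# Sketch of configuration reuse with LRU replacement for a set of
# reconfigurable slots (illustrative; not the article's HW implementation).
from collections import OrderedDict

def run_schedule(task_sequence, num_slots):
    """Return (reconfigurations, reuse_hits) for an LRU-managed device."""
    slots = OrderedDict()              # loaded configuration -> recency order
    reconfigs = hits = 0
    for task in task_sequence:
        if task in slots:
            hits += 1                  # reuse: no reconfiguration latency
            slots.move_to_end(task)
        else:
            reconfigs += 1
            if len(slots) == num_slots:
                slots.popitem(last=False)   # evict least recently used
            slots[task] = True
    return reconfigs, hits

print(run_schedule(["A", "B", "A", "C", "B", "A"], num_slots=2))
```

Each reuse hit saves one full reconfiguration latency, which is the main source of the performance gains the abstract reports; prefetching further hides the remaining reconfigurations behind the execution of other tasks.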
A class of domain decomposition preconditioners for hp-discontinuous Galerkin finite element methods
Abstract:
In this article we address the question of efficiently solving the algebraic linear system of equations arising from the discretization of a symmetric, elliptic boundary value problem using hp-version discontinuous Galerkin finite element methods. In particular, we introduce a class of domain decomposition preconditioners based on the Schwarz framework, and prove bounds on the condition number of the resulting iteration operators. Numerical results confirming the theoretical estimates are also presented.
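As a rough illustration of the Schwarz framework (a one-level additive Schwarz preconditioner on a 1D Poisson matrix, not the hp-discontinuous Galerkin setting analysed in the article; the matrix size, the two overlapping subdomains, and the overlap width are assumptions), the preconditioner applies exact solves on overlapping index blocks and sums the corrections:

```python
# One-level additive Schwarz preconditioner for conjugate gradients,
# sketched on a 1D Poisson matrix (illustrative assumptions throughout).
import numpy as np

n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
b = np.ones(n)

# Two overlapping index blocks (the overlap is 4 points wide).
blocks = [np.arange(0, 22), np.arange(18, 40)]

def schwarz_apply(r):
    """M^{-1} r = sum_i R_i^T A_i^{-1} R_i r (additive Schwarz)."""
    z = np.zeros_like(r)
    for idx in blocks:
        Ai = A[np.ix_(idx, idx)]
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

def pcg(A, b, apply_M, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; returns (solution, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x_pc, it_pc = pcg(A, b, schwarz_apply)
x_id, it_id = pcg(A, b, lambda r: r)   # unpreconditioned CG for comparison
print(it_pc, it_id)
```

The preconditioned iteration count stays small and essentially independent of the mesh parameters, which is the kind of condition-number bound the article proves for its hp-DG preconditioners.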
Abstract:
Reconfigurable hardware can be used to build multitasking systems that dynamically adapt themselves to the requirements of the running applications. This is especially useful in embedded systems, since the available resources are very limited and the reconfigurable hardware can be reused by different applications. In these systems, computations are frequently represented as task graphs that are executed taking into account their internal dependencies and the task schedule. The management of task-graph execution is critical for system performance. In this regard, we have developed two different versions, a software module and a hardware architecture, of a generic task-graph execution manager for reconfigurable multitasking systems. The hardware version reduces the run-time management overhead by almost two orders of magnitude and is therefore especially suitable for systems with tight timing constraints. Both versions include specific support to optimize the reconfiguration process.
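The dependency-aware execution that such a manager enforces can be sketched generically (this is standard topological scheduling via Kahn's algorithm, not the authors' implementation; the task names are assumptions): a task is launched only once all of its predecessors in the graph have finished.

```python
# Generic sketch of task-graph execution ordering: launch a task only after
# all of its internal dependencies are satisfied (Kahn's algorithm).
from collections import deque

def execute_graph(tasks, deps):
    """Return a valid execution order for the task graph, or raise on cycles."""
    indeg = {t: 0 for t in tasks}
    succ = {t: [] for t in tasks}
    for before, after in deps:
        succ[before].append(after)
        indeg[after] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)        # all dependencies now satisfied
    if len(order) != len(tasks):
        raise ValueError("cyclic task graph")
    return order

order = execute_graph(["load", "filter", "fft", "store"],
                      [("load", "filter"), ("load", "fft"),
                       ("filter", "store"), ("fft", "store")])
print(order)
```

Keeping this bookkeeping in hardware, as the abstract describes, removes it from the embedded processor's critical path, which is where the two-orders-of-magnitude overhead reduction comes from.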
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts obtained from classic, matrix-forming linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides results very similar to those of classical linear instability analysis techniques. In addition, we compare DMD results obtained from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations. This shows the potential of snapshot-based analysis for general-purpose CFD codes, without need of modification. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes from snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. These modes are then used to provide structural sensitivity maps and sensitivity to base-flow modification for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
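The core of snapshot-based DMD can be shown in a few lines (a minimal sketch on synthetic linear dynamics, assuming a known diagonal operator and a random lifting; nothing here reproduces the paper's Navier-Stokes setting): from paired snapshot matrices alone, the method recovers the eigenvalues of the underlying one-step map.

```python
# Minimal exact-DMD sketch: recover the dynamics' eigenvalues from
# snapshot pairs alone (synthetic illustrative example, assumed setup).
import numpy as np

rng = np.random.default_rng(0)
A_true = np.diag([0.9, 0.5])                  # hidden linear dynamics
C = rng.standard_normal((10, 2))              # lifts states to 10-D snapshots

x = rng.standard_normal(2)
snaps = [C @ x]
for _ in range(12):
    x = A_true @ x
    snaps.append(C @ x)
S = np.column_stack(snaps)
X, Y = S[:, :-1], S[:, 1:]                    # paired snapshot matrices

# Exact DMD: project the one-step map onto the leading POD subspace.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 2                                         # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].T
A_tilde = Ur.T @ Y @ Vr / sr                  # r x r reduced operator
eigvals = np.sort(np.linalg.eigvals(A_tilde).real)
print(eigvals)                                # ~ [0.5, 0.9]
```

The same machinery applied to snapshots of a flow solver yields growth rates and frequencies of the global modes, which is why no linearised solver (and no base flow) is strictly required as long as the snapshots stay in the exponential-growth regime.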
Abstract:
Part 5: Service Orientation in Collaborative Networks
Abstract:
Nowadays, photovoltaic (PV) technology is consolidated as a source of renewable energy, and maximizing the energy efficiency of PV plants is a major research challenge. The main requirement for this purpose is to know, in real time, the performance of each of the PV modules that make up the PV field. In this respect, a PLC-communications-based Smart Monitoring and Communications Module, able to monitor operating parameters at the PV-module level, has been developed at the University of Malaga. With this device it is possible to detect whether any of the panels is underperforming due to a malfunction or partial shading of its surface. Since fluctuations in the electricity production of a single panel affect the overall output of all the panels that form a string, it is necessary to isolate the problem and reroute the energy through alternative paths in PV panel array configurations.
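A hypothetical post-processing step on such per-panel monitoring data might flag underperformance like this (the function name, the 80%-of-median threshold, and the sample readings are all assumptions for illustration, not part of the Malaga module):

```python
# Illustrative sketch: flag panels whose reported power falls well below
# the string median, as shading/malfunction candidates (assumed threshold).
def flag_underperformers(panel_watts, threshold=0.8):
    """Return indices of panels below threshold * median string power."""
    ranked = sorted(panel_watts)
    median = ranked[len(ranked) // 2]
    return [i for i, w in enumerate(panel_watts) if w < threshold * median]

string = [248, 251, 249, 176, 250, 247]   # panel 3 is partially shaded
print(flag_underperformers(string))       # -> [3]
```

Flagged panels could then be isolated so that their reduced current does not throttle the rest of the series string.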
Abstract:
The thermal decomposition of a solid recovered fuel has been studied using thermogravimetry, in order to obtain information about the main steps in the decomposition of such a material. The study comprises two different atmospheres, inert and oxidative. The kinetics of decomposition is determined at three different heating rates, using the same kinetic constants and model for both atmospheres at all heating rates simultaneously. A good correlation of the TG data is obtained using three nth-order parallel reactions.
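A single nth-order Arrhenius step, integrated along a constant heating rate, is the building block of such a multi-reaction TG model; the sketch below uses illustrative parameter values (pre-exponential factor, activation energy, reaction order), not the fitted constants from the article.

```python
# Sketch of one nth-order decomposition step along a constant heating rate:
# d(alpha)/dT = (A/beta) * exp(-Ea/RT) * (1 - alpha)^n, integrated by Euler.
# All parameter values below are illustrative assumptions.
import math

def tg_conversion(A, Ea, n, beta, T0=300.0, T1=650.0, dT=0.01):
    """Return the conversion alpha reached at temperature T1 (K)."""
    R = 8.314                      # gas constant, J/(mol K)
    alpha, T = 0.0, T0
    while T < T1:
        rate = (A / beta) * math.exp(-Ea / (R * T)) * (1.0 - alpha) ** n
        alpha = min(1.0, alpha + rate * dT)
        T += dT
    return alpha

# A faster heating rate shifts decomposition to higher temperatures,
# so conversion at a fixed temperature is lower.
slow = tg_conversion(A=1e10, Ea=150e3, n=1.5, beta=5.0)
fast = tg_conversion(A=1e10, Ea=150e3, n=1.5, beta=20.0)
print(slow > fast)   # -> True
```

Fitting three such reactions in parallel, with shared constants across heating rates and atmospheres, is what produces the good simultaneous correlation of the TG curves reported in the abstract.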
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to two- and multistage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on multistage stochastic linear programming instances suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to alleviate some numerical and theoretical difficulties of the progressive hedging method.
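The role of the penalty parameter rho can be seen in a toy progressive hedging iteration (a one-variable, two-scenario quadratic example with closed-form subproblem solves; this is an illustrative sketch, far simpler than the multistage stochastic linear programs treated in the thesis):

```python
# Toy progressive hedging sketch: two scenarios, one first-stage variable.
# Scenario subproblem: min_x (x - t_s)^2 + w_s*x + (rho/2)*(x - xbar)^2,
# which here has the closed-form minimizer (2*t_s - w_s + rho*xbar)/(2 + rho).
# Example data and the fixed rho are illustrative assumptions.

def progressive_hedging(targets, probs, rho, iters=100):
    xbar = sum(p * t for p, t in zip(probs, targets))   # initial consensus
    w = [0.0] * len(targets)                            # scenario multipliers
    for _ in range(iters):
        xs = [(2 * t - wi + rho * xbar) / (2 + rho)     # scenario solves
              for t, wi in zip(targets, w)]
        xbar = sum(p * x for p, x in zip(probs, xs))    # consensus update
        w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]  # dual update
    return xbar

sol = progressive_hedging(targets=[1.0, 3.0], probs=[0.5, 0.5], rho=1.0)
print(round(sol, 6))   # consensus approaches the expected-value optimum 2.0
```

Even in this tiny example the contraction rate of the multipliers depends directly on rho, which is why the thesis's adaptive penalty strategy matters: a poor fixed rho slows convergence or stalls it near a suboptimal point in harder instances.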
Abstract:
Gasarite structures are a unique type of metallic foam containing tubular pores. The original methods for their production limited them to laboratory study despite appealing foam properties. Thermal decomposition processing of gasarites holds the potential to broaden the application of gasarite foams in engineering design by removing several barriers to their industrial-scale production. The following study characterized thermal decomposition gasarite processing both experimentally and theoretically. Significant variation was found to be inherent to this process; therefore, several modifications were necessary to produce gasarites by this method. Conventional means to increase porosity and enhance pore morphology were studied. Pore morphology was more easily replicated when pores were stabilized by alumina additions and powders were dispersed evenly. To better characterize the processing, high-temperature, high-ramp-rate thermal decomposition data were gathered. The high-ramp-rate decomposition behaviour of several hydrides was found to be more rapid than hydride kinetics at low ramp rates. These data were then used to estimate the contribution of several pore formation mechanisms to the development of the pore structure. Gas-metal eutectic growth can only be a viable pore formation mode if non-equilibrium conditions persist, and bubble capture cannot be a dominant pore growth mode because of high bubble terminal velocities. Direct gas evolution appears to be the most likely pore formation mode, given the high gas evolution rate from the decomposing particulate and the microstructural pore growth trends. The overall process was evaluated for its economic viability: thermal decomposition has potential for industrialization, but further refinements are necessary for the process to be viable.
Abstract:
Soils are the largest sinks of carbon in terrestrial ecosystems. Soil organic carbon is important for ecosystem balance, as it supplies plants with nutrients, maintains soil structure, and helps control the exchange of CO2 with the atmosphere. The processes by which wood carbon is stabilized and destabilized in forest soils are still not completely understood. This study attempts to measure early wood decomposition by different fungal communities (inoculation with pure colonies of brown or white rot, or the original microbial community) under various interacting treatments: wood quality (wood from +CO2, +CO2+O3, or ambient-atmosphere Aspen-FACE treatments from Rhinelander, WI), temperature (ambient or warmed), soil texture (loamy or sandy), and wood location (plot surface or buried 15 cm below the surface). Control plots with no wood chips added were also monitored throughout the study. By using isotopically labelled wood chips from the Aspen-FACE experiment, we are able to track wood-derived carbon losses as soil CO2 efflux and as leached dissolved organic carbon (DOC). We analyzed soil water for chemical characteristics such as total phenolics, SUVA254, humification, and molecular size. Wood chip samples were also analyzed for their lignin:carbohydrate ratio using FTIR analysis at three time intervals over 12 months of decomposition. After two years of measurements, the average total soil CO2 efflux rates differed significantly depending on wood location, temperature, and wood quality, and the wood-derived portion of soil CO2 efflux varied with the same factors. The average total DOC and the wood-derived portion of DOC differed between inoculation treatments, wood locations, and temperatures. Soil water chemical characteristics varied significantly with inoculation treatment, temperature, and wood quality.
After 12 months of decomposition, the lignin:carbohydrate ratio varied significantly by inoculation treatment, with white rot producing the only average proportional decrease in lignin:carbohydrates. Both soil CO2 efflux and DOC losses indicate that wood location is important: carbon losses were greater from surface wood chips than from buried wood chips, implying the importance of buried wood for total ecosystem carbon stabilization. Treatments associated with climate change also affected the level of decomposition. DOC losses, soil water characteristics, and the FTIR data demonstrate the importance of the fungal community for the degree of decomposition and the resulting byproducts found throughout the soil.
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations, as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; controls, on the contrary, seemed to efficiently use smaller segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph-theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients' extent-of-activation network showed a tendency towards more random networks. The intensity-of-activation network displayed ipsilateral hub reorganization at the local level: the left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the right atypical language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps (a type of artificial neural network) and k-means was able to fairly and blindly separate the subjects into their respective patient or control groups.
The clustering was initiated using only the local nodal centrality measurements. Compared to the extent-of-activation network, clustering on the intensity-of-activation network demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
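The centrality-then-cluster pipeline can be sketched in miniature (a heavy simplification with assumed toy networks: degree centrality instead of the dissertation's full nodal measures, and a plain 1-D k-means instead of the SOM + k-means scheme):

```python
# Rough sketch of the clustering step: per-subject nodal degree centrality,
# summarized and split into two groups by a tiny 1-D k-means.
# Toy adjacency matrices and the summary statistic are assumptions.
def degree_centrality(adj):
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

def kmeans_1d(values, c0, c1, iters=20):
    """Two-cluster k-means on scalar summaries of each subject's network."""
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            groups[abs(v - c0) > abs(v - c1)].append(v)
        c0 = sum(groups[0]) / len(groups[0]) if groups[0] else c0
        c1 = sum(groups[1]) / len(groups[1]) if groups[1] else c1
    return [int(abs(v - c0) > abs(v - c1)) for v in values]

# Two dense ("control-like") and two sparse ("patient-like") toy networks.
ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
full = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
subjects = [full, full, ring, ring]
summaries = [sum(degree_centrality(a)) / len(a) for a in subjects]
labels = kmeans_1d(summaries, min(summaries), max(summaries))
print(labels)   # the two network types fall into two separate clusters
```

The dissertation's result is the analogous separation at scale: unsupervised clustering on local centrality values alone recovers the patient/control split, most precisely for the intensity-of-activation networks.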