884 results for Dunkl Kernel
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the deep Earth's interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes.
Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel to extract information about the process. These two types of wavelet application in geophysics are the object of study of this work. First we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its capabilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
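The multiscale localization the abstract attributes to wavelets can be illustrated with a minimal Haar decomposition. This is an illustrative sketch only: the thesis parameterizes velocity maps with its own wavelet basis, and all function names below are ours.

```python
import numpy as np

def haar_dwt_step(signal):
    """One level of the Haar wavelet transform: split a signal into
    coarse averages (approximation) and details at the current scale."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass: local averages
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass: local differences
    return approx, detail

def haar_dwt(signal, levels):
    """Full multi-level decomposition: [a_L, d_L, d_{L-1}, ..., d_1]."""
    coeffs = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_step(approx)
        coeffs.append(detail)
    return [approx] + coeffs[::-1]

# A step discontinuity shows up as one large detail coefficient at the
# jump, while smooth regions give near-zero details: the localization
# property that a global Fourier (or spherical-harmonic) basis lacks.
signal = np.array([0., 0., 0., 1., 1., 1., 1., 1.])
approx, detail = haar_dwt_step(signal)
```

Because the Haar filters are orthonormal, the transform conserves the signal's energy at every level, which is one reason truncated wavelet expansions remain stable model parameterizations.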
Abstract:
The thesis reports the synthesis and the chemical, structural and spectroscopic characterization of a series of new rhodium and Au-Fe carbonyl clusters. Most of the new high-nuclearity rhodium carbonyl clusters have been obtained by redox condensation of preformed rhodium clusters reacting with a species in a different oxidation state generated in situ by mild oxidation. In particular, the starting Rh carbonyl cluster is the readily available [Rh7(CO)16]3- compound. The oxidized species is generated in situ by reaction of the above with a stoichiometric defect of mild oxidizing agents such as [M(H2O)x]n+ aquo complexes possessing different pKa’s and Mn+/M potentials. The experimental results are roughly in keeping with the conclusion that aquo complexes featuring E°(Mn+/M) < ca. -0.20 V do not lead to the formation of hetero-metallic Rh clusters, probably because of the inadequacy of their redox potentials relative to that of the [Rh7(CO)16]3-/2- redox couple. Only homometallic clusters have been fairly selectively obtained. As a fallout of the above investigations, a convenient and reproducible synthesis of the ill-characterized species [HnRh22(CO)35]8-n has also been discovered. The ready availability of this compound triggered its complete spectroscopic and chemical characterization, as it is the only example of a rhodium carbonyl cluster with two interstitial metal atoms. The presence of several hydride atoms, first suggested by chemical evidence, has been confirmed by ESI-MS and 1H-NMR, as well as by new structural characterizations of its tetra- and penta-anion. All these species display redox behaviour and behave as molecular capacitors. Their chemical reactivity with CO gives rise to a new series of Rh22 clusters containing a different number of carbonyl groups, which have likewise been fully characterized. Formation of hetero-metallic Rh clusters was only observed when using SnCl2·H2O as the oxidizing agent.
Almost all the Rh-Sn carbonyl clusters obtained have icosahedral geometry. The only previously reported example of an icosahedral Rh cluster with an interstitial atom is the [Rh12Sb(CO)27]3- trianion. They have very similar metal frameworks, as well as the same number of CO ligands and, consequently, of cluster valence electrons (CVEs). A first interesting aspect of the chemistry of the Rh-Sn system is that it also provides icosahedral clusters that make an exception to the cluster-borane analogy by showing electron counts from 166 to 171. As a result, the most electron-poor species, namely [Rh12Sn(CO)25]4-, displays redox propensity, even if disfavoured by the relatively high free negative charge of the starting anion, and, moreover, behaves as a chloride scavenger. The presence of these bulky interstitial atoms results in the metal framework adopting structures different from a close-packed metal lattice and, above all, imparts a notable stability to the resulting cluster. An organometallic approach to a new kind of molecular ligand-stabilized gold nanoparticles, in which Fe(CO)x (x = 3, 4) moieties protect and stabilize the gold kernel, has also been undertaken. As a result, the new clusters [Au21{Fe(CO)4}10]5-, [Au22{Fe(CO)4}12]6-, [Au28{Fe(CO)3}4{Fe(CO)4}10]8- and [Au34{Fe(CO)3}6{Fe(CO)4}8]6- have been isolated and characterized. As suggested by isolobal analogies, the Fe(CO)4 molecular fragment may display the same ligand capability as thiolates, and go beyond it. Indeed, the above clusters bear structural resemblance to the structurally characterized gold thiolates by showing Fe-Au-Fe, rather than S-Au-S, staple motifs. Staple motifs, the oxidation state of surface gold atoms and the energy of Au atomic orbitals are likely to concur in delaying the insulator-to-metal transition as the nuclearity of gold thiolates increases, relative to the more compact transition-metal carbonyl clusters.
Finally, a few previously reported Au-Fe carbonyl clusters have been used as precursors in the preparation of supported gold catalysts. The catalysts obtained are active for toluene oxidation and the catalytic activity depends on the Fe/Au cluster loading over TiO2.
Abstract:
This dissertation investigates two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT). First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To this end, the full one-loop part of the theory is first extracted by means of the saddle-point method. Next, all singular one-loop structures are isolated within the framework of the heat-kernel technique. Finally, these divergent parts must be absorbed. This requires a most general anomalous Lagrangian of order O(p^6), which is developed systematically. Extending the chiral group SU(n)_L x SU(n)_R to SU(n)_L x SU(n)_R x U(1)_V brings additional monomials into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, through comparison with the results of mesonic ChPT, a numerical estimate of some of the LECs can be obtained. In the second part, a consistent one-loop calculation is performed for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. As a check of our results, an existing calculation of the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Including the estimated values of the relevant LECs, the corresponding hadronic structure functions are determined numerically and discussed.
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems, based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had been considered unfeasible before. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well.
Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
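The abstract names two building blocks without detailing them: a compressed storage scheme for triangular matrices and the TMI routine itself. The thesis' GPU-friendly layout is its own contribution; the sketch below instead shows the classic packed row-major layout and a plain forward-substitution inversion, purely to make the two ideas concrete (all names are ours).

```python
import numpy as np

def packed_index(i, j):
    """Flat index of element (i, j), i >= j, of a lower-triangular matrix
    stored row by row in a packed array of n*(n+1)/2 entries
    (the standard packed scheme; the thesis' GPU layout differs)."""
    return i * (i + 1) // 2 + j

def pack_lower(A):
    """Flatten the lower triangle of A, halving the memory footprint."""
    n = A.shape[0]
    flat = np.empty(n * (n + 1) // 2, dtype=A.dtype)
    for i in range(n):
        for j in range(i + 1):
            flat[packed_index(i, j)] = A[i, j]
    return flat

def invert_lower(L):
    """Invert a lower-triangular matrix column by column via forward
    substitution: the sequential kernel that a parallel TMI blocks up."""
    n = L.shape[0]
    inv = np.zeros_like(L, dtype=float)
    for col in range(n):
        e = np.zeros(n)
        e[col] = 1.0
        for i in range(n):
            inv[i, col] = (e[i] - L[i, :i] @ inv[:i, col]) / L[i, i]
    return inv

rng = np.random.default_rng(0)
L = np.tril(rng.random((5, 5))) + 5.0 * np.eye(5)   # well-conditioned test matrix
```

Each column solve is independent of the others, which is what makes the inversion amenable to the block-parallel, multi-GPU treatment the thesis describes.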
Abstract:
The technology of partial virtualization is a revolutionary approach to the world of virtualization. It lies directly in between full system virtual machines (like QEMU or Xen) and application-level virtual machines (like the JVM or the CLR). The ViewOS project is the flagship of this technique, developed by the Virtual Square laboratory, created to provide an abstract view of the underlying system resources on a per-process basis and to work against the principle of the Global View Assumption. Virtual Square provides several different methods to achieve partial virtualization within the ViewOS system, at both user and kernel levels. Each of these approaches has its own advantages and shortcomings. This paper provides an analysis of the different virtualization methods and of problems related to both generic and partial virtualization. It is the result of an in-depth study and search for a new technology to provide partial virtualization based on ELF dynamic binaries. It starts with a brief analysis of currently available virtualization alternatives and then goes on to describe the ViewOS system, highlighting its current shortcomings. The vloader project is then proposed as a possible solution to some of these inconveniences, with a working proof of concept and examples to outline the potential of this new virtualization technique. By injecting specific code and libraries in the middle of the binary loading mechanism provided by the ELF standard, the vloader project can offer a streamlined and simplified approach to tracing system calls. With the advantages outlined in the following paper, this method presents better performance and portability compared to the currently available ViewOS implementations. Furthermore, some of its disadvantages are also discussed, along with their possible solutions.
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, applied in particular in artificial intelligence, that can be used for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression over samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation methods. The goal of this thesis is to make reinforcement learning applicable to, in principle arbitrarily, high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so the explicit choice of nodes/basis functions is no longer needed and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are also linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages are offset, however, by a very practical problem: the computational cost of using regularization networks inherently scales as O(n^3), where n is the number of data points.
This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis is accordingly divided into two parts: In the first part we formulate, for regularization networks, an efficient learning algorithm for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach is based on the recursive least-squares procedure, but can incorporate not only new data but also new basis functions into the existing model in constant time. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at runtime. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning, and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. In doing so, we do not depend on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence with comparatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications.
We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
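The cost argument above hinges on the subset-of-regressors idea: restrict the kernel expansion to a small dictionary of m basis points so that fitting costs O(nm^2) instead of O(n^3). The batch sketch below illustrates that approximation with a fixed dictionary; the thesis' contributions (online recursive updates and greedy dictionary selection) are omitted, and all names and parameter values here are ours.

```python
import numpy as np

def rbf(X1, X2, width=1.0):
    """Gaussian kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_subset_of_regressors(X, y, basis, lam=1e-2):
    """Regularized least squares with the expansion restricted to a
    dictionary of m basis points: solve
        (K_nm^T K_nm + lam * K_mm) w = K_nm^T y,
    an m x m system, so the cost is O(n m^2) rather than O(n^3)."""
    K_nm = rbf(X, basis)          # n x m cross-kernel
    K_mm = rbf(basis, basis)      # m x m dictionary kernel
    A = K_nm.T @ K_nm + lam * K_mm
    return np.linalg.solve(A, K_nm.T @ y)

def predict(Xq, basis, w):
    return rbf(Xq, basis) @ w

# Fit a smooth target from 200 samples using only 15 dictionary points.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
basis = np.linspace(-3, 3, 15)[:, None]   # fixed grid dictionary; the thesis
                                          # selects basis points greedily online
w = fit_subset_of_regressors(X, y, basis)
```

The predictor is still a linear approximator in the kernel features, which is why the convergence guarantees mentioned in the abstract carry over.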
Abstract:
Fusarium Head Blight (FHB) is a worldwide cereal disease responsible for significant yield reduction, inferior grain quality, and mycotoxin accumulation. Fusarium graminearum and F. culmorum are the prevalent causal agents. FHB has been endemic in Italy since 1995, while there are no records of its presence in Syria. Forty-eight Syrian and forty-six Italian wheat kernel samples were collected from different localities and analyzed for fungal presence and mycotoxin contamination. Fusarium strains were identified morphologically, with molecular confirmation performed only for some species. Further differentiation of the trichothecene chemotypes of the F. graminearum and F. culmorum strains was conducted by PCR assays. Fusarium spp. were present in 62.5% of the Syrian samples. 3-Acetyl-deoxynivalenol (3Ac-DON) and nivalenol (NIV) chemotypes were found in F. culmorum, whilst all F. graminearum strains belonged to the NIV chemotype. 67.4% of the Italian samples were infected with Fusarium spp. 15Ac-DON was the prevalent chemotype in F. graminearum, while the 3Ac-DON chemotype was detected in F. culmorum. The 60 Syrian Fusarium strains tested for mycotoxin production by HPLC-MS/MS showed a prevalence of zearalenone, while the emerging mycotoxins were almost absent. Analysis of the Syrian and Italian wheat kernel samples for their mycotoxin content showed that Syrian kernels were mainly contaminated with storage mycotoxins, aflatoxins and ochratoxin, whilst Italian grains mainly with Fusarium mycotoxins. The aggressiveness of several Syrian F. culmorum isolates was estimated using three different assays: floret inoculation in a growth chamber, ear inoculation in the field, and a validated new Petri-dish test. The study of the behaviour of different Syrian wheat cultivars, grown under different conditions, revealed that Jory is an FHB-tolerant Syrian cultivar. This is the first study in Syria on the Fusarium spp. associated with FHB, on Fusarium mycotoxin producers and on grain quality.
Abstract:
We present a complete, exact and efficient algorithm for computing the adjacency graph of an arrangement of quadrics (algebraic surfaces of degree 2). This is an important step towards computing the full 3D arrangement. We build on an existing implementation for computing the exact parameterization of the intersection curve of two quadrics. This makes it possible to determine the exact parameter values of the intersection points, to sort them along the curves, and to compute the adjacency graph. We call our implementation complete because it also handles all degenerate cases, such as singular or tangential intersection points. It is exact because it always computes the mathematically correct result. And finally, we call our implementation efficient because it compares well with the only previously implemented approach. Our approach was implemented within the EXACUS project, whose central goal is to develop a prototype of a reliable and high-performance CAD geometry kernel. Although we describe the design of our library as prototypical, we place the greatest value on completeness, exactness, efficiency, documentation and reusability. Beyond its actual contribution to EXACUS, the approach presented here, through its particular requirements, also had a substantial influence on fundamental parts of EXACUS. In particular, this work contributed to the generic support of number types and to the use of modular methods within EXACUS. During the ongoing integration of EXACUS into CGAL, these parts have already been successfully developed into mature CGAL packages.
Abstract:
In this thesis we have studied the quantization of a gauge theory of differential forms on complex spaces equipped with a Kähler metric. The distinctive feature of these theories is that they exhibit reducible gauge invariances, in other words gauge symmetries that are not independent of one another. Invariance under gauge transformations is one of the pillars of the modern understanding of the physical world. The main characteristic of such theories is that not all variables are actually present in the dynamics; some turn out to be auxiliary. This point of view is often preferred because such theories are manifestly covariant under important symmetry groups such as the Lorentz group. One of the most widely used methods for quantizing field theories with gauge symmetries requires the introduction of unphysical fields called ghosts and of a global fermionic symmetry that replaces the original local gauge invariance: BRST symmetry. In this thesis we have chosen to use one of the most modern formalisms for treating gauge theories: the Lagrangian BRST formalism of Batalin-Vilkovisky. This method introduces ghosts for every level of reducibility of the gauge transformations, together with suitable "antifields" associated with every field previously introduced. The formalism has allowed us to arrive directly at a complete path-integral formulation of the quantum theory of (p,0)-forms. In particular, it makes it possible to correctly deduce the ghost structure of the theory and the associated BRST symmetry. Obtaining this structure necessarily requires a gauge-fixing procedure to remove the gauge invariance completely.
This procedure eliminates the antifields in favour of the original fields and the ghosts, and makes it possible to implement, directly in the path integral, the covariant gauge-fixing conditions needed to define the propagators of the theory correctly. In the last part we presented an expansion of the (Euclidean) effective action that allows the divergences of the theory to be studied. In particular, we computed the first coefficients of this expansion (the Seeley-DeWitt coefficients) by means of the heat-kernel technique. The calculation takes into account a possible coupling to a background metric, as well as a possible additional coupling to the trace of the connection associated with the metric.
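For orientation, the Seeley-DeWitt coefficients referred to here are the coefficients of the standard short-time expansion of the heat kernel of a Laplace-type operator; in one common convention (normalizations and the precise operator vary by author and are not fixed by the abstract),

```latex
\operatorname{Tr}\, e^{-t\Delta} \;\sim\; \frac{1}{(4\pi t)^{d/2}} \sum_{k \ge 0} t^{k} \int d^{d}x \,\sqrt{g}\; a_{k}(x), \qquad t \to 0^{+},
```

so the one-loop divergences of the effective action are carried by the first few coefficients $a_k$.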
Abstract:
In this work we investigate the existence of resonances for two-center Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schrödinger operator. After giving a description of the bifurcations of the classical system for positive energies, we construct the resolvent kernels of the operators and prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied with numerical methods and perturbation theory.
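The non-selfadjoint deformation used to define resonances is presumably of complex-dilation type (in the spirit of Aguilar-Balslev-Combes; the abstract does not specify the deformation, so this is a schematic sketch). For a dilation-analytic potential one sets

```latex
(U_\theta \psi)(x) = e^{d\theta/2}\,\psi(e^{\theta}x), \qquad H_\theta = U_\theta H U_\theta^{-1} = -e^{-2\theta}\Delta + V(e^{\theta}x),
```

and for complex $\theta$ the essential spectrum rotates into the lower half-plane, exposing the resonances as discrete, $\theta$-independent eigenvalues of $H_\theta$.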
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping for high-resolution timers.
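What makes a heap "addressable" is that callers keep a stable handle to each entry, so a pending timer can be re-armed or cancelled in O(log n) without searching. The sketch below shows that operation mix with an array-backed heap (the thesis' ABH is pointer-based, and the API names here are ours):

```python
class AddressableHeap:
    """Binary min-heap whose entries are addressed through stable
    handles: the operation mix a tick-less high-resolution timer
    subsystem needs. Illustrative array-backed sketch only."""

    def __init__(self):
        self._a = []   # each handle is {'key': k, 'val': v, 'pos': index}

    def push(self, key, val):
        h = {'key': key, 'val': val, 'pos': len(self._a)}
        self._a.append(h)
        self._sift_up(h['pos'])
        return h                     # stable handle for later re-keying

    def pop_min(self):
        a = self._a
        top = a[0]
        last = a.pop()
        if a:
            a[0] = last
            last['pos'] = 0
            self._sift_down(0)
        return top['key'], top['val']

    def decrease_key(self, h, new_key):
        """Re-arm an entry to fire earlier, addressed via its handle."""
        assert new_key <= h['key']
        h['key'] = new_key
        self._sift_up(h['pos'])

    def _swap(self, i, j):
        a = self._a
        a[i], a[j] = a[j], a[i]
        a[i]['pos'], a[j]['pos'] = i, j

    def _sift_up(self, i):
        while i > 0 and self._a[i]['key'] < self._a[(i - 1) // 2]['key']:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._a)
        while True:
            c = 2 * i + 1
            if c >= n:
                return
            if c + 1 < n and self._a[c + 1]['key'] < self._a[c]['key']:
                c += 1
            if self._a[i]['key'] <= self._a[c]['key']:
                return
            self._swap(i, c)
            i = c
```

The `pos` field kept inside each handle is the whole trick: every swap updates it, so re-keying never needs a linear scan of the heap.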
Abstract:
The work carried out in this thesis consists in porting the network Monitor, part of a larger system known as ABPS, from Linux to Android. The role of the Monitor is to dynamically configure all the network interfaces available on the device on which it runs, so as to always be connected to the best known network, e.g. the best Access Point in the case of the wireless interface.
Abstract:
The objectives of this PhD research were: i) to evaluate the use of the bread-making process to increase the content of β-glucans, resistant starch, fructans, dietary fibre and phenolic compounds in Kamut khorasan and wheat breads made with flours obtained from kernels at different maturation stages (milky stage and fully ripe), and ii) to study the impact of whole-grain consumption on the human gut. The fermentation and the stage of kernel development or maturation had a great impact on the amount of resistant starch, fructans and β-glucans, and their interactions were highly statistically significant. The amount of fructans was high in Kamut bread (2.1 g/100 g) at the fully ripe stage compared to wheat during industrial fermentation (baker’s yeast). Sourdough increases the content of polyphenols more than industrial fermentation does, especially in bread made from flour at the milky stage. The analysis of volatile compounds showed that the sensors of the electronic nose, as well as SPME-GC-MS, perceived more aromatic compounds in Kamut products; we can therefore assume that Kamut is more aromatic than wheat, so using it in a sourdough process can be a successful approach to improving bread taste and flavour. The determination of whole-grain biomarkers such as alkylresorcinols and others using FIE-MS and GC-TOF-MS is a valuable alternative for further metabolic investigations. The decrease of N-acetyl-glucosamine and 3-methyl-hexanedioic acid in Kamut faecal samples suggests that Kamut may have a role in modulating mucus production/degradation or even gut inflammation. This work gives a new approach to innovation strategies in bakery functional foods, helping to choose the best combination of kernel maturation stage, fermentation process and baking temperature.
Abstract:
China is a large country characterized by remarkable growth and distinct regional diversity. Spatial disparity has always been a hot issue, since China has been struggling to follow a balanced growth path while still confronting unprecedented pressures and challenges. To better understand the level of inequality across the spatial distributions of the Chinese provinces and municipalities, and to estimate the dynamic trajectory of sustainable development in China, I constructed the Composite Index of Regional Development (CIRD) with five sub-pillars/dimensions: the Macroeconomic Index (MEI), Science and Innovation Index (SCI), Environmental Sustainability Index (ESI), Human Capital Index (HCI) and Public Facilities Index (PFI), endeavouring to cover the various fields of regional socioeconomic development. Ranking reports on the five sub-dimensions and on the aggregated CIRD were provided in order to better measure the developmental degrees of 31 (or 30) Chinese provinces and municipalities over the 13 years from 1998 to 2010, the time interval of three “Five-Year Plans”. Further empirical applications of the CIRD focused on clustering and convergence estimation, attempting to fill the gap in quantifying the levels of comprehensive regional socioeconomic development and estimating the dynamic convergence trajectory of regional sustainable development in the long run. Four geographically oriented clusters were identified on the map on the basis of cluster analysis, and club convergence was observed among the Chinese provinces and municipalities based on stochastic kernel density estimation.
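The club-convergence finding rests on kernel density estimation: if the cross-sectional distribution of a development index keeps several modes over time, regions are converging to separate "clubs" rather than to a single level. A minimal Gaussian KDE sketch on synthetic data (the thesis' stochastic-kernel machinery over CIRD distributions is richer; bandwidth rule and all names here are ours):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth=None):
    """Plain Gaussian kernel density estimate evaluated on a 1-D grid.
    Silverman's rule of thumb picks the bandwidth when none is given."""
    x = np.asarray(samples, dtype=float)
    if bandwidth is None:
        bandwidth = 1.06 * x.std() * len(x) ** (-1 / 5)
    u = (grid[:, None] - x[None, :]) / bandwidth
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (
        len(x) * bandwidth * np.sqrt(2 * np.pi))

# Synthetic "two-club" index distribution: the estimated density keeps
# two modes instead of collapsing to one, the signature of club convergence.
rng = np.random.default_rng(7)
samples = np.concatenate([rng.normal(-2, 0.4, 500), rng.normal(2, 0.4, 500)])
grid = np.linspace(-4, 4, 201)
density = gaussian_kde(samples, grid)
```

Tracking how such modes persist or merge across the 1998-2010 panel is what turns a static density into a statement about convergence dynamics.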
Abstract:
Packet loss during transmission over a wireless network fundamentally affects the quality of the link between two end systems. The goal of the project is to implement a technique of early asymmetric retransmission of lost packets, so as to minimize data-recovery times and improve communication quality. Starting from a study of certain retransmission schemes, in particular those implemented by the ABPS (Always Best Packet Switching) project, the idea emerged that a particularly useful kind of retransmission could take place at the Access Point level: when packet loss occurs between the AP and the mobile node attached to it via IEEE 802.11, instead of waiting for the TCP retransmission performed by the source end system, the Access Point itself retransmits to the mobile node, allowing fast recovery of the lost data. This functionality was conceptually split into two parts: the first concerns the application that buffers the packets traversing the AP, keeping copies in memory so they can be retransmitted upon notification of failed delivery; the second concerns the kernel modification that enables early error notification. An application implementing early retransmission by the WiFi Access Point, i.e. retransmission before the loss notification reaches the source end-point, had already been developed, relying on a simulated error-detection mechanism. Asynchronous, early TCP retransmission had also been realized.
This document describes the realization of a new application that provides a more efficient version of the packet buffer and uses an asymmetric, early TCP retransmission mechanism, i.e. it triggers retransmission at TCP's request through notifications of the validity of the Acknowledgement field.
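The AP-side buffer described above boils down to a bounded cache of recent packets keyed by sequence number, consulted when a loss is reported. A minimal sketch of that idea (capacity bound, eviction policy and all names are illustrative assumptions, not the thesis' implementation):

```python
from collections import OrderedDict

class RetransmitBuffer:
    """Minimal sketch of the AP-side packet cache: keep copies of the
    most recent packets, keyed by sequence number, so a loss report
    can be answered locally without waiting for the source end system."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._buf = OrderedDict()           # seq -> payload, oldest first

    def store(self, seq, payload):
        """Cache a copy of a packet traversing the AP."""
        self._buf[seq] = payload
        self._buf.move_to_end(seq)
        if len(self._buf) > self.capacity:
            self._buf.popitem(last=False)   # evict the oldest copy

    def retransmit(self, seq):
        """Return the cached payload for a reported loss, or None if it
        was already evicted (fall back to end-to-end TCP recovery)."""
        return self._buf.get(seq)
```

The bounded capacity is what makes the scheme asymmetric and best-effort: a hit saves a full end-to-end round trip, while a miss simply leaves recovery to ordinary TCP.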