15 results for kernel
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain: the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminant function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases; such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions.
Specifically, we propose to encode the input trees with an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, in the learning phase as well. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent substructures shared by different trees, thus reducing the computational burden and storage requirements.
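As an illustration of the convolution-kernel idea described above, the following sketch counts the rooted subtrees shared by two trees. This is a deliberately simplified toy (the (label, children) tree encoding, the function names and the canonical-string trick are ours), not one of the kernels proposed in the thesis:

```python
from collections import Counter

def collect(tree, bag):
    """Return the canonical string of the subtree rooted at `tree`
    (a (label, children) pair) and record every rooted subtree
    encountered in `bag`, a Counter of canonical strings."""
    label, children = tree
    canon = "(" + label + "".join(collect(c, bag) for c in children) + ")"
    bag[canon] += 1
    return canon

def subtree_kernel(t1, t2):
    """K(t1, t2) = number of pairs of identical rooted subtrees,
    i.e. the dot product of the two subtree-count vectors."""
    b1, b2 = Counter(), Counter()
    collect(t1, b1)
    collect(t2, b2)
    return sum(b1[s] * b2[s] for s in b1.keys() & b2.keys())

# Two small parse-tree-like structures sharing the leaf "a"
# and the fragment (b (a)):
t1 = ("s", [("b", [("a", [])]), ("a", [])])
t2 = ("s", [("b", [("a", [])])])
print(subtree_kernel(t1, t2))  # -> 3
```

Note how the sparsity problem mentioned above shows up even here: matching is exact on labels, so with a large label domain most pairs of trees share almost no canonical strings and the kernel value collapses towards zero.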
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with speeding up the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified on the C5G7 2D and 3D benchmarks, showing an appreciable reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers while varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
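The direct quadrature of Legendre moments mentioned above can be sketched generically. The snippet below assumes the common normalization f_l = (2l+1)/2 ∫ f(μ) P_l(μ) dμ, so that f(μ) = Σ_l f_l P_l(μ); the thesis's actual transfer kernel and conventions may differ:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def legendre_moments(f, lmax, n=64):
    """Moments f_l = (2l+1)/2 * integral_{-1}^{1} f(mu) P_l(mu) dmu,
    evaluated with an n-point Gauss-Legendre rule (exact for
    polynomial integrands up to degree 2n-1)."""
    x, w = leggauss(n)  # quadrature nodes and weights on [-1, 1]
    fx = f(x)
    return np.array([(2 * l + 1) / 2 * np.sum(w * fx * Legendre.basis(l)(x))
                     for l in range(lmax + 1)])

# Isotropic scattering (f = 1/2) has f_0 = 1/2 and no higher moments.
print(legendre_moments(lambda mu: np.full_like(mu, 0.5), 3))
```

Increasing n until the moments stabilize gives the kind of convergence analysis the abstract refers to.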
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply the described strategy in a semiautomatic way. The application of UMESTRAT is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energy values at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy, for all elements with Z = 1–92 in the energy range 1–150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung radiative contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows the introduction of this contribution into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
Abstract:
We study some perturbative and non-perturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value (vev) given by the dynamics of the Standard Model, and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. Then, we solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vev. As perturbative effects, we consider the leading logarithmic resummation in small-Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduce the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focus on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density functions evolve in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work, we remark many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. Studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space; however, we are able to provide a characterization independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation is interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
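The long-range dependence of fGn discussed above can be seen directly in its autocovariance function. A minimal sketch of the standard formula (function names are ours):

```python
def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at integer lag k:
    gamma(k) = sigma2/2 * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)).
    For H > 1/2 the covariances decay like k^(2H-2) and are not
    summable -- the hallmark of long-range dependence; for H = 1/2
    fGn reduces to white noise."""
    a = 2.0 * H
    return 0.5 * sigma2 * (abs(k + 1) ** a - 2 * abs(k) ** a + abs(k - 1) ** a)

print(fgn_autocov(1, 0.5))  # white-noise case: zero at every lag k >= 1
```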
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. A global tomographic image of the Earth can be written as a linear combination of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome these shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study the signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, so that each component can be studied with a resolution matched to its scale; they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes.
Wavelets are essentially used in two ways when applied to studies of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in the geophysical field are the object of study of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with some synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth, or between different profiles of the same model.
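The scale-by-scale splitting described above can be illustrated with the simplest wavelet, the Haar wavelet: one analysis step separates a signal into a coarse approximation and a detail part. This is a generic textbook sketch, unrelated to the specific wavelets used in the thesis:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise
    sums (approximation) and differences (detail), each scaled by
    1/sqrt(2) so that total energy is preserved. Length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# A flat pair contributes no detail; the jump at the end of the signal
# shows up only in the second detail coefficient.
a, d = haar_step([4.0, 4.0, 2.0, 0.0])
```

Recursing on the approximation part yields the multi-resolution decomposition that makes wavelet parameterizations attractive for signals with localized, multi-scale features.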
Abstract:
The thesis reports the synthesis and the chemical, structural and spectroscopic characterization of a series of new rhodium and Au-Fe carbonyl clusters. Most of the new high-nuclearity rhodium carbonyl clusters have been obtained by redox condensation of preformed rhodium clusters reacting with a species in a different oxidation state generated in situ by mild oxidation. In particular, the starting Rh carbonyl cluster is the readily available [Rh7(CO)16]3- compound. The oxidized species is generated in situ by reaction of the above with a stoichiometric defect of a mild oxidizing agent, such as [M(H2O)x]n+ aquo complexes possessing different pKa's and Mn+/M potentials. The experimental results are roughly in keeping with the conclusion that aquo complexes featuring E°(Mn+/M) < ca. -0.20 V do not lead to the formation of hetero-metallic Rh clusters, probably because of the inadequacy of their redox potentials relative to that of the [Rh7(CO)16]3-/2- redox couple; only homometallic clusters have been fairly selectively obtained. As a fallout of the above investigations, a convenient and reproducible synthesis of the ill-characterized species [HnRh22(CO)35]8-n has also been discovered. The ready availability of the above compound triggered its complete spectroscopic and chemical characterization, since it is the only example of a rhodium carbonyl cluster with two interstitial metal atoms. The presence of several hydride atoms, first suggested by chemical evidence, has been confirmed by ESI-MS and 1H-NMR, as well as by new structural characterizations of its tetra- and penta-anion. All these species display redox behaviour and behave as molecular capacitors. Their reactivity with CO gives rise to a new series of Rh22 clusters containing a different number of carbonyl groups, which have likewise been fully characterized. Formation of hetero-metallic Rh clusters was only observed when using SnCl2·2H2O as the oxidizing agent.
Almost all of the Rh-Sn carbonyl clusters obtained have icosahedral geometry. The only previously reported example of an icosahedral Rh cluster with an interstitial atom is the [Rh12Sb(CO)27]3- trianion. They have very similar metal frameworks, as well as the same number of CO ligands and, consequently, of cluster valence electrons (CVEs). A first interesting aspect of the chemistry of the Rh-Sn system is that it also provides icosahedral clusters making exception to the cluster-borane analogy, by showing electron counts from 166 to 171. As a result, the most electron-poor species, namely [Rh12Sn(CO)25]4-, displays a redox propensity, even if disfavoured by the relatively high negative charge of the starting anion, and moreover behaves as a chloride scavenger. The presence of these bulky interstitial atoms results in the metal framework adopting structures different from a close-packed metal lattice and, above all, imparts a notable stability to the resulting cluster. An organometallic approach to a new kind of molecular ligand-stabilized gold nanoparticle, in which Fe(CO)x (x = 3, 4) moieties protect and stabilize the gold kernel, has also been undertaken. As a result, the new clusters [Au21{Fe(CO)4}10]5-, [Au22{Fe(CO)4}12]6-, [Au28{Fe(CO)3}4{Fe(CO)4}10]8- and [Au34{Fe(CO)3}6{Fe(CO)4}8]6- have been isolated and characterized. As suggested by isolobal analogies, the Fe(CO)4 molecular fragment may display the same ligand capability as thiolates, and beyond. Indeed, the above clusters bear a structural resemblance to the structurally characterized gold thiolates, showing Fe-Au-Fe, rather than S-Au-S, staple motifs. Staple motifs, the oxidation state of surface gold atoms and the energy of the Au atomic orbitals are likely to concur in delaying the insulator-to-metal transition as the nuclearity of gold thiolates increases, relative to the more compact transition-metal carbonyl clusters.
Finally, a few previously reported Au-Fe carbonyl clusters have been used as precursors in the preparation of supported gold catalysts. The catalysts obtained are active for toluene oxidation and the catalytic activity depends on the Fe/Au cluster loading over TiO2.
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well.
Finally, a GPU-accelerated graphical EEG real-time source localization software package was implemented. Thanks to GPU acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
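The block decomposition that makes triangular matrix inversion attractive on multiple GPUs can be sketched in a few lines. This serial NumPy toy only illustrates the recursive identity; it says nothing about the actual dual-GPU kernels, compressed storage scheme, or tuning of the thesis:

```python
import numpy as np

def tri_inv(L):
    """Invert a nonsingular lower-triangular matrix recursively using
    inv([[A, 0], [B, C]]) = [[inv(A), 0], [-inv(C) @ B @ inv(A), inv(C)]].
    The two diagonal-block inverses are independent, so they can be
    computed concurrently (e.g. one per GPU) before the off-diagonal
    update; here everything runs serially."""
    n = L.shape[0]
    if n == 1:
        return np.array([[1.0 / L[0, 0]]])
    k = n // 2
    A, B, C = L[:k, :k], L[k:, :k], L[k:, k:]
    Ai, Ci = tri_inv(A), tri_inv(C)
    out = np.zeros_like(L, dtype=float)
    out[:k, :k] = Ai
    out[k:, k:] = Ci
    out[k:, :k] = -Ci @ B @ Ai
    return out
```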
Abstract:
Fusarium Head Blight (FHB) is a worldwide cereal disease responsible for significant yield reductions, inferior grain quality and mycotoxin accumulation, with Fusarium graminearum and F. culmorum as the prevalent causal agents. FHB has been endemic in Italy since 1995, while there are no records of its presence in Syria. Forty-eight Syrian and forty-six Italian wheat kernel samples were collected from different localities and analyzed for fungal presence and mycotoxin contamination. Fusarium strains were identified morphologically, with molecular confirmation performed only for some species. Further differentiation of the trichothecene chemotypes of the F. graminearum and F. culmorum strains was conducted by PCR assays. Fusarium spp. were present in 62.5% of the Syrian samples: 3-acetyl-deoxynivalenol (3Ac-DON) and nivalenol (NIV) chemotypes were found in F. culmorum, whilst all F. graminearum strains belonged to the NIV chemotype. Fusarium spp. were present in 67.4% of the Italian samples: 15Ac-DON was the prevalent chemotype in F. graminearum, while the 3Ac-DON chemotype was detected in F. culmorum. The 60 Syrian Fusarium strains tested for mycotoxin production by HPLC-MS/MS showed a prevalence of zearalenone, while the emerging mycotoxins were almost absent. The analysis of the different Syrian and Italian wheat kernel samples for their mycotoxin content showed that Syrian kernels were mainly contaminated with storage mycotoxins, aflatoxins and ochratoxin, whilst Italian grains mainly carried Fusarium mycotoxins. The aggressiveness of several Syrian F. culmorum isolates was estimated using three different assays: floret inoculation in a growth chamber, ear inoculation in the field, and a validated new Petri-dish test. The study of the behaviour of different Syrian wheat cultivars, grown under different conditions, revealed that Jory is an FHB-tolerant Syrian cultivar. This is the first study in Syria on the Fusarium spp. associated with FHB, Fusarium mycotoxin producers and grain quality.
Abstract:
In this work we investigate the existence of resonances for two-center Coulomb systems with arbitrary charges in two and three dimensions, defining them in terms of generalized complex eigenvalues of a non-selfadjoint deformation of the two-center Schrödinger operator. After giving a description of the bifurcations of the classical system for positive energies, we construct the resolvent kernels of the operators and prove that they can be extended analytically to the second Riemann sheet. The resonances are then defined and studied with numerical methods and perturbation theory.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
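To make the timekeeping role of such a structure concrete, here is a toy addressable min-heap in which a pending timer can be cancelled by key in O(log n). It is array-based, whereas the ABH of the dissertation is pointer-based, and all identifiers are ours:

```python
class AddressableHeap:
    """Array-based min-heap of (deadline, key) pairs with a key -> index
    map, so that an arbitrary entry (e.g. a cancelled timer) can be
    removed without scanning the heap. A simplified stand-in for the
    pointer-based ABH; names and layout are illustrative only."""

    def __init__(self):
        self.a = []      # heap-ordered (deadline, key) pairs
        self.pos = {}    # key -> current index in self.a

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.pos[self.a[i][1]], self.pos[self.a[j][1]] = i, j

    def _up(self, i):
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _down(self, i):
        while True:
            kids = [j for j in (2 * i + 1, 2 * i + 2) if j < len(self.a)]
            c = min(kids, key=lambda j: self.a[j], default=None)
            if c is None or self.a[i] <= self.a[c]:
                return
            self._swap(i, c)
            i = c

    def push(self, deadline, key):
        self.a.append((deadline, key))
        self.pos[key] = len(self.a) - 1
        self._up(len(self.a) - 1)

    def pop(self):
        """Remove and return the pair with the earliest deadline."""
        top = self.a[0]
        self.remove(top[1])
        return top

    def remove(self, key):
        """Delete the entry for `key` in O(log n), e.g. a cancelled timer."""
        i = self.pos[key]
        self._swap(i, len(self.a) - 1)
        self.a.pop()
        del self.pos[key]
        if i < len(self.a):
            self._down(i)
            self._up(i)
```

With one-shot timers armed and disarmed constantly, it is exactly this addressable removal that keeps tick-less timekeeping cheap.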
Abstract:
The objectives of this PhD research were: i) to evaluate the use of the bread-making process to increase the content of β-glucans, resistant starch, fructans, dietary fibers and phenolic compounds of Kamut khorasan and wheat breads made with flours obtained from kernels at different maturation stages (milky stage and fully ripe), and ii) to study the impact of whole-grain consumption on the human gut. The fermentation process and the stage of kernel development or maturation had a great impact on the amounts of resistant starch, fructans and β-glucans, and their interactions proved highly statistically significant. The amount of fructans was higher in Kamut bread (2.1 g/100 g) at the fully ripe stage than in wheat bread under industrial fermentation (baker's yeast). Sourdough fermentation increases the content of polyphenols more than industrial fermentation, especially in bread made with flour at the milky stage. The analysis of volatile compounds showed that the sensors of the electronic nose, as well as SPME-GC-MS, perceived more aromatic compounds in Kamut products; we can thus assume that Kamut is more aromatic than wheat, so using it in a sourdough process can be a successful approach to improving bread taste and flavor. The determination of whole-grain biomarkers such as alkylresorcinols and others using FIE-MS and GC-TOF-MS is a valuable alternative for further metabolic investigations. The decrease of N-acetyl-glucosamine and 3-methyl-hexanedioic acid in Kamut faecal samples suggests that Kamut can have a role in modulating mucus production/degradation or even gut inflammation. This work gives a new approach to innovation strategies in bakery functional foods, which can help in choosing the best combination of kernel maturation stage, fermentation process and baking temperature.
Abstract:
China is a large country characterized by remarkable growth and distinct regional diversity. Spatial disparity has always been a hot issue, since China has been struggling to follow a balanced growth path while still confronting unprecedented pressures and challenges. To better understand the level of inequality across the spatial distributions of Chinese provinces and municipalities, and to estimate the dynamic trajectory of sustainable development in China, I constructed the Composite Index of Regional Development (CIRD) with five sub-dimensions: the Macroeconomic Index (MEI), Science and Innovation Index (SCI), Environmental Sustainability Index (ESI), Human Capital Index (HCI) and Public Facilities Index (PFI), endeavoring to cover the various facets of regional socioeconomic development. Ranking reports on the five sub-dimensions and on the aggregated CIRD are provided in order to better measure the developmental degree of 31 (or 30) Chinese provinces and municipalities over the 13 years from 1998 to 2010, the time span of three "Five-Year Plans". Further empirical applications of the CIRD focus on clustering and convergence estimation, attempting to fill the gap in quantifying the developmental levels of regional comprehensive socioeconomics and in estimating the dynamic convergence trajectory of regional sustainable development in the long run. Four geographically oriented clusters were identified on the basis of cluster analysis, and club convergence was observed among the Chinese provinces and municipalities based on stochastic kernel density estimation.
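Kernel density estimation, the tool behind the club-convergence finding above, can be sketched in its simplest univariate Gaussian form with Silverman's rule-of-thumb bandwidth. This is a generic textbook estimator, not the exact procedure of the thesis:

```python
import math

def gaussian_kde(sample, x, h=None):
    """Evaluate the Gaussian kernel density estimate
    f_hat(x) = 1/(n*h) * sum_i phi((x - x_i) / h) at the point x.
    If no bandwidth h is given, Silverman's rule of thumb
    h = 1.06 * sd * n^(-1/5) is used (assumes n >= 2)."""
    n = len(sample)
    if h is None:
        mean = sum(sample) / n
        sd = math.sqrt(sum((s - mean) ** 2 for s in sample) / (n - 1))
        h = 1.06 * sd * n ** (-0.2)
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
               for s in sample) / (n * h * math.sqrt(2 * math.pi))
```

Estimating the cross-sectional density of a development index in several years, and watching probability mass pile up around multiple modes, is the stochastic-kernel picture of club convergence.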
Abstract:
In many application domains, data can be naturally represented as graphs. When the application of analytical solutions to a given problem is unfeasible, machine learning techniques can be a viable way to solve it. Classical machine learning techniques are defined for data represented in vectorial form; recently, some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results both from the computational-complexity and the predictive-performance points of view. Kernel methods make it possible to avoid an explicit mapping into a vectorial form by relying on kernel functions, which informally are functions calculating a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem, because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some source. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analyzed two kernels from this family, achieving state-of-the-art results, from both the computational and the classification points of view, on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible; moreover, we defined a principled way to manage memory. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods considering the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods to this domain, obtaining state-of-the-art results.
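For intuition about graph kernels in general, a classic member of the family is the Weisfeiler-Lehman subtree kernel, which repeatedly compresses each vertex's neighbourhood into a new label and compares label histograms. The sketch below is that textbook kernel, not one of the DAG-based kernels introduced in the thesis:

```python
from collections import Counter

def wl_relabel(labels, adj):
    """One Weisfeiler-Lehman refinement step: each vertex's new label
    is its old label together with the sorted multiset of its
    neighbours' labels."""
    return {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            for v in labels}

def wl_kernel(g1, g2, iters=2):
    """Sum, over refinement rounds 0..iters, of the dot product of the
    two graphs' label-count histograms. A graph is a pair
    (labels dict, adjacency dict)."""
    k = 0
    l1, l2 = dict(g1[0]), dict(g2[0])
    for _ in range(iters + 1):
        c1, c2 = Counter(l1.values()), Counter(l2.values())
        k += sum(c1[x] * c2[x] for x in c1.keys() & c2.keys())
        l1, l2 = wl_relabel(l1, g1[1]), wl_relabel(l2, g2[1])
    return k
```

Each refinement round costs roughly the number of edges, which illustrates the complexity-versus-expressiveness tradeoff the abstract refers to: more rounds distinguish more structure but cost more time.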
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs.
To corroborate our results, we present findings from real-world case studies in the avionics industry.