856 results for Population set-based methods
Abstract:
Avocado germplasm is currently conserved ex situ in field repositories across the globe, including Australia. Maintaining germplasm in the field is costly, labour- and land-intensive, exposed to natural disasters, and constantly at risk from abiotic and biotic stresses. The aim of this study was to overcome these problems by using cryopreservation to store avocado (Persea americana Mill.) somatic embryos (SE). Two vitrification-based methods of cryopreservation (cryovial and droplet-vitrification) were optimised using four avocado cultivars ('A10', 'Reed', 'Velvick' and 'Duke-7'). SE of the four cultivars were stored short-term (one hour) in liquid nitrogen (LN) using the cryovial-vitrification method and showed viabilities of 91%, 73%, 86% and 80%, respectively. With the droplet-vitrification method, viabilities of 100%, 85% and 93% were recorded for 'A10', 'Reed' and 'Velvick'. For long-term storage, SE of cultivars 'A10', 'Reed' and 'Velvick' were successfully recovered with viabilities of 65–100% after 3 months of LN storage. For cultivars 'Reed' and 'Velvick', SE were recovered after 12 months of LN storage with viabilities of 67% and 59%, respectively. The outcome of this work contributes towards the establishment of a cryopreservation protocol that is applicable across multiple avocado cultivars.
Abstract:
Background: Despite a declining incidence, Helicobacter pylori (H. pylori) remains one of the most common bacterial infectious diseases in humans. Infection with H. pylori is a risk factor for diseases such as gastroduodenal ulcers, gastric carcinoma and MALT (mucosa-associated lymphoid tissue) lymphoma. Various invasive and non-invasive procedures are available for the diagnosis of H. pylori. The 13C-urea breath test is recommended for monitoring the success of eradication therapy, but is currently not used routinely for the primary diagnosis of H. pylori in Germany. Research question: What is the medical and health-economic benefit of testing for H. pylori colonisation with the 13C-urea breath test in primary diagnosis compared with invasive and non-invasive diagnostic procedures? Methods: Based on a systematic literature search combined with a hand search, studies on the diagnostic accuracy and cost-effectiveness of the 13C-urea breath test compared with other diagnostic procedures for the primary detection of H. pylori are identified. Only medical studies that compare the 13C-urea breath test directly with other H. pylori tests are included; the gold standard is one of the biopsy-based test procedures or a combination thereof. For the health-economic assessment, only full health-economic evaluations in which the cost-effectiveness of the 13C-urea breath test is compared directly with other H. pylori tests are considered. Results: Thirty medical studies are included in the present report. Compared with the immunoglobulin G (IgG) test, the sensitivity of the 13C-urea breath test is higher in twelve comparisons, lower in six and equal in one, and its specificity is higher in 13, lower in three and equal in two. Compared with the stool antigen test, the sensitivity of the 13C-urea breath test is higher in nine comparisons, lower in three and equal in one, and its specificity is higher in nine, lower in two and equal in two. Compared with the rapid urease test, the sensitivity of the 13C-urea breath test is higher in four comparisons, lower in three and equal in four, and its specificity is higher in five, lower in five and equal in one. Compared with histology, the sensitivity of the 13C-urea breath test is higher in one comparison and lower in two, and its specificity is higher in two and lower in one. In one comparison each, no difference is found between the 13C-urea breath test and the 14C-urea breath test, and a lower sensitivity with a higher specificity is found compared with the polymerase chain reaction (PCR). Whether the reported differences are statistically significant is stated in only six of the 30 studies. Nine health-economic evaluations are considered in the present report. The test-and-treat strategy based on the 13C-urea breath test is compared with a serology-based test-and-treat approach in six studies and with a stool-antigen-test-based test-and-treat approach in three studies. The breath-test approach is cost-effective compared with the serological method in three cases and is dominated by the stool antigen test strategy in one.
Four studies compare the test-and-treat strategy based on the 13C-urea breath test with empirical antisecretory therapy, with the breath-test approach proving cost-effective in two of them, and two studies compare it with empirical eradication therapy. In five studies, the test-and-treat strategy based on the 13C-urea breath test is compared with an endoscopy-based strategy; the breath-test strategy dominates the endoscopic procedure in two studies and is dominated by it in one. Discussion: Both the medical and the economic studies show more or less serious shortcomings and yield heterogeneous results. The majority of the medical studies provide no information on the statistical significance of the reported differences between the respective test procedures. In direct comparisons, the 13C-urea breath test mostly shows a higher diagnostic accuracy than the IgG test and the stool antigen test. No trend regarding sensitivity can be derived from the comparisons with the rapid urease test, whereas the specificity of the 13C-urea breath test can be considered higher. Too few results are available for the comparisons of the 13C-urea breath test with histology, the 14C-urea breath test and PCR. In the included economic literature, some study results point to the cost-effectiveness of the test-and-treat strategy based on the 13C-urea breath test compared with the serology-based test-and-treat approach and with empirical antisecretory therapy. Valid results and economic evidence are lacking to derive trends regarding the cost-effectiveness of the breath-test strategy compared with the stool-antigen-based test-and-treat strategy and with empirical eradication therapy. The findings regarding the comparison with endoscopy-based procedures are too heterogeneous in this respect. Overall, none of the economic models can fully capture the complexity of managing patients with dyspeptic complaints. Conclusions/recommendations: In summary, the available evidence on the medical and economic evaluation of the 13C-urea breath test compared with other diagnostic methods is not sufficient to recommend the breath test as a standard primary diagnostic procedure within a test-and-treat strategy for the management of patients with dyspeptic complaints in the German healthcare setting instead of an endoscopy-based approach, particularly in light of the guidelines of the Deutsche Gesellschaft für Verdauungs- und Stoffwechselkrankheiten (DGVS).
Abstract:
Forty-four species of Colletotrichum are confirmed as present in Australia based on DNA sequencing analyses. Many of these species were identified directly as a result of two workshops organised by the Subcommittee on Plant Health Diagnostics in Australia in 2015 that covered morphological and molecular approaches to identification of Colletotrichum. There are several other species of Colletotrichum reported from Australia that remain to be substantiated by DNA sequence-based methods. This body of work aims to provide a basis from which to critically examine a number of isolates of Colletotrichum deposited in Australian culture collections.
Abstract:
Leishmania donovani is the known causative agent of both cutaneous (CL) and visceral leishmaniasis in Sri Lanka. CL is considered to be under-reported, partly due to the relatively poor sensitivity and specificity of microscopic diagnosis. We compared the robustness of three previously described polymerase chain reaction (PCR)-based methods to detect Leishmania DNA in 38 punch biopsy samples from patients who presented with suspected lesions in 2010. Both the Leishmania genus-specific JW11/JW12 kDNA and LITSR/L5.8S internal transcribed spacer 1 (ITS1) PCR assays detected 92% (35/38) of the samples, whereas a kDNA assay specific for L. donovani (LdF/LdR) detected only 71% (27/38). All positive samples showed an L. donovani banding pattern upon HaeIII ITS1 PCR-restriction fragment length polymorphism analysis. PCR assay specificity was evaluated in samples containing Mycobacterium tuberculosis, Mycobacterium leprae, and human DNA, and there was no cross-amplification in the JW11/JW12 and LITSR/L5.8S PCR assays. The LdF/LdR PCR assay did not amplify M. leprae or human DNA, although 500 bp and 700 bp bands were observed in M. tuberculosis samples. In conclusion, this study shows that Sri Lankan CL can be diagnosed with high accuracy, down to genus and species level, using Leishmania DNA PCR assays.
In Situ Characterization of Optical Absorption by Carbonaceous Aerosols: Calibration and Measurement
Abstract:
Light absorption by aerosols has a great impact on climate change. A photoacoustic spectrometer (PA) coupled with aerosol-based classification techniques provides an in situ method that can quantify light absorption by aerosols in real time, yet significant differences have been reported between this method and filter-based methods or the so-called difference method based upon light extinction and light scattering measurements. This dissertation focuses on developing calibration techniques for instruments used in measuring the light absorption cross section, including both particle diameter measurements by the differential mobility analyzer (DMA) and light absorption measurements by the PA. Appropriate reference materials were explored for the calibration/validation of both measurements. The light absorption of carbonaceous aerosols was also investigated to provide a fundamental understanding of the absorption mechanism. The first topic of interest in this dissertation is the development of calibration nanoparticles. In this study, bionanoparticles were confirmed to be a promising reference material for particle diameter as well as ion mobility. Experimentally, bionanoparticles demonstrated outstanding homogeneity in mobility compared to currently used calibration particles. A numerical method was developed to calculate the true distribution and to explain the broadening of the measured distribution. The high stability of bionanoparticles was also confirmed. For PA measurements, three aerosols with spherical or near-spherical shapes were investigated as possible candidates for a reference standard: C60, copper and silver. Comparisons were made between experimental photoacoustic absorption data and Mie theory calculations. This resulted in the identification of C60 particles with a mobility diameter of 150 nm to 400 nm as an absorbing standard at wavelengths of 405 nm and 660 nm. Copper particles with a mobility diameter of 80 nm to 300 nm are also shown to be a promising reference candidate at a wavelength of 405 nm. The second topic of this dissertation focuses on the investigation of light absorption by carbonaceous particles using the PA. Optical absorption spectra of size- and mass-selected laboratory-generated aerosols consisting of black carbon (BC), BC with a non-absorbing coating (ammonium sulfate and sodium chloride) and BC with a weakly absorbing coating (brown carbon derived from humic acid) were measured across the visible to near-IR (500 nm to 840 nm). The manner in which BC mixed with each coating material was investigated. The absorption enhancement of BC was determined to be wavelength dependent. Optical absorption spectra were also taken for size- and mass-selected smoldering smoke produced from six types of commonly used wood in a laboratory-scale apparatus.
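As a rough illustration of how an absorption cross section ties together particle size, wavelength and complex refractive index, the sketch below evaluates the Rayleigh small-particle approximation in Python. This is only an illustrative simplification, not the full Mie calculation used in the dissertation, and the black-carbon refractive index is an assumed, commonly cited value rather than a number taken from this work.

```python
import numpy as np

def rayleigh_absorption_cross_section(d_nm, wavelength_nm, m):
    """Absorption cross section (nm^2) of a small sphere in the Rayleigh limit.

    C_abs = (8 * pi^2 * a^3 / lambda) * Im[(m^2 - 1) / (m^2 + 2)],
    valid only when the particle is much smaller than the wavelength.
    """
    a = d_nm / 2.0                              # particle radius (nm)
    lorentz = (m**2 - 1.0) / (m**2 + 2.0)       # Lorentz-Lorenz factor
    return 8.0 * np.pi**2 * a**3 / wavelength_nm * lorentz.imag

# Assumed refractive index ~1.95 + 0.79i, often quoted for black carbon.
m_bc = complex(1.95, 0.79)
for wl in (405.0, 660.0):                       # wavelengths mentioned in the abstract (nm)
    c_abs = rayleigh_absorption_cross_section(d_nm=100.0, wavelength_nm=wl, m=m_bc)
    print(f"lambda = {wl:5.0f} nm  ->  C_abs ~ {c_abs:,.0f} nm^2")
```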
Abstract:
Part 1: Introduction
Abstract:
Very high resolution remotely sensed images are an important tool for monitoring fragmented agricultural landscapes, allowing farmers and policy makers to make better decisions regarding management practices. An object-based methodology is proposed for the automatic generation of thematic maps of the classes present in the scene; it combines edge-based and superpixel processing for small agricultural parcels. The methodology employs superpixels instead of pixels as minimal processing units and provides a link between them and meaningful objects (obtained by the edge-based method) in order to facilitate the analysis of parcels. Performance analysis on a scene dominated by small agricultural parcels indicates that combining the superpixel and edge-based methods achieves a classification accuracy slightly better than either method applied separately, and comparable to that of traditional object-based analysis, while remaining fully automatic.
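A minimal sketch of the general idea of using superpixels as processing units and linking them to edge-delimited objects is given below, assuming scikit-image; the segmentation functions, parameter values and the stand-in image are illustrative placeholders and not the authors' actual pipeline.

```python
import numpy as np
from skimage import data, filters, measure, segmentation

# Example image standing in for a very-high-resolution agricultural scene.
image = data.astronaut()
gray = image.mean(axis=2) / 255.0

# Superpixels as minimal processing units (instead of single pixels).
superpixels = segmentation.slic(image, n_segments=600, compactness=10, start_label=1)

# Edge-based objects: compute an edge map, then label the low-gradient regions between edges.
edges = filters.sobel(gray)
objects = measure.label(edges < 0.05)           # crude "parcel" objects bounded by strong edges

# Link each superpixel to the edge-based object it mostly overlaps with,
# so parcels can be analysed per object while classification runs per superpixel.
links = {}
for sp in np.unique(superpixels):
    mask = superpixels == sp
    obj_ids, counts = np.unique(objects[mask], return_counts=True)
    links[int(sp)] = int(obj_ids[np.argmax(counts)])

print(f"{len(links)} superpixels linked to {len(set(links.values()))} edge-based objects")
```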
Abstract:
Facing a new approach, present not only within the Ecuadorian penal system but in most Latin American legal systems, with European and North American origins, stands a criminal policy of agility, efficiency, negotiation, effectiveness and speed, aimed at resolving the criminal disputes that arise daily through special procedures distinct from the traditional procedure known as the Ordinary Procedure (Procedimiento Ordinario). This work therefore seeks to analyse and establish, on the basis of the Código Orgánico Integral Penal, the special procedures, focusing our study on the Abbreviated Procedure (Procedimiento Abreviado) with respect to its rules, application and effectiveness, providing a concise analysis of its background, nature and conduct, and arguing, on the basis of constitutional principles, for the correct and appropriate application of this novel procedure. To that end, Chapter I addresses the criminal process and its historical background in Ecuador, followed by an analysis of constitutional principles; Chapter II then refers to the procedural parties involved in criminal proceedings; Chapter III deals with the special procedures; and Chapter IV concludes with the study of the Abbreviated Procedure itself.
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is "power estimation". Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms and writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of functions in the code that repeat during execution and building the power model on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) for the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons performed during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments and offers high accuracy and orders-of-magnitude speedup over simulation-based methods.
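The sketch below illustrates the shape of such a comparison-count-based energy model for insertion sort: it counts key comparisons empirically and maps the average count to an energy estimate through per-operation coefficients. The coefficients are hypothetical placeholders (the thesis measures them on the LEON3 core and derives the average count analytically with MOQA), so this is only a toy illustration of the modelling idea.

```python
import random

def insertion_sort_comparisons(values):
    """Sort a copy of `values` and return the number of key comparisons performed."""
    data, comparisons = list(values), 0
    for i in range(1, len(data)):
        key, j = data[i], i - 1
        while j >= 0:
            comparisons += 1                  # one key comparison
            if data[j] <= key:
                break
            data[j + 1] = data[j]             # shift larger element to the right
            j -= 1
        data[j + 1] = key
    return comparisons

# Hypothetical per-operation energy coefficients (joules); real values would be
# measured on the target core, not these placeholders.
E_PER_COMPARISON = 2.0e-9
E_FIXED_PER_ELEMENT = 5.0e-9

def estimated_energy(n, trials=200):
    """Average-case energy estimate for insertion sort on n random, distinct elements."""
    avg_cmp = sum(insertion_sort_comparisons(random.sample(range(10 * n), n))
                  for _ in range(trials)) / trials
    return avg_cmp * E_PER_COMPARISON + n * E_FIXED_PER_ELEMENT

for n in (32, 128, 512):
    print(f"n = {n:4d}: estimated average energy ~ {estimated_energy(n):.3e} J")
```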
Abstract:
Recent developments in interactive technologies have seen major changes in the manner in which artists, performers, and creative individuals interact with digital music technology, driven by the increasing variety of interactive devices readily available today. Digital Musical Instruments (DMIs) present musicians with performance challenges that are unique to this form of computer music. One of the most significant deviations from conventional acoustic musical instruments is the level of physical feedback conveyed by the instrument to the user. Currently, new interfaces for musical expression are not designed to be as physically communicative as acoustic instruments. Specifically, DMIs are often devoid of haptic feedback and therefore lack the ability to impart important performance information to the user. Moreover, there is currently no standardised way to measure the effect of this lack of physical feedback. Best practice would call for a set of methods to effectively, repeatably, and quantifiably evaluate the functionality, usability, and user experience of DMIs. Earlier theoretical and technological applications of haptics have tried to address device performance issues associated with the lack of feedback in DMI designs, and it has been argued that the level of haptic feedback presented to a user can significantly affect the user's overall emotive feeling towards a musical device. The outcomes of the investigations contained within this thesis are intended to inform the design of new haptic interfaces.
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field regained popularity over the last few years and is still undergoing, like statistical analysis in general, a transformation towards high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long-run) principal-component-based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection-based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series, close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
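For readers unfamiliar with the underlying »change in the mean« statistic, the sketch below evaluates the classical one-dimensional CUSUM test on i.i.d. data; it is only a minimal illustration of the basic quantity, not the Hilbert-space-valued, dependent-data or HDLSS theory developed in the thesis.

```python
import numpy as np

def cusum_change_in_mean(x):
    """Classical CUSUM statistic for a change in the mean of a 1-D sample.

    Returns max_k |S_k - (k/n) S_n| / (sigma_hat * sqrt(n)) and the maximising
    index k (the CUSUM change-point estimate). Under the null of a constant
    mean (i.i.d. data) the statistic behaves asymptotically like the supremum
    of a Brownian bridge.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    k = np.arange(1, n + 1)
    partial = np.cumsum(x)
    bridge = np.abs(partial - k / n * partial[-1])
    sigma = x.std(ddof=1)                     # variance estimate (i.i.d. case only)
    return bridge.max() / (sigma * np.sqrt(n)), int(bridge.argmax()) + 1

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 150),    # mean 0 before the break
                         rng.normal(0.8, 1.0, 150)])   # mean 0.8 after the break
stat, k_hat = cusum_change_in_mean(series)
print(f"CUSUM statistic = {stat:.2f}, estimated change point at k = {k_hat}")
# ~1.36 is the asymptotic 95% critical value of sup|Brownian bridge|.
```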
Abstract:
Research has demonstrated that mining activities can cause serious impacts on the environment, as well as on the surrounding communities, mainly due to the unsafe storage of mine tailings. This research focuses on the sustainability assessment of new technologies for the recovery of metals from mine residues. The assessment consists of the evaluation of environmental, economic, and social impacts through life-cycle-based methods: Life Cycle Assessment (LCA), Life Cycle Costing (LCC), and Social Life Cycle Assessment (SLCA). The analyses are performed on the Mondo Minerals bioleaching project, whose aim is to recover nickel and cobalt from the Sotkamo and Vuonos mine tailings. The LCA demonstrates that the project contributes to the avoided production of nickel and cobalt concentrates from new resources, hence reducing several environmental impacts. The LCC analysis shows that the company's main costs are linked to the bioleaching process, driven by electricity consumption and the chemicals used. The SLCA analyses the impacts on three main stakeholder categories: workers, local community, and society. The results show that a fair salary (or the absence of it) impacts the workers the most, while impacts on the local community stakeholder category are related to access to material resources. Health and safety is the most impacted category for the society stakeholder. The environmental and economic analyses demonstrate that the recovery of mine tailings may represent a good opportunity for mining companies both to reduce the environmental impacts linked to mine tailings and to increase profitability. In particular, the project helps reduce the amounts of metals extracted from new resources and demonstrates that the use of bioleaching technology for the extraction of metals can be economically profitable.
Abstract:
In the last decade, manufacturing companies have been facing two significant challenges. First, digitalization requires the adoption of Industry 4.0 technologies and enables the creation of smart, connected, self-aware, and self-predictive factories. Second, the focus on sustainability requires evaluating and reducing the impact of the implemented solutions from economic and social points of view. In manufacturing companies, the maintenance of physical assets plays a critical role. Increasing the reliability and availability of production systems minimizes system downtime; in addition, proper system functioning avoids production waste and potentially catastrophic accidents. Digitalization and new ICT technologies have assumed a relevant role in maintenance strategies. They allow the health condition of machinery to be assessed at any point in time. Moreover, they allow the future behavior of machinery to be predicted, so that maintenance interventions can be planned and the useful life of components can be exploited up to the instant before failure. This dissertation provides insights on Predictive Maintenance goals and tools in Industry 4.0 and proposes a novel data acquisition, processing, sharing, and storage framework that addresses typical issues machine producers and users encounter. The research elaborates on two research questions that narrow down the potential approaches to data acquisition, processing, and analysis for fault diagnostics in evolving environments. The research activity is developed according to a research framework, where the research questions are addressed by research levers that are explored according to research topics. Each topic requires a specific set of methods and approaches; however, the overarching methodological approach presented in this dissertation includes three fundamental aspects: the maximization of the quality level of input data, the use of Machine Learning methods for data analysis, and the use of case studies deriving from both controlled environments (laboratory) and real-world instances.
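As a minimal sketch of the kind of Machine Learning-based fault diagnostics mentioned above, the example below trains a random forest on synthetic condition-monitoring features; the features, labels and model choice are illustrative assumptions (scikit-learn is assumed available) and do not reproduce the dissertation's actual data pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

def synthetic_condition_features(n, faulty):
    """Toy feature vectors (e.g. RMS, kurtosis-like score, peak value) of a vibration signal."""
    base = rng.normal([1.0, 3.0, 4.0], 0.2, size=(n, 3))
    return base + (rng.normal([0.8, 2.0, 3.0], 0.4, size=(n, 3)) if faulty else 0.0)

X = np.vstack([synthetic_condition_features(500, faulty=False),
               synthetic_condition_features(500, faulty=True)])
y = np.array([0] * 500 + [1] * 500)           # 0 = healthy, 1 = faulty

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["healthy", "faulty"]))
```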
Abstract:
Biology is now a "Big Data Science" thanks to technological advancements allowing the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of these data can be experimentally characterized, hence the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true for membrane proteins, on which my research project is focused, leading to the development of two machine-learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (bidirectional long short-term memory) and a probabilistic graphical model (grammatical-restrained hidden conditional random field). Moreover, it introduced a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all the available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the attachment of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples and mean physicochemical features. SVMyr outperformed all the available methods for co-translational myristoylation prediction. In addition, it allows (as a unique feature) the prediction of post-translational myristoylation. Both tools described here are designed with the best practices for the development of machine-learning-based tools outlined by the bioinformatics community in mind. Moreover, they are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence data and annotated data.
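For context on the hydrophobic moment mentioned above, the sketch below computes the classical (Eisenberg-style) moment of a peptide segment in Python; the Kyte-Doolittle hydropathy scale and the strand periodicity are illustrative choices, whereas BetAware-Deep uses its own modified, evolution-aware formulation.

```python
import math

# Kyte-Doolittle hydropathy values, used here purely for illustration.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydrophobic_moment(sequence, delta_deg=160.0, scale=KD):
    """Classical hydrophobic moment of a peptide segment.

    mu_H = |sum_j h_j * exp(i * j * delta)|, with delta the angular periodicity
    per residue (~100 deg for an alpha-helix, ~160 deg often used for beta-strands).
    """
    delta = math.radians(delta_deg)
    sin_sum = sum(scale[aa] * math.sin(j * delta) for j, aa in enumerate(sequence))
    cos_sum = sum(scale[aa] * math.cos(j * delta) for j, aa in enumerate(sequence))
    return math.hypot(sin_sum, cos_sum)

# Alternating hydrophobic/polar pattern typical of a membrane-facing beta-strand.
print(f"mu_H = {hydrophobic_moment('VSVALSVGLS'):.2f}")
```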
Abstract:
The simulation of ultrafast photoinduced processes is a fundamental step towards understanding the underlying molecular mechanism and interpreting/predicting experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations, but, in order to obtain reliable results, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models for photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The required ingredients for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures make it possible to obtain solid and extended databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (like gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations allowed the mechanism and time scale of the internal conversion to be elucidated, reproducing or even predicting new transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility of parameterising different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system and pave the way for exact quantum dynamics with multiple degrees of freedom.