15 results for Domain-specific analysis

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 90.00%

Abstract:

This thesis investigates how individuals can develop, exercise and maintain autonomy and freedom in the presence of information technology. It is particularly interested in how information technology can impose autonomy constraints. The first part identifies a problem with current autonomy discourse: there is no agreed-upon object of reference when bemoaning the loss of, or risk to, an individual's autonomy. Here, the thesis introduces a pragmatic conceptual framework to classify autonomy constraints. In essence, the proposed framework divides autonomy into three categories: intrinsic autonomy, relational autonomy and informational autonomy. The second part of the thesis investigates the role of information technology in enabling and facilitating autonomy constraints. The analysis identifies eleven characteristics of information technology as it is embedded in society, so-called vectors of influence, that put an individual's autonomy at risk in a substantial way. These vectors are assigned to three sets corresponding to the general sphere of the information transfer process to which they can be attributed, namely domain-specific vectors, agent-specific vectors and information-recipient-specific vectors. The third part of the thesis investigates selected ethical and legal implications of autonomy constraints imposed by information technology. It shows the utility of the theoretical frameworks introduced earlier in the thesis when conducting an ethical analysis of autonomy-constraining technology. It also traces the concept of autonomy in the European Data Laws and investigates the impact of individuals' cultural embedding on efforts to safeguard autonomy, showing intercultural flashpoints of autonomy differences. In view of this, the thesis approaches the exercise and constraint of autonomy in the presence of information technology systems holistically. It contributes to establishing a common understanding of (intuitive) terminology and concepts, connects this to current phenomena arising out of ever-increasing interconnectivity and computational power, and helps operationalize the protection of autonomy through application of the proposed frameworks.

Relevance: 80.00%

Abstract:

This thesis deals with Context Aware Services, Smart Environments, Context Management and solutions for Device and Service Interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit information originating from the surrounding environment by means of a growing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually unable to exchange information because they lack a shared data storage and common information exchange methods. A large number of standards and domain-specific building blocks are available and heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems. The integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In this scenario it is attractive to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simpler design, development and deployment of cross-domain applications. This thesis is mainly focused on SW architectures supporting context-aware service providers, especially on the following subjects:
- user-preference-based service adaptation
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability
Experimental activities were carried out in several domains, including Cultural Heritage and indoor and personal smart spaces, all of which are considered significant test-beds in Context Aware Computing. The work evolved within European and national projects: on the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on “Processing Open Cultural Heritage”, and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis is focused on the problem statement and related work, and addresses interoperability issues and related architecture components. The second part is focused on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context, preference and profile based application broker which I designed within the EPOCH Network of Excellence
- M3: a "Semantic Web based" information sharing infrastructure for smart spaces, designed by Nokia within the European project SOFIA
- NoTA: a service- and transport-independent connectivity framework
- OSGi: the well-known Java based service support framework
The final section is dedicated to the middleware, the tools and the SW agents developed during my doctorate to support context-aware services in smart environments.
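As a minimal illustration of the context-management pattern these frameworks share (a hypothetical API sketched for this listing, not the actual MobiComp or M3 interfaces): producers publish sensor-derived context entries into a shared store, and services subscribe to the keys they adapt to.

    # Minimal context-store sketch: producers publish context entries,
    # consumer services subscribe to the keys they care about.
    from collections import defaultdict
    from typing import Any, Callable

    class ContextStore:
        def __init__(self) -> None:
            self._entries: dict[str, Any] = {}
            self._listeners: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

        def subscribe(self, key: str, listener: Callable[[str, Any], None]) -> None:
            self._listeners[key].append(listener)

        def publish(self, key: str, value: Any) -> None:
            self._entries[key] = value
            for listener in self._listeners[key]:
                listener(key, value)

    store = ContextStore()
    store.subscribe("user/position", lambda k, v: print(f"adapting service: {k} -> {v}"))
    store.publish("user/position", (44.49, 11.34))  # e.g. from a GPS sensor

Real frameworks add discovery, security and transport independence on top of this pattern; the sketch only shows the shared-store idea that makes multi-vendor information exchange possible.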

Relevance: 80.00%

Abstract:

Biomedical analyses are becoming increasingly complex, with respect to both the type of data produced and the procedures to be executed. This trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise for the development of the laboratory infrastructure is not fully satisfactory, because it creates undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from the interference and mediation of software implementation artefacts. In the system we propose, this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution on a multi-agent execution platform specialized for biomedical laboratories.
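A toy sketch of the model-driven idea (hypothetical step names and schema, not the thesis's actual ontology or execution platform): a protocol expressed in declarative domain terms is mapped onto executable artefacts, so the domain expert never touches the execution code.

    # Toy model-to-execution transformation: a protocol written as
    # declarative domain terms is dispatched to runnable operations.
    protocol_model = [
        {"step": "centrifuge", "sample": "S1", "minutes": 10},
        {"step": "incubate", "sample": "S1", "minutes": 30},
    ]

    def centrifuge(sample: str, minutes: int) -> None:
        print(f"centrifuging {sample} for {minutes} min")

    def incubate(sample: str, minutes: int) -> None:
        print(f"incubating {sample} for {minutes} min")

    OPERATIONS = {"centrifuge": centrifuge, "incubate": incubate}

    # The "transformation" step: map each model element onto executable code.
    for element in protocol_model:
        OPERATIONS[element["step"]](element["sample"], element["minutes"])

In the thesis the modelling language is grounded in a domain ontology and the transformation targets a multi-agent platform; the sketch only shows the separation of model from execution.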

Relevance: 80.00%

Abstract:

The evaluation of the structural performance of existing concrete buildings, built according to standards and materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, and consequently tests on drilled cores; on the other hand, it is accepted that non-destructive testing (NDT) cannot be used as the only means to obtain structural information, but can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity and compressive strength (the SonReb method). To this end, a large number of DT and NDT tests were performed on several school buildings located in Cesena (Italy), and the above relationships were assessed on site by correlating NDT results with the strength of cores drilled in adjacent locations. Nevertheless, concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage that it can be implemented and used in future applications much more simply than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen taking into account the developments presented in the literature in this field. The networks were trained and tested in order to arrive at a more reliable strength identification methodology.
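SonReb correlations in the literature commonly take a double power-law form, fc = a·R^b·V^c, with R the rebound index, V the ultrasonic pulse velocity and a, b, c coefficients calibrated against core strengths. The sketch below uses placeholder coefficients of a magnitude consistent with published calibrations (not the values calibrated in this study), and shows the alternative ANN route with a generic scikit-learn regressor standing in for the networks used in the thesis.

    # SonReb-style strength estimate plus a small ANN alternative.
    # Coefficients and data points are illustrative placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def sonreb_strength(R: float, V: float, a=1.2e-9, b=1.058, c=2.446) -> float:
        """Double power-law correlation fc = a * R**b * V**c (V in m/s, fc in MPa)."""
        return a * R**b * V**c

    # ANN route: learn strength from (rebound index, pulse velocity) pairs.
    X = np.array([[38, 4100], [42, 4350], [35, 3900], [45, 4500]], dtype=float)
    y = np.array([28.0, 34.0, 24.0, 39.0])  # core compressive strengths [MPa]
    ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000,
                       random_state=0).fit(X, y)
    print(sonreb_strength(40.0, 4200.0), ann.predict([[40.0, 4200.0]]))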

Relevance: 80.00%

Abstract:

Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aiming to deploy leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 microcontroller units (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture supporting them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, with two orders of magnitude performance improvement over current SoA analog/digital solutions.
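To make the sub-byte storage idea concrete, here is a plain-Python sketch (illustrative only; XpulpNN performs the equivalent operations with dedicated RISC-V instructions rather than in software): two signed 4-bit weights are packed per byte to halve memory, then unpacked for a dot product.

    # Pack two signed 4-bit weights per byte, then compute a dot product
    # after unpacking -- the kind of operation XpulpNN accelerates in hardware.
    def pack_int4(values):
        """Pack pairs of ints in [-8, 7] into single bytes."""
        packed = bytearray()
        for lo, hi in zip(values[0::2], values[1::2]):
            packed.append(((hi & 0xF) << 4) | (lo & 0xF))
        return bytes(packed)

    def unpack_int4(packed):
        def sign_extend(n):  # reinterpret a 4-bit two's-complement value
            return n - 16 if n >= 8 else n
        out = []
        for byte in packed:
            out.append(sign_extend(byte & 0xF))
            out.append(sign_extend(byte >> 4))
        return out

    weights = [3, -2, 7, -8, 1, 0]
    activations = [10, 4, -3, 2, 5, 6]
    packed = pack_int4(weights)  # 3 bytes instead of 6
    acc = sum(w * a for w, a in zip(unpack_int4(packed), activations))
    print(acc)

On a microcontroller without ISA support, the unpack and sign-extend steps cost several instructions per weight, which is precisely the overhead the proposed extensions remove.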

Relevance: 80.00%

Abstract:

In the civilization of ancient Rome, three of the most important aspects of daily life were linked to architecture: the baths, the aqueducts and the tombs. This research proposes the study of the integration of advanced systems for the documentation, management and valorisation of the funerary architectural heritage of the Via Latina and the Appia Antica, in close relation with the theme of the Cultural Landscape. Indeed, the Park of the Latin Tombs houses one of the most important funerary complexes, one which today preserves the traditional appearance of the ancient Roman landscape. Running along a paved road, like all consular roads, the Via Latina (like the Via Appia Antica), which, as Tito Livio recalled, once connected the city of Rome with Capua, still keeps the ancient urban and landscape layout "frozen". The multiform system of the Ager Romanus and of the Via Latina/Appia Antica cultural site studied in this research is therefore comparable to a living, dynamic structure and, as such, must be analysed as one. Consequently, to design a protection and management tool "powerful" enough and suitable for a historic site of such enormous importance, it was necessary to use the most advanced architectural survey techniques currently available (such as laser scanning and photogrammetry, together with specific analysis software), accompanied by an in-depth study of ancient construction techniques. The last key aspect the research aims to address is cataloguing. Historic sites and monuments cannot be maintained through passive use alone, but only by activating all protection and conservation operations through direct interventions (maintenance/restoration) and indirect ones, such as the constant cataloguing of historical works and the consequent "dynamic cataloguing".

Relevance: 80.00%

Abstract:

The application of the principle of objective good faith to the activity of the public administration, both contractual and administrative, is by now beyond dispute; however, the limits of this principle and the ways in which it operates are still the subject of scholarly and jurisprudential debate, particularly as regards the institution of the pre-contractual liability of the public administration. This latter institution is the subject of a dedicated in-depth analysis, which examines the main crucial points of the debate: the protection of the private party's legitimate expectations during negotiations with the public administration, the legal nature of pre-contractual liability, and more procedural questions, such as the division of jurisdiction between ordinary and administrative courts. In order to fully understand the functions performed by the principle of good faith in public activities, both contractual and authoritative, the research also focused on the different interpretations that legal scholars attribute today to the good-faith judgement, understood both as a contractual principle of civil law and as a new, constitutionally oriented administrative principle. The remainder of the dissertation then assesses the applicability of the good-faith judgement from an interdisciplinary point of view, both civil and administrative, with the aim of demonstrating how this principle has slowly come to pervade administrative law as well, shaping in a guarantee-oriented direction not only judicial rulings but also the legislator's regulatory interventions.

Relevance: 50.00%

Abstract:

In the scholarly publishing domain, a retraction is issued when a publication is considered erroneous by the venue in which it appeared, after it was published. The aim of this work is to uncover new insights and learn important information that help us understand the retraction phenomenon in the arts and humanities domain. Our investigation is based on a methodology defined using quantitative and qualitative measures derived from previous studies in the transdisciplinary research field of “science of science” (SciSci). The designed methodology takes into account a general case of retraction and applies a citation analysis based on five phases. Citations to retracted publications (before and after their retraction) are gathered and characterized with a set of attributes, including general metadata and information extracted from the citing entities' full text. The annotated characteristics are further considered for a statistical analysis and a textual analysis (i.e., a topic modeling analysis). The contribution of this thesis is grounded in addressing the following research questions: (RQ1) How did scholarly research cite retracted humanities publications before and after their retraction? (RQ2) Did all the humanities areas behave similarly with respect to the retraction phenomenon? (RQ3) What are the main differences and similarities in the retraction dynamics between the humanities domain and the STEM disciplines? RQ1 and RQ2 are addressed by tuning and applying the methodology to the analysis of retracted publications in the humanities domain. RQ3 is addressed on two levels, i.e., considering and comparing: (L1) the outcomes of past studies on retraction in STEM, and (L2) the results obtained from an analysis of a retraction case in STEM using the defined methodology.
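As a hedged sketch of the textual-analysis phase mentioned above (toy citation contexts and default parameters; the thesis's corpus, vectorization and model settings differ):

    # Toy topic-modeling pass over citation-context snippets.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    contexts = [
        "the retracted study on authorship attribution is cited approvingly",
        "we cite this retracted work only to note its methodological flaws",
        "the article was retracted for plagiarism after publication",
    ]
    X = CountVectorizer(stop_words="english").fit_transform(contexts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    print(lda.transform(X))  # per-context topic mixtures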

Relevance: 40.00%

Abstract:

The vast majority of known proteins have not yet been experimentally characterized, and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances, specific to each type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers, which drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await an interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
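Returning to the two network descriptors discussed above, a minimal sketch (toy contact map, not real protein data) computes the characteristic path length and clustering coefficient of a contact network with networkx:

    # Characteristic path length and clustering coefficient of a
    # protein contact network built from a (toy) contact map.
    import networkx as nx
    import numpy as np

    contact_map = np.array([  # 1 = residues i and j are in contact
        [0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 1, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ])
    G = nx.from_numpy_array(contact_map)
    print(nx.average_shortest_path_length(G))  # characteristic path length
    print(nx.average_clustering(G))            # clustering coefficient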
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists in assigning sequences to a specific group of functionally related sequences that has been built through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the level of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can serve as templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.

Relevance: 40.00%

Abstract:

As distributed collaborative applications and architectures adopt policy-based management for tasks such as access control, network security and data privacy, the management and consolidation of a large number of policies is becoming a crucial component of such policy-based systems. In large-scale distributed collaborative applications like web services, there is a need to analyze policy interactions and to integrate policies. In this thesis, we propose and implement EXAM-S, a comprehensive environment for policy analysis and management, which can be used to perform a variety of functions such as policy property analysis, policy similarity analysis, policy integration, etc. As part of this environment, we have proposed and implemented new techniques for the analysis of policies that build on a deep study of state-of-the-art techniques. Moreover, we propose an approach for solving the heterogeneity problems that usually arise when analyzing policies belonging to different domains. Our work focuses on the analysis of access control policies written in XACML (Extensible Access Control Markup Language). We consider XACML policies because XACML is a rich language that can represent many policies of interest to real-world applications and is gaining widespread adoption in industry.
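As a deliberately simplified illustration of one such property analysis (schematic XML, not valid XACML, and not the EXAM-S implementation): flag rules that assign opposite effects to the same target, the kind of conflict a policy analyzer must detect.

    # Toy conflict detection over a simplified XACML-like policy.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    POLICY = """
    <Policy>
      <Rule RuleId="r1" Effect="Permit"><Target>report.pdf</Target></Rule>
      <Rule RuleId="r2" Effect="Deny"><Target>report.pdf</Target></Rule>
    </Policy>
    """

    effects = defaultdict(set)
    for rule in ET.fromstring(POLICY).iter("Rule"):
        effects[rule.findtext("Target")].add(rule.get("Effect"))

    for target, seen in effects.items():
        if {"Permit", "Deny"} <= seen:
            print(f"potential conflict on {target}")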

Relevance: 40.00%

Abstract:

Animal neocentromeres are ectopic centromeres that have formed in non-centromeric locations and lack some of the features, such as satellite DNA sequence, that normally characterize canonical centromeres. Despite this, they are stable, functional centromeres inherited through generations. The very existence of neocentromeres provides convincing evidence that centromere specification is determined by epigenetic rather than sequence-specific mechanisms. For all these reasons, we used them as simplified models to investigate the molecular mechanisms that underlie the formation and maintenance of functional centromeres. We collected human cell lines carrying neocentromeres in different positions. To investigate the regions involved in the process at the DNA sequence level, we applied a recent technology that integrates Chromatin Immuno-Precipitation and DNA microarrays (ChIP-on-chip), using rabbit polyclonal antibodies directed against the human centromeric proteins CENP-A and CENP-C. These DNA-binding proteins are required for kinetochore function and are exclusively targeted to functional centromeres; thus, immunoprecipitation of the DNA bound by these proteins allows the isolation of centromeric sequences, including those of neocentromeres. Neocentromeres arise even in protein-coding regions. We further analyzed whether the increased scaffold attachment sites, and the correspondingly tighter chromatin, of the region involved in the neocentromerization process were still permissive to transcription of the genes encoded within it. Centromere repositioning is a phenomenon in which a neocentromere, arisen without altering the gene order and followed by the inactivation of the canonical centromere, becomes fixed in the population. It is a process of chromosome rearrangement fundamental in evolution, at the basis of speciation. The repeat-free region where the neocentromere initially forms progressively acquires extended arrays of satellite tandem repeats that may contribute to its functional stability. In this view, our attention turned to the repositioned centromere of horse chromosome 11 (ECA11). ChIP-on-chip analysis was used to define the region involved, and SNP studies mapping within the region involved in neocentromerization were carried out. We were able to describe the structural polymorphism of the chromosome 11 centromeric domain in the Equus caballus population; this polymorphism was observed even between homologous chromosomes of the same cells, a finding never described before. Genomic plasticity has had a fundamental role in evolution. Centromeres are not static, packaged regions of genomes. The key question that fascinates biologists is how centromere plasticity can be reconciled with the stability and maintenance of centromeric function. Starting from the epigenetic point of view underlying centromere formation, we decided to analyze the RNA content of centromeric chromatin. RNA, as well as secondary chemical modifications involving both histones and DNA, is a good candidate to somehow guide centromere formation and maintenance. Many observations suggest that transcription of centromeric DNA, or of other non-coding RNAs, could affect centromere formation. To date there has been no thorough investigation addressing the identity of chromatin-associated RNAs (CARs) on a global scale. This prompted us to develop techniques to identify CARs in a genome-wide approach using high-throughput genomic platforms. The future goal of this study will be to focus attention on what happens specifically inside centromeric chromatin.

Relevance: 40.00%

Abstract:

Engine developers have been putting more and more emphasis on the pursuit of maximum thermal and mechanical efficiency in recent years. Research advances have proven the effectiveness of downsized, turbocharged and direct-injection concepts, applied to gasoline combustion systems, in reducing overall fuel consumption while respecting exhaust emission limits. These new technologies require more complex engine control units. The sound emitted by a mechanical system carries much information related to its operating condition, and it can be used for control and diagnostic purposes. The thesis shows how the functions carried out by different dedicated sensors usually present on board can be executed, at the same time, using a single multifunction sensor based on low-cost microphone technology. A theoretical background on sound and signal processing is provided in Chapter 1. In modern turbocharged downsized GDI engines, the achievement of maximum thermal efficiency is precluded by the occurrence of knock. Knock emits an unmistakable sound, perceived by the human ear like a clink. Chapter 2 shows the possibility of using this characteristic sound for knock control purposes, from the first experimental assessment tests to the implementation in a real, production-type engine control unit. Chapter 3 focuses on misfire detection. By examining the low-frequency domain of the engine sound spectrum, features related to each combustion cycle of each cylinder can be identified and isolated; an innovative approach to misfire detection, which has the advantage of not being affected by road and driveline conditions, is introduced. A preliminary study of air-path leak detection techniques based on acoustic emission analysis has been developed, and the first experimental results are shown in Chapter 4. Finally, Chapter 5 reports an innovative detection methodology, based on engine vibration analysis, that can provide useful information about the combustion phase.
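A minimal sketch of a knock indicator of the kind described for Chapter 2 (synthetic signal and placeholder band edges; actual knock resonance frequencies are engine-specific):

    # Knock-indicator sketch: band-pass a (synthetic) microphone signal
    # and compute the energy in the knock band over one combustion window.
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 50_000                     # sampling rate [Hz]
    t = np.arange(0, 0.02, 1 / fs)  # one 20 ms acquisition window
    signal = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 7000 * t)

    # Placeholder knock band (5-9 kHz here); the real band is calibrated per engine.
    sos = butter(4, [5000, 9000], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, signal)
    knock_index = float(np.sum(band ** 2))  # compared against a calibrated threshold
    print(knock_index)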

Relevance: 40.00%

Abstract:

This thesis provides a necessary and sufficient condition for asymptotic efficiency of a nonparametric estimator of the generalised autocovariance function of a Gaussian stationary random process. The generalised autocovariance function is the inverse Fourier transform of a power transformation of the spectral density, and encompasses the traditional and inverse autocovariance functions. Its nonparametric estimator is based on the inverse discrete Fourier transform of the same power transformation of the pooled periodogram. The general result is then applied to the class of Gaussian stationary ARMA processes and its implications are discussed. We illustrate that, for a class of contrast functionals and spectral densities, the minimum contrast estimator of the spectral density satisfies a Yule-Walker system of equations in the generalised autocovariance estimator. Selection of the pooling parameter, which characterizes the nonparametric estimator of the generalised autocovariance and controls its resolution, is addressed by using a multiplicative periodogram bootstrap to estimate the finite-sample distribution of the estimator. A multivariate extension of recently introduced spectral models for univariate time series is considered, and an algorithm for the coefficients of a power transformation of matrix polynomials is derived, which allows the Wold coefficients to be obtained from the matrix coefficients characterizing the generalised matrix cepstral models. This algorithm also allows the definition of the matrix variance profile, providing important quantities for vector time series analysis. A nonparametric estimator based on a transformation of the smoothed periodogram is proposed for estimation of the matrix variance profile.
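In symbols, one standard formulation consistent with this description (f is the spectral density and p the power parameter) is

    \gamma_p(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\omega)^p \, e^{i \omega k} \, d\omega,
    \qquad k = 0, \pm 1, \pm 2, \dots

so that p = 1 recovers the traditional autocovariance and p = -1 the inverse autocovariance; the nonparametric estimator replaces f(ω)^p with the same power transformation of the pooled periodogram and the integral with an inverse discrete Fourier transform.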

Relevance: 40.00%

Abstract:

Changing or creating an organisation means creating a new process. Each process involves many risks that need to be identified and managed. The main risks considered here are procedural and legal: the former relate to errors that may occur during processes, while the latter relate to the compliance of processes with regulations. Managing the risks implies proposing changes to the processes that yield the desired result: an optimised process. In order to manage a company and optimise it in the best possible way, the organisational aspect, risk management and legal compliance should not only each be taken into account, but be analysed simultaneously, with the aim of finding the right balance that satisfies them all. This is the aim of this thesis: to provide methods and tools to balance these three characteristics; to enable this type of optimisation, ICT support is used. This is not a thesis in computer science or law, but rather an interdisciplinary one. Most of the work done so far is vertical, within a specific domain. The particularity and aim of this thesis is not to carry out an in-depth analysis of one particular aspect, but rather to combine several important aspects, normally analysed separately, which nevertheless impact and influence each other. In order to carry out this kind of interdisciplinary analysis, the knowledge bases of both areas were involved, and the combination and collaboration of experts in the various fields was necessary. Although the methodology described is generic and can be applied to all sectors, the case study considered is a new type of healthcare service that allows patients with acute conditions to be hospitalised at home. This provides the possibility of performing experiments using a real hospital database.

Relevance: 40.00%

Abstract:

In recent decades, two prominent trends have influenced the data modeling field, namely network analysis and machine learning. This thesis explores the practical applications of these techniques within the domain of drug research, unveiling their multifaceted potential for advancing our comprehension of complex biological systems. The research undertaken during this PhD program is situated at the intersection of network theory, computational methods, and drug research. Across six projects presented herein, there is a gradual increase in model complexity. These projects traverse a diverse range of topics, with a specific emphasis on drug repurposing and safety in the context of neurological diseases. The aim of these projects is to leverage existing biomedical knowledge to develop innovative approaches that bolster drug research. The investigations have produced practical solutions, not only providing insights into the intricacies of biological systems, but also allowing the creation of valuable tools for their analysis. In short, the achievements are:
• A novel computational algorithm to identify adverse events specific to fixed-dose drug combinations.
• A web application that tracks the clinical drug research response to SARS-CoV-2.
• A Python package for differential gene expression analysis and the identification of key regulatory "switch genes".
• The identification of pivotal events causing drug-induced impulse control disorders linked to specific medications.
• An automated pipeline for discovering potential drug repurposing opportunities.
• The creation of a comprehensive knowledge graph and development of a graph machine learning model for predictions.
Collectively, these projects illustrate diverse applications of data science and network-based methodologies, highlighting the profound impact they can have in supporting drug research activities.
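As a hedged sketch of the knowledge-graph idea (toy triples and a deliberately naive shared-neighbour score, not the graph machine learning model developed in the thesis):

    # Toy drug-repurposing ranking over a small knowledge graph:
    # score drugs for a disease by the gene neighbours they share with it.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("drugA", "gene1"), ("drugA", "gene2"),
        ("drugB", "gene2"), ("drugB", "gene3"),
        ("disease", "gene2"), ("disease", "gene3"),
    ])

    for drug in ("drugA", "drugB"):
        shared = set(G[drug]) & set(G["disease"])
        print(drug, len(shared))  # more shared neighbours -> stronger candidate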