32 results for Model Based Development


Relevance:

90.00%

Publisher:

Abstract:

During the past 10 years, global R&D expenditure in the pharmaceuticals and biotechnology sector has steadily increased, without a corresponding increase in the output of new medicines. To address this situation, the biopharmaceutical industry's greatest need is to predict failures at the earliest possible stage of the drug development process. A major key to reducing failures in drug screening is the development and use of preclinical models that are more predictive of efficacy and safety in clinical trials. Further, relevant animal models are needed to allow wider testing of novel hypotheses. Key to this is the development, refinement, and validation of complex animal models that directly link therapeutic targets to the phenotype of disease, allowing earlier prediction of human response to medicines and identification of safety biomarkers. Moreover, well-designed animal studies are essential to bridge the gap between tests in cell cultures and people. The zebrafish is emerging, complementary to other models, as a powerful system for cancer studies and drug discovery. We aim to investigate this research area by designing a new preclinical cancer model based on in vivo imaging of zebrafish embryogenesis. Technological advances in imaging have made it feasible to acquire nondestructive in vivo images of fluorescently labeled structures, such as cell nuclei and membranes, throughout early zebrafish embryogenesis. This in vivo image-based investigation provides measurements of a large number of cellular-level features and events, including nuclei movements, cell counting, and mitosis detection, thereby enabling the estimation of more significant parameters such as the proliferation rate, which is highly relevant for investigating anticancer drug effects. In this work, we designed a standardized procedure for assessing drug activity at the cellular level in live zebrafish embryos.
The procedure includes methodologies and tools that combine imaging and fully automated measurements of the embryonic cell proliferation rate. We achieved proliferation rate estimation through the automatic classification and density measurement of epithelial enveloping layer and deep layer cells. Automatic embryonic cell classification provides the basis for measuring the variability of relevant parameters, such as cell density, in different classes of cells and is aimed at estimating the efficacy and selectivity of anticancer drugs. Through these methodologies we were able to evaluate and measure in vivo the therapeutic potential and overall toxicity of the Dbait and Irinotecan anticancer molecules. Results achieved on these anticancer molecules are presented and discussed; furthermore, extensive accuracy measurements are provided to investigate the robustness of the proposed procedure. Altogether, these observations indicate that the zebrafish embryo can be a useful and cost-effective alternative to some mammalian models for the preclinical testing of anticancer drugs and might also provide, in the near future, opportunities to accelerate the process of drug discovery.
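The proliferation rate mentioned above can be recovered from automated cell counts at two developmental timepoints. As a minimal sketch, assuming simple exponential growth (the counts and timepoints below are hypothetical, not data from the study):

```python
import math

def proliferation_rate(n_start, n_end, t_start, t_end):
    """Exponential growth rate (per hour) from two automated cell counts.

    Assumes N(t) = N0 * exp(r * t), so r = ln(N_end / N_start) / dt.
    """
    return math.log(n_end / n_start) / (t_end - t_start)

# Hypothetical counts: 1000 nuclei at 6 hpf, 4000 nuclei at 10 hpf.
r = proliferation_rate(1000, 4000, 6.0, 10.0)
doubling_time = math.log(2) / r  # population doubling time, in hours
```

The ratio of a drug-treated embryo's rate to a control embryo's rate would then give a simple index of antiproliferative effect.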

Relevance:

90.00%

Publisher:

Abstract:

From an institutional point of view, the legal system of intellectual property rights (hereafter, IPR) is one of the incentive institutions of innovation, and it plays a very important role in economic development. According to the law, the owner of an IPR enjoys an exclusive right to use his intellectual property (hereafter, IP); in other words, he enjoys a kind of legal monopoly position in the market. How to protect IPR well and at the same time regulate its abuse is a topic of great interest in this knowledge-oriented market, and it is the basic research question of this dissertation. In this paper, by way of comparative study and law-and-economics analysis, and based on the theories of the Austrian School of Economics, the writer claims that there is no contradiction between IPR and competition law. However, in the new economy (high-technology industries), there is a real possibility that the owner of an IPR will abuse his dominant position. Given the characteristics of the new economy, such as high rates of innovation, “instant scalability”, network externality, and lock-in effects, the IPR “will vest the dominant undertakings with the power not just to monopolize the market but to shift such power from one market to another, to create strong barriers to enter and, in so doing, granting the perpetuation of such dominance for quite a long time.”1 Therefore, in order to preserve the order of the market, to vitalize competition and innovation, and to benefit consumers, it is common practice in the EU and the US to apply competition law to regulate IPR abuse. From the perspective of the Austrian School of Economics, and especially of Schumpeterian theory, innovation, competition, monopoly, and entrepreneurship are interrelated; therefore, we should apply a dynamic antitrust model based on the Austrian School's theories to analyze the relationship between IPR and competition law.
China is still a developing country with a relatively low capacity for innovation. Therefore, at present, protecting IPR and making good use of the incentive mechanism of the IPR legal system is the most important task for the Chinese government. However, according to investigation reports,2 on the basis of their IPR and capital advantages, some multinational companies have indeed obtained dominant or monopoly market positions in certain segments of some industries, and some IPR abuses have been committed by such companies. The Chinese government should therefore pay close attention to regulating any IPR abuse. However, when it comes to effectively regulating IPR abuse by way of competition law in the Chinese situation, from the perspectives of law-and-economics theory, legislation, and judicial practice, there is still a long way for China to go.

Relevance:

90.00%

Publisher:

Abstract:

Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use as input, for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). The data of two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfillment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for integration-step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved-turbulence parameterization has a negligible influence on absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
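The interplay between the Markov order-0 model and the Well Mixed Condition can be illustrated with a short simulation. This is a generic sketch with a hypothetical parabolic diffusivity profile (not the closures used in the work): keeping the drift term dK/dz analytically consistent with K(z) is what preserves an initially well-mixed particle distribution.

```python
import math
import random

def k_profile(z, h=1.0, k0=0.5):
    """Hypothetical parabolic eddy-diffusivity profile K(z) on [0, h]."""
    return k0 * z * (h - z) + 1e-4  # small floor keeps K > 0

def dk_dz(z, h=1.0, k0=0.5):
    """Analytical derivative of K(z); its consistency with k_profile is
    what allows the Markov order-0 model to satisfy the WMC."""
    return k0 * (h - 2.0 * z)

def step(z, dt, h=1.0):
    """One Markov order-0 (random displacement) step:
    dz = K'(z) dt + sqrt(2 K(z) dt) * xi, xi ~ N(0, 1),
    with reflection at the domain boundaries."""
    z_new = (z + dk_dz(z, h) * dt
             + math.sqrt(2.0 * k_profile(z, h) * dt) * random.gauss(0.0, 1.0))
    if z_new < 0.0:
        z_new = -z_new
    if z_new > h:
        z_new = 2.0 * h - z_new
    return z_new

# Release uniformly mixed particles and check they stay roughly uniform.
random.seed(0)
particles = [i / 1000.0 for i in range(1, 1000)]
for _ in range(200):
    particles = [step(z, dt=1e-3) for z in particles]
lower_half = sum(1 for z in particles if z < 0.5) / len(particles)
```

Dropping the `dk_dz` drift term in `step` would make particles accumulate where K is small, violating the WMC.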

Relevance:

90.00%

Publisher:

Abstract:

Particulate matter is one of the main atmospheric pollutants, with great chemical and environmental relevance. Improved knowledge of the sources of particulate matter and of their apportionment is needed to handle and fulfill the legislation regarding this pollutant and to support further development of air policy as well as air pollution management. Various instruments have been used to understand the sources of particulate matter and atmospheric radiotracers at the site of Mt. Cimone (44.18° N, 10.7° E, 2165 m asl), which hosts a global WMO-GAW station. Thanks to its characteristics, this location is suitable for investigating the regional and long-range transport of polluted air masses against the background of the southern European free troposphere. In particular, PM10 data sampled at the station in the period 1998-2011 were analyzed in the framework of the main meteorological and territorial features. A receptor model based on back trajectories was applied to study the source regions of particulate matter. Simultaneous measurements of the atmospheric radionuclides Pb-210 and Be-7, acquired together with PM10, have also been analysed to obtain a better understanding of the vertical and horizontal transports able to affect atmospheric composition. Seasonal variations of atmospheric radiotracers have been studied both by analysing the long-term time series acquired at the measurement site and by means of a state-of-the-art global 3-D chemistry and transport model. Advection patterns characterizing the circulation at the site have been identified by means of clusters of back trajectories. Finally, the results of a source apportionment study of particulate matter carried out in a midsize town of the Po Valley (currently recognised as one of the most polluted European regions) are reported.
An approach exploiting different techniques, and in particular different kinds of models, successfully achieved a characterization of the processes and sources of particulate matter at the two sites, and of atmospheric radiotracers at the site of Mt. Cimone.
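The abstract does not name the specific trajectory-based receptor model; a common choice for locating source regions is the Potential Source Contribution Function (PSCF), sketched here on a coarse one-degree grid with invented trajectories and concentrations:

```python
from collections import defaultdict

def pscf(trajectories, concentrations, threshold):
    """Potential Source Contribution Function over a lat/lon grid.

    trajectories: one back trajectory per sample, as a list of (lat, lon)
    endpoints; concentrations: the PM10 measured at the receptor for each
    trajectory. PSCF(i, j) = m_ij / n_ij, i.e. the fraction of endpoints
    in cell (i, j) belonging to above-threshold ('polluted') arrivals.
    """
    n = defaultdict(int)  # all trajectory endpoints per grid cell
    m = defaultdict(int)  # endpoints of polluted trajectories per cell
    for traj, conc in zip(trajectories, concentrations):
        for lat, lon in traj:
            cell = (int(lat), int(lon))  # 1-degree binning (positive coords)
            n[cell] += 1
            if conc > threshold:
                m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}

# Two hypothetical trajectories arriving at a Mt. Cimone-like receptor:
trajs = [[(44.2, 10.7), (45.0, 9.0)],   # clean arrival
         [(44.2, 10.7), (45.3, 9.4)]]   # polluted arrival
concs = [10.0, 60.0]                    # ug/m3, invented
field = pscf(trajs, concs, threshold=50.0)
```

Cells crossed mostly by polluted arrivals score near 1 and flag candidate source regions; real applications also down-weight cells with few endpoints.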

Relevance:

90.00%

Publisher:

Abstract:

Within the transdisciplinary conceptual and methodological approach of Sustainability Science, this thesis develops a theoretical background for conceptualizing a definition of sustainability, on the basis of which it proposes a development model alternative to the dominant one, articulated in concrete proposals within a case study on European regulation of energy saving. Through a transdisciplinary analysis, the research identifies a structural crisis of the dominant development model, which is based on economic growth as the (only) indicator of well-being, together with a crisis of values. Attention then focuses on identifying a paradigm capable of responding to the critical issues that emerged from the analysis. To this end, the concepts of sustainable development and sustainability are examined, leading to the proposal of a new paradigm ("ecosystemic sustainability") that accounts for the impossibility of infinite growth in a system characterized by limited resources. Proposals are then presented for a sustainable development model alternative to the dominant one. This theoretical elaboration is translated into concrete terms through a case study. To this end, the function of regulation as a tool for ensuring the practical application of the theoretical model is first analyzed. Attention is focused on the case study of European Union policy and regulation on energy saving and energy efficiency. The analysis reveals a progressive conflation of the two concepts of energy saving and energy efficiency, for which motivations are advanced and risks are identified in terms of rebound effects.
To address the inconsistencies between the European Union's proclaimed objective of reducing energy consumption and the policy actually pursued, a form of "regulation for sustainability" in the residential housing sector is proposed which, by promoting the sharing of energy services, recovers the proper meaning of energy saving as a reduction of consumption through behavioral change, enriching it with a new connotation as a "relational good" for the promotion of relational and individual well-being.

Relevance:

90.00%

Publisher:

Abstract:

Traditional cell culture models have limitations in extrapolating the functional mechanisms that underlie strategies of microbial virulence. Indeed, during infection, pathogens adapt to different tissue-specific environmental factors. The development of in vitro models resembling human tissue physiology might allow the replacement of inaccurate or aberrant animal models. Three-dimensional (3D) cell culture systems are more reliable and more predictive models that can be used for the meaningful dissection of host–pathogen interactions. The lung and gut mucosae often represent the first site of exposure to pathogens and provide a physical barrier against their entry. Within this context, the tracheobronchial and small intestine tracts were modelled by a tissue engineering approach. The main work focused on the development and extensive characterization of a human organotypic airway model, based on a mechanically supported co-culture of normal primary cells. The regained morphological features, the retrieved environmental factors, and the presence of specific epithelial subsets resembled the native tissue organization. In addition, the respiratory model enabled the modular insertion of interesting cell types, such as innate immune cells or multipotent stromal cells, showing a functional ability to differentially release pertinent cytokines. Furthermore, this model responded by imitating known events occurring during infection by non-typeable H. influenzae. Epithelial organoid models mimicking the small intestine tract were used for a different explorative analysis of tissue toxicity. Further experiments led to the detection of a cell population targeted by C. difficile Toxin A and suggested a role of the bacterial virulence machinery in the impairment of epithelial homeostasis. The described cell-centered strategy can afford critical insights for the evaluation of host defence and pathogenic mechanisms.
The application of these two models may provide an informative step that more coherently defines the relevant molecular interactions happening during infection.

Relevance:

90.00%

Publisher:

Abstract:

This research project aims to develop an innovative decision-support methodology, based on performance indicators, for the selection among design alternatives. In particular, the work focused on the definition of indicators to support decisions in debottlenecking interventions on a process plant. Two indicators, the "bottleneck indicators", were developed; they make it possible to assess the actual need for debottlenecking, identifying the causes that limit production and the exploitation of the equipment. These were validated by applying them to the analysis of an intervention on an existing plant and verifying that equipment exploitation was correctly identified. Once the need for the debottlenecking intervention was established, the problem of selecting among the possible process alternatives for carrying it out was addressed. A method based on sustainability indicators, which allows the alternatives to be compared considering not only the economic return on investment but also the impacts on environment and safety, was applied to the choice and further developed in this thesis. Two indicators, the "area hazard indicators", relating to fugitive emissions, were defined in order to integrate these aspects into the sustainability analysis of the alternatives. To improve the accuracy of impact quantification, a new predictive model was developed for estimating the fugitive emissions of a plant, based solely on data available at the design stage, which accounts for the types of emitting sources, their leakage mechanisms, and maintenance. Validated by comparison with experimental data from a production plant, this method proved indispensable for a proper comparison of the alternatives, since existing models greatly overestimate actual emissions.
Finally, by applying the indicators to an existing plant, it was shown that they are fundamental for simplifying the decision-making process, providing clear and precise indications while requiring only a limited amount of information to derive them.
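The thesis's actual bottleneck indicators are not reproduced in the abstract. As a simple stand-in for the underlying idea, equipment exploitation can be ranked by utilization (actual throughput over design capacity), with the most exploited unit limiting production; all names and figures below are hypothetical:

```python
def utilization(throughput, capacity):
    """Fraction of design capacity actually exploited, per unit."""
    return {u: throughput[u] / capacity[u] for u in capacity}

def bottleneck(throughput, capacity):
    """The unit with the highest utilization is the one whose capacity
    must grow first in any debottlenecking intervention."""
    util = utilization(throughput, capacity)
    return max(util, key=util.get)

# Hypothetical plant units, design capacities and current throughputs (t/h):
cap = {"reactor": 120.0, "column": 100.0, "compressor": 150.0}
flow = {"reactor": 95.0, "column": 98.0, "compressor": 90.0}
limit = bottleneck(flow, cap)
```

A unit near 100% utilization while others idle signals a genuine bottleneck; uniformly low utilization would instead point to causes outside equipment capacity.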

Relevance:

90.00%

Publisher:

Abstract:

Green roofs (GRs) increasingly represent a suitable technology for mitigating the problems associated with urbanization; however, knowledge of the performance of extensive GRs in a sub-Mediterranean climate is still limited. This research is supported by 15 months of experimental analyses on two GRs located at the School of Engineering in Bologna. First, the hydrological and energy performance of the two GRs, characterized by Sedum vegetation (SR) and by native perennial herbaceous vegetation (NR), are compared with each other and with a reference surface (RR). Both reduce runoff volumes and surface temperatures. The NR performs better than the SR both hydrologically and thermally: the physiology of the NR vegetation leads to daytime stomatal opening and consequently to greater evapotranspiration (ET). The daily variations of moisture in the SR substrate were then studied, finding that their amplitude is influenced by temperature, initial moisture, and the vegetative phase. These variations were simulated with a hydrological model based on the water balance equation and on two conventional models for the estimation of potential ET, combined with a soil moisture extraction function. Correction coefficients, obtained by calibration, were proposed to account for the differences between the reference crop and the GR vegetation during the growth phases. Finally, using a model implemented in SWMM 5.1.007 with the Low Impact Development (LID) module, continuous (12-month) simulations were carried out to evaluate the retention performance of the SR and RR plots. The calibrated and validated model proves able to reproduce satisfactorily the runoff volumes from the two plots.
After detailed calibration, the model could support engineers and administrations in evaluating the benefits deriving from the use of GRs.
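The water-balance approach can be sketched as a daily bucket model. This is a simplified stand-in with a linear soil-moisture extraction function and invented numbers, not the calibrated SWMM/LID model used in the study:

```python
def green_roof_balance(rain, et_pot, s0, s_max, kc=1.0):
    """Daily bucket water balance for a green-roof substrate.

    rain, et_pot: daily rainfall and reference potential ET [mm].
    s0: initial substrate storage [mm]; s_max: storage capacity [mm].
    kc: crop coefficient correcting reference ET to the roof vegetation.
    ET is reduced linearly with relative storage (a simple moisture
    extraction function); storage above capacity leaves as runoff.
    """
    s = s0
    runoff = []
    for p, e in zip(rain, et_pot):
        s += p
        spill = max(0.0, s - s_max)  # excess over capacity runs off
        s -= spill
        et = kc * e * (s / s_max)    # linear soil-moisture extraction
        s = max(0.0, s - et)
        runoff.append(spill)
    return s, runoff

# Three hypothetical days: dry, a 20 mm storm, dry again.
storage, q = green_roof_balance(rain=[0.0, 20.0, 0.0],
                                et_pot=[4.0, 4.0, 4.0],
                                s0=10.0, s_max=25.0)
```

The calibrated crop coefficient `kc` plays the role of the correction coefficients mentioned in the abstract, mapping reference-crop ET to the Sedum or native vegetation.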

Relevance:

90.00%

Publisher:

Abstract:

Nowadays robotic applications are widespread and most manipulation tasks are efficiently solved. However, Deformable Objects (DOs) still represent a huge limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing some applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with particular focus on electric wires. In all the works, a significant effort was made in the study of an effective strategy for analyzing sensory signals with various machine learning algorithms. The first part of the document focuses on the wire terminals, i.e. detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks for orientation prediction. The insertion task is also extended by developing a closed-loop control capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the latter part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations exploiting Deep Q-learning and finds the best releasing point. The success of the solution leverages a reliable interpretation of the DLO shape.
For this reason, further developments are made on the visual segmentation.
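The Deep Q-learning component is not detailed in the abstract; the value update underneath it can be illustrated in tabular form (states, actions, and reward below are invented placeholders standing in for grasp-location selection):

```python
def q_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_b Q(s', b) - Q(s, a)).
    A Deep Q-Network replaces the table q with a neural approximator
    trained on the same target.
    """
    best_next = max(q.get((s_next, b), 0.0) for b in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q[(s, a)]

# Hypothetical transition: grasping candidate location 1 reduced the
# wire's shape error, yielding a reward of 0.8.
q = {}
val = q_update(q, s="bent", a=1, r=0.8, s_next="straighter", actions=range(3))
```

Repeated over many pick-and-place episodes, the learned Q-values rank candidate grasping locations by their expected long-term reduction in shape error.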

Relevance:

90.00%

Publisher:

Abstract:

Between the end of the Neolithic and the Bronze Age, the presence of clustered, village-type settlements is a widespread phenomenon, both in Italy and in southern France. Nevertheless, taking into account the variability of the forms of stratification of the sites raises questions. To what extent does the sedimentary record of occupation surfaces allow us to address the question of village organization and its variability between the end of the Neolithic and the Bronze Age? What image does this sedimentary record give of the social and economic organization of the village? To address these questions, we chose to carry out a geoarchaeological study on sites of different forms, drawn from varied chrono-cultural and environmental contexts. The approach, based on the use of soil micromorphology as an analytical tool, aims to characterize the spatio-temporal organization of occupation surfaces at the site scale, following a spatial approach to the formation processes of the archaeological stratification. The development of a model, based on a classification of sedimentary micro-facies according to the activity system, and its application to laboratory sites make it possible to characterize earthen construction techniques, land use, and the occupation dynamics specific to each site, with the aim of determining the socio-economic behaviors and the specific features of village life recorded by the soils. This approach makes it possible to evaluate the constants and the variables that characterize the different types of occupation. The soil, understood as the materiality of the village space, thus becomes a direct testimony to the cultural variability and to the different forms of organization of communities at the end of the Neolithic and in the Bronze Age.

Relevance:

90.00%

Publisher:

Abstract:

In recent years a great effort has been put into the development of new techniques for automatic object classification, due in part to their consequences for many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, differently from other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
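As a minimal illustration of the TT format itself (not of the thesis's classification models), an order-3 tensor with all TT ranks equal to 1 can be stored as three small cores and evaluated entry by entry by chaining matrix products:

```python
def tt_element(cores, idx):
    """Evaluate a TT-format tensor at multi-index idx.

    Each core G_k is a nested list of shape (r_{k-1}, n_k, r_k); the entry
    is the product G_1[:, i1, :] @ G_2[:, i2, :] @ ... @ G_d[:, id, :],
    which collapses to a scalar because r_0 = r_d = 1. Storage is the sum
    of the core sizes rather than the product of the mode sizes, which is
    why TT sidesteps the curse of dimensionality.
    """
    row = cores[0][0][idx[0]]  # first core has r_0 = 1: a 1 x r_1 row
    for core, i in zip(cores[1:], idx[1:]):
        row = [sum(row[p] * core[p][i][q] for p in range(len(row)))
               for q in range(len(core[0][i]))]
    return row[0]

# Rank-1 TT representation of X[i, j, k] = (i + 1) * (j + 1) * (k + 1):
g = [[[1.0], [2.0]]]  # one core of shape (1, 2, 1)
cores = [g, g, g]
x_011 = tt_element(cores, (0, 1, 1))  # should equal (0+1)*(1+1)*(1+1)
```

In practice the cores are obtained by TT-SVD or, as in the dictionary learning model above, by an optimization scheme, with truncation of the ranks providing the compression.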

Relevance:

90.00%

Publisher:

Abstract:

The steadily growing immigration phenomenon in today’s Japan is producing a tangible and expanding presence of immigrant-origin youths residing in the country. International research in migration studies has underlined the importance of focusing on immigrant-origin youths to shed light on the way immigrants incorporate into countries of destination. Indeed, immigrants’ offspring, the adults of tomorrow, embody the interlocutor between first-generation immigrants and the receiving societal context. The extent of the presence of immigrants’ children in countries of destination is also a reliable yardstick to assess the maturation of the migration process, transforming it from a temporary phenomenon into a long-term settlement. Within this framework, the school is a privileged site to observe and analyze immigrant-origin youths’ integration. Alongside the family and peers, the school constitutes one of the main agents of socialization. Here, children learn norms and rules and acquire the necessary tools to eventually compete in the pursuit of an occupation, determining their future socioeconomic standing. This doctoral research aims to identify which theoretical model articulated in the area of migration studies best describes the adaptation process of immigrant-origin youths in Japan. In particular, it examines whether (and to what extent) any of the pre-existing frameworks can help explain the circumstances occurring in Japan, or whether further elaboration and adjustment are needed. Alternatively, it studies whether it is necessary to produce a new model based on the peculiarities of the Japanese social context. This study provides a theoretically oriented contribution to the (mainly descriptive but maturing) literature on immigrant-origin youths’ integration in Japan.
Considering past growth trends of Japanese immigration and its expanding prospective projections (Korekawa 2018c), this study might be considered pioneering with respect to future developments of the phenomenon.

Relevance:

90.00%

Publisher:

Abstract:

The thesis investigates the potential of photoactive organic semiconductors as a new class of materials for developing bioelectronic devices that can convert light into biological signals. The materials can be either small molecules or polymers. When these materials interact with aqueous biological fluids, they give rise to various electrochemical phenomena, including photofaradaic or photocapacitive processes, depending on whether photogenerated charges participate in redox processes or accumulate at an interface. The thesis starts by studying the behavior of the H2Pc/PTCDI molecular p/n thin-film heterojunction in contact with an aqueous electrolyte. An equivalent circuit model is developed, explaining the measurements and predicting behavior in wireless mode. A systematic study on p-type polymeric thin-films is presented, comparing rr-P3HT with two low-bandgap conjugated polymers: PBDB-T and PTB7. The results demonstrate that PTB7 has superior photocurrent performance due to more effective electron transfer onto acceptor states in solution. Furthermore, the thesis addresses the issue of photovoltage generation for wireless photoelectrodes. An analytical model based on photoactivated charge transfer across the organic-semiconductor/water interface is developed, explaining the large photovoltages observed for polymeric p-type semiconductor electrodes in water. Then, flash-precipitated nanoparticles made of the same three photoactive polymers are investigated, assessing the influence of fabrication parameters on the stability, structure, and energetics of the nanoparticles. Photocathodic current generation and the consequent positive charge accumulation are also investigated. Additionally, newly developed porous P3HT thin-films are tested, showing that porosity increases both the photocurrent and the semiconductor/water interfacial capacitance.
Finally, the thesis demonstrates the biocompatibility of the materials in in-vitro experiments and shows safe levels of photoinduced intracellular ROS production with p-type polymeric thin-films and nanoparticles. The findings highlight the potential of photoactive organic semiconductors in the development of optobioelectronic devices, demonstrating their ability to convert light into biological signals and interface with biological fluids.

Relevance:

90.00%

Publisher:

Abstract:

Earthquake prediction is a complex task for scientists due to the rare occurrence of high-intensity earthquakes and their inaccessible depths. Despite this challenge, it is a priority to protect infrastructure and populations living in areas of high seismic risk. Reliable forecasting requires comprehensive knowledge of seismic phenomena. In this thesis, the development, application, and comparison of both deterministic and probabilistic forecasting methods are shown. Regarding the deterministic approach, the implementation of an alarm-based method using the occurrence of strong (fore)shocks, widely felt by the population, as a precursor signal is described. This model is then applied to the retrospective prediction of Italian earthquakes of magnitude M ≥ 5.0, 5.5, and 6.0 that occurred in Italy from 1960 to 2020. Retrospective performance testing is carried out using tests and statistics specific to deterministic alarm-based models. Regarding probabilistic models, this thesis focuses mainly on the EEPAS and ETAS models. Although the EEPAS model has previously been applied and tested in some regions of the world, it has never been used for forecasting Italian earthquakes. In the thesis, the EEPAS model is used to retrospectively forecast Italian shallow earthquakes with magnitude M ≥ 5.0 using new MATLAB software. The forecasting performance of the probabilistic models was compared to other models using CSEP binary tests. The EEPAS and ETAS models showed different characteristics for forecasting Italian earthquakes, with EEPAS performing better in the long term and ETAS performing better in the short term. The FORE model, based on strong precursor quakes, is compared to EEPAS and ETAS using an alarm-based deterministic approach. All models perform better than a random forecasting model, with the ETAS and FORE models showing better performance. However, to fully evaluate forecasting performance, prospective tests should be conducted.
The lack of objective tests for evaluating deterministic models and comparing them with probabilistic ones was a challenge faced during the study.
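Of the models compared above, ETAS has a compact closed form for its conditional intensity; a sketch with illustrative, not fitted, parameter values:

```python
import math

def etas_intensity(t, events, mu=0.2, k=0.05, alpha=1.2, c=0.01, p=1.1, m0=3.0):
    """ETAS conditional intensity:
    lambda(t) = mu + sum over past events (t_i, m_i) of
                k * exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p),
    i.e. a background rate plus Omori-law aftershock clusters, each scaled
    by the productivity of the triggering shock. Parameter values here are
    purely illustrative.
    """
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += k * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate

# A single hypothetical M 5.0 shock at t = 0 (time in days):
catalog = [(0.0, 5.0)]
rate_soon = etas_intensity(1.0, catalog)
rate_late = etas_intensity(100.0, catalog)
```

The power-law decay of each cluster term is what makes ETAS strong at short timescales, while EEPAS-style models spread the precursory signal of each event over much longer time windows.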

Relevance:

90.00%

Publisher:

Abstract:

Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities markedly broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. However, since the design and development of novel security mechanisms can be explored from different perspectives and levels, we place our attention on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems. This solution identifies threats, security controls, and moving target defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contributions according to the quality of their partial models.
Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and requiring only minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
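The risk-based authorization middleware is described only at a high level; one common way to realize the idea is to compute a weighted risk score from real-time context signals and grant only the permissions whose policy threshold tolerates that score. The signal names, weights, and thresholds below are invented for illustration:

```python
def risk_score(signals, weights):
    """Weighted risk in [0, 1] from real-time context signals in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * signals.get(k, 0.0) for k in weights) / total

def authorize(requested_permissions, signals, weights, thresholds):
    """Grant only the permissions whose risk threshold tolerates the
    current score; the granted set shrinks as risk grows."""
    score = risk_score(signals, weights)
    return {p for p in requested_permissions if score <= thresholds[p]}

# Hypothetical policy: writes require a lower-risk context than reads.
weights = {"device_patch_age": 0.5, "anomalous_traffic": 0.5}
thresholds = {"read": 0.8, "write": 0.4}
granted = authorize({"read", "write"},
                    {"device_patch_age": 0.2, "anomalous_traffic": 0.9},
                    weights, thresholds)
```

Because the score is recomputed per request, permissions adapt continuously to changing conditions instead of being fixed at enrollment time.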