28 results for Simulation-based methods

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces (they can pull but not push). The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is devoted to the development of an interval-analysis based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, besides allowing for a rapid solution of the problem, also guarantees the results against rounding and elimination errors and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
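To give a flavour of the interval-analysis idea mentioned in this abstract, the sketch below evaluates a single cable-length constraint over an interval box of candidate platform positions and discards boxes that certifiably contain no solution; the interval class, the planar single-cable setting and all numbers are illustrative assumptions, not the thesis code.

```python
# Illustrative interval evaluation of a single cable-length constraint
# |P - A|^2 = L^2 over a box of candidate platform positions (planar case).
# If the interval residual excludes zero, the box can be safely discarded.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def sq(self):
        lo2, hi2 = self.lo ** 2, self.hi ** 2
        lo = 0.0 if self.lo <= 0.0 <= self.hi else min(lo2, hi2)
        return Interval(lo, max(lo2, hi2))
    def contains(self, value):
        return self.lo <= value <= self.hi

def residual(box, anchor, length):
    """Interval extension of |P - A|^2 - L^2 for a box (Ix, Iy) of positions P."""
    dx = box[0] - Interval(anchor[0], anchor[0])
    dy = box[1] - Interval(anchor[1], anchor[1])
    return dx.sq() + dy.sq() - Interval(length ** 2, length ** 2)

box = (Interval(0.0, 1.0), Interval(0.0, 1.0))
print(residual(box, anchor=(5.0, 5.0), length=1.0).contains(0.0))  # False -> prune box
```

A full branch-and-prune solver would recursively bisect the boxes that cannot be discarded, which is the mechanism that certifies the solutions against rounding errors.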

Relevance: 100.00%

Abstract:

This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDFGs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
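As an illustration of the modular-arithmetic view of cyclic precedences, the sketch below checks a toy cyclic schedule against constraints of the form s_j >= s_i + d - k*lambda (lambda being the period and k the iteration distance) and infers the smallest feasible period from the scheduling decisions; the instance and helper functions are assumptions, not the thesis' filtering algorithms.

```python
# Sketch of the modular-precedence feasibility test used in cyclic scheduling:
# for an arc (i -> j) with processing delay d and iteration distance k,
# a schedule with period lam must satisfy  s[j] >= s[i] + d - k * lam.
# Hypothetical toy check, not the constraint-programming filtering of the thesis.

def feasible(starts, arcs, lam):
    """starts: dict node -> start time; arcs: list of (i, j, delay, distance)."""
    return all(starts[j] >= starts[i] + d - k * lam for (i, j, d, k) in arcs)

def min_period(starts, arcs):
    """Smallest period consistent with the given start times (arcs with k > 0)."""
    return max((starts[i] + d - starts[j]) / k for (i, j, d, k) in arcs if k > 0)

starts = {"a": 0, "b": 3, "c": 5}
arcs = [("a", "b", 3, 0), ("b", "c", 2, 0), ("c", "a", 4, 1)]  # back edge closes the cycle
lam = min_period(starts, arcs)           # period inferred from the scheduling decisions
print(lam, feasible(starts, arcs, lam))  # 9.0 True
```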

Relevance: 100.00%

Abstract:

Recent years have seen massive growth in wearable technology; everything can be smart: phones, watches, glasses, shirts, etc. These technologies are prevalent in various fields, from wellness/sports/fitness to the healthcare domain. The spread of this phenomenon led the World Health Organization to define the term 'mHealth' as "medical and public health practice supported by mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants, and other wireless devices". Furthermore, mHealth solutions are well suited to implementing real-time wearable Biofeedback (BF) systems: sensors in the body area network connected to a processing unit (smartphone) and a feedback device (loudspeaker) measure human functions and return them to the user as a (bio)feedback signal. During the COVID-19 pandemic, this transformation of the healthcare system was dramatically accelerated by new clinical demands, including the need to prevent hospital surges and to assure continuity of clinical care services, allowing pervasive healthcare. Now more than ever, the integration of mHealth technologies will be the basis of this new era of clinical practice. In this scenario, the primary goal of this PhD thesis is to investigate new and innovative mHealth solutions for the assessment and rehabilitation of different neuromotor functions and diseases. For clinical assessment, there is the need to overcome the limitations of subjective clinical scales. By creating new pervasive and self-administrable mHealth solutions, this thesis investigates the possibility of employing innovative systems for objective clinical evaluation. For rehabilitation, we explored the clinical feasibility and effectiveness of mHealth systems. In particular, we developed innovative mHealth solutions with BF capability to allow tailored rehabilitation. The main goal of an mHealth system should be to improve the person's quality of life, increasing or maintaining their autonomy and independence. To this end, inclusive design principles might be crucial, next to the technical and technological ones, to improve the usability of mHealth systems.
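A minimal sketch of the sensor-to-feedback chain described above (a body-worn signal processed on the phone and returned to the user as an audio cue); the metric, target and gain are purely illustrative assumptions, not the thesis' systems.

```python
# Minimal sketch of a wearable biofeedback loop: a window of accelerometer
# samples is reduced to a metric on the processing unit and mapped to a
# loudspeaker volume. Sensor source, metric and parameters are assumptions.

def sway_metric(accel_window):
    """Toy postural-sway metric: mean absolute mediolateral acceleration."""
    return sum(abs(a) for a in accel_window) / len(accel_window)

def feedback_volume(metric, target=0.05, gain=10.0):
    """Map the deviation from a target value into a volume in [0, 1]."""
    return max(0.0, min(1.0, gain * (metric - target)))

window = [0.02, -0.07, 0.11, -0.04, 0.09]   # one window of accelerometer samples
m = sway_metric(window)
print(f"metric={m:.3f} volume={feedback_volume(m):.2f}")
```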

Relevance: 100.00%

Abstract:

Nowadays robotic applications are widespread and most manipulation tasks are efficiently solved. However, Deformable Objects (DOs) still represent a huge limitation for robots. The main difficulty in DO manipulation is dealing with shape and dynamics uncertainties, which prevents the use of model-based approaches (since they are excessively computationally complex) and makes sensory data difficult to interpret. This thesis reports the research activities aimed at addressing some applications in robotic manipulation and sensing of Deformable Linear Objects (DLOs), with a particular focus on electric wires. In all the works, a significant effort was made in the study of an effective strategy for analyzing sensory signals with various machine learning algorithms. In the first part of the document, the main focus is on the wire terminals, i.e. detection, grasping, and insertion. First, a pipeline that integrates vision and tactile sensing is developed; then further improvements are proposed for each module. A novel procedure is proposed to gather and label massive amounts of training images for object detection with minimal human intervention. Together with this strategy, we extend a generic object detector based on Convolutional Neural Networks to also predict orientation. The insertion task is also extended by developing a closed-loop control capable of guiding the insertion of a longer and curved segment of wire through a hole, where the contact forces are estimated by means of a Recurrent Neural Network. In the second part of the thesis, the interest shifts to the DLO shape. Robotic reshaping of a DLO is addressed by means of a sequence of pick-and-place primitives, while a decision-making process driven by visual data learns the optimal grasping locations exploiting Deep Q-learning and finds the best releasing point. The success of the solution relies on a reliable interpretation of the DLO shape. For this reason, further developments are made on the visual segmentation.
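As a toy illustration of the Q-learning idea used for grasp selection, the sketch below scores a set of discretized grasp points along the wire and updates the scores from observed rewards; it is a tabular, bandit-style stand-in for the thesis' image-driven Deep Q-learning, and all states, rewards and parameters are assumptions.

```python
# Toy stand-in for learning where to grasp a DLO: candidate grasp points are
# scored, the best one is picked epsilon-greedily, and the score is updated
# from the reward observed after a pick-and-place. Not the thesis' network.
import random

n_points = 10                      # discretized grasp locations along the wire
q = [0.0] * n_points               # value estimate per candidate grasp point
alpha, epsilon = 0.1, 0.2

def choose_grasp():
    if random.random() < epsilon:                     # explore
        return random.randrange(n_points)
    return max(range(n_points), key=lambda i: q[i])   # exploit

def update(point, reward):
    """One-step update after executing a pick-and-place with this grasp point."""
    q[point] += alpha * (reward - q[point])

for _ in range(200):
    p = choose_grasp()
    reward = 1.0 if p == 7 else 0.0    # pretend point 7 best reduces the shape error
    update(p, reward)

print(max(range(n_points), key=lambda i: q[i]))   # usually 7 after training
```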

Relevance: 100.00%

Abstract:

The topic of this work concerns nonparametric permutation-based methods aiming to find a ranking (stochastic ordering) of a given set of groups (populations), gathering together information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 given groups or treatments when the main goal is to find an order while taking into account several aspects. As can be imagined, this problem is not only of theoretical interest but also has a recognised relevance in several fields, such as industrial experiments or behavioural sciences, and this is reflected by the vast literature on the topic, although the problem is sometimes associated with different keywords such as "stochastic ordering", "ranking", "construction of composite indices", etc., or even "ranking probabilities" outside of the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study, where several aspects of interest are allowed to vary within a reasonable practical range. These aspects comprise: sample size, number of variables, number of groups, and distribution of noise/error. The flexibility of the approach lies mainly in the several available choices for the test statistic and in the different types of experimental design that can be analysed. This renders the method easy to tailor to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written and is available on CRAN.
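The resampling principle behind the permutation approach can be illustrated with a minimal two-sample permutation test on a difference-of-means statistic; this generic sketch is not the SOUP implementation, and the data are made up.

```python
# Minimal two-sample permutation test (difference of means), illustrating the
# resampling principle behind permutation-based ranking of groups.
import random

def perm_test(x, y, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    observed = sum(x) / len(x) - sum(y) / len(y)
    pooled, nx = x + y, len(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                         # relabel the pooled observations
        stat = sum(pooled[:nx]) / nx - sum(pooled[nx:]) / (len(pooled) - nx)
        count += stat >= observed                   # one-sided: is group x "larger"?
    return (count + 1) / (n_perm + 1)               # permutation p-value

a = [5.1, 4.8, 5.6, 5.3, 5.0]
b = [4.2, 4.5, 4.1, 4.7, 4.4]
print(perm_test(a, b))                              # small p-value -> a ranked above b
```

Ranking G > 2 groups then amounts to combining such pairwise (or multivariate) permutation comparisons into a consistent ordering.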

Relevance: 100.00%

Abstract:

The simulation of ultrafast photoinduced processes is a fundamental step towards the understanding of the underlying molecular mechanism and the interpretation/prediction of experimental data. Performing a computer simulation of a complex photoinduced process is only possible by introducing some approximations but, in order to obtain reliable results, the need to reduce complexity must be balanced against the accuracy of the model, which should include all the relevant degrees of freedom and a quantitatively correct description of the electronic states involved in the process. This work presents new computational protocols and strategies for the parameterisation of accurate models for photochemical/photophysical processes based on state-of-the-art multiconfigurational wavefunction-based methods. The required ingredients for a dynamics simulation include potential energy surfaces (PESs) as well as electronic state couplings, which must be mapped across the wide range of geometries visited during the wavepacket/trajectory propagation. The developed procedures allow one to obtain solid and extended databases while reducing the computational cost as much as possible, thanks to, e.g., specific tuning of the level of theory for different PES regions and/or direct calculation of only the needed components of vectorial quantities (such as gradients or nonadiabatic couplings). The presented approaches were applied to three case studies (azobenzene, pyrene, visual rhodopsin), all requiring an accurate parameterisation but for different reasons. The resulting models and simulations allowed us to elucidate the mechanism and time scale of the internal conversion, reproducing or even predicting new transient experiments. The general applicability of the developed protocols to systems with different peculiarities, and the possibility to parameterise different types of dynamics on an equal footing (classical vs purely quantum), prove that the developed procedures are flexible enough to be tailored to each specific system, and pave the way for exact quantum dynamics with multiple degrees of freedom.
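A toy illustration of the PES-mapping step described above: energies are computed on a coarse grid of geometries and interpolated for later use in dynamics; the analytic energy function and the grid are illustrative stand-ins for the expensive multiconfigurational calculations of the thesis.

```python
# Toy illustration of mapping a potential energy surface (PES) on a grid of
# geometries and interpolating it for later dynamics. The analytic "energy"
# stands in for an expensive electronic-structure calculation.
import numpy as np

def ab_initio_energy(angle):
    """Placeholder for an electronic-structure call at a given torsion angle."""
    return 0.5 * (1.0 - np.cos(np.radians(angle)))   # arbitrary model profile

grid = np.linspace(0.0, 180.0, 13)                   # coarse set of geometries
energies = np.array([ab_initio_energy(a) for a in grid])

# Dense PES used during trajectory/wavepacket propagation
dense = np.linspace(0.0, 180.0, 181)
pes = np.interp(dense, grid, energies)

print(float(pes[90]), ab_initio_energy(90.0))        # interpolated vs "exact" at 90 deg
```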

Relevance: 90.00%

Abstract:

In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classic and the Bayesian paradigm. Our proposal approaches this issue by a decision-oriented method, which focuses on multiple testing control, without however leaving the preliminary-study perspective that an analysis of SMR indicators is asked to keep. We implement the control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on the knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, takes the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just by means of the single observation. This allows improving the test power in small areas and addressing more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities p_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated for any set of p_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the p_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the remaining areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all areas whose p_i is lower than a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by the practitioner willing to apply the model (by the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing level of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the small-areas, low-risk-level and spatially-correlated-risks scenarios, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low but specificity is high; in such scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be suggested. In cases where the number of true alternative hypotheses (number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and an FDR-hat = 0.15 based decision rule gains power while maintaining a high specificity. On the other hand, in non-small-areas and non-small-risk-level scenarios the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.1 based decision rules cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and the FDR control, except in non-small-areas and large-risk-level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
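The decision rule described above can be sketched in a few lines: given posterior probabilities that each area is not at high risk, the areas below a threshold are declared high-risk and the FDR of that selection is estimated by averaging their posterior probabilities; the numbers below are made up for illustration.

```python
# Sketch of the FDR-hat based selection rule: p_null[i] is the (e.g. MCMC-
# estimated) posterior probability that area i is NOT at high risk, i.e. that
# the null hypothesis of absence of risk is true. Areas with p_null below a
# threshold are declared high-risk; the estimated FDR is their average p_null.
import numpy as np

p_null = np.array([0.02, 0.05, 0.11, 0.40, 0.75, 0.90, 0.01, 0.08])

def fdr_hat(threshold):
    selected = p_null[p_null < threshold]          # areas declared at high risk
    return selected.mean() if selected.size else 0.0

# Largest selection whose estimated FDR stays within a prefixed level (e.g. 0.10)
for t in sorted(p_null, reverse=True):
    if fdr_hat(t + 1e-9) <= 0.10:
        print("threshold:", t, "estimated FDR:", round(fdr_hat(t + 1e-9), 3))
        break
```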

Relevance: 90.00%

Abstract:

There are different ways to perform cluster analysis of categorical data in the literature, and the choice among them is strongly related to the aim of the researcher, if we do not take into account time and economic constraints. The main approaches to clustering are usually distinguished into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects by a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in the classification of similar objects into groups, or one may be interested in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? In order to answer these questions, two approaches, namely a latent class model (mixture of multinomial distributions) and a partition-around-medoids one, are evaluated and compared by the Adjusted Rand Index, Average Silhouette Width and Pearson-Gamma indexes in a fairly wide simulation study. Simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; the size of the points is proportional to the number of points that overlap, and different colours are used according to cluster membership.
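A minimal sketch of the comparison criteria mentioned above: two clusterings of the same categorical data set are compared with the Adjusted Rand Index, and the Average Silhouette Width is computed from a simple matching dissimilarity; the labels and data are invented for illustration and do not come from the thesis' simulation study.

```python
# Comparing two clusterings of the same categorical data set with the Adjusted
# Rand Index, plus an Average Silhouette Width from a simple matching distance.
from sklearn.metrics import adjusted_rand_score, silhouette_score
import numpy as np

latent_class_labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
pam_labels          = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])

print("ARI:", round(adjusted_rand_score(latent_class_labels, pam_labels), 3))

# Silhouette for categorical data needs a dissimilarity matrix, e.g. the
# simple matching (mismatch) rate between observations (toy 3-variable data):
X = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0], [1, 0, 1], [1, 0, 1],
              [1, 1, 1], [2, 2, 0], [2, 2, 1], [2, 2, 1]])
D = (X[:, None, :] != X[None, :, :]).mean(axis=2)   # pairwise mismatch rate
print("ASW:", round(silhouette_score(D, pam_labels, metric="precomputed"), 3))
```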

Relevance: 90.00%

Abstract:

The continuous advancements and enhancements of wireless systems are enabling new compelling scenarios where mobile services can adapt according to the current execution context, represented by the computational resources available at the local device, the current physical location, the people in physical proximity, and so forth. Such services, called context-aware services, require the timely delivery of all relevant information describing the current context, and that introduces several unsolved complexities, spanning from low-level context data transmission up to context data storage and replication in the mobile system. In addition, to ensure correct and scalable context provisioning, it is crucial to integrate and interoperate with different wireless technologies (WiFi, Bluetooth, etc.) and modes (infrastructure-based and ad-hoc), and to use decentralized solutions to store and replicate context data on mobile devices. These challenges call for novel middleware solutions, here called Context Data Distribution Infrastructures (CDDIs), capable of delivering relevant context data to mobile devices while hiding all the issues introduced by data distribution in heterogeneous and large-scale mobile settings. This dissertation thoroughly analyzes CDDIs for mobile systems, with the main goal of achieving a holistic approach to the design of this type of middleware solution. We discuss the main functions needed by context data distribution in large mobile systems, and we advocate the precise definition and strict enforcement of quality-based contracts between context consumers and the CDDI, used to reconfigure the main middleware components at runtime. We present the design and the implementation of our proposals, both in simulation-based and in real-world scenarios, along with an extensive evaluation that confirms the technical soundness of the proposed CDDI solutions. Finally, we consider three highly heterogeneous scenarios, namely disaster areas, smart campuses, and smart cities, to further demonstrate the wide technical validity of our analysis and solutions under different network deployments and quality constraints.
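A toy sketch of context data distribution under a quality-based contract: subscribers declare a maximum acceptable staleness and the dispatcher delivers only context items that still satisfy it; the broker, fields and thresholds are illustrative assumptions, not the CDDI middleware of the dissertation.

```python
# Toy in-memory context broker enforcing a simple quality-based contract:
# each subscriber states the maximum staleness it accepts for a context topic,
# and items older than that are not delivered. Names/fields are assumptions.
import time

class ContextBroker:
    def __init__(self):
        self.subs = []          # list of (topic, max_staleness_s, callback)

    def subscribe(self, topic, max_staleness_s, callback):
        self.subs.append((topic, max_staleness_s, callback))

    def publish(self, topic, value, produced_at):
        age = time.time() - produced_at
        for t, max_stale, cb in self.subs:
            if t == topic and age <= max_stale:   # quality contract respected
                cb(value, age)

broker = ContextBroker()
broker.subscribe("location", max_staleness_s=5.0,
                 callback=lambda v, age: print(f"delivered {v} (age {age:.1f}s)"))
broker.publish("location", "room 42", produced_at=time.time() - 2.0)   # delivered
broker.publish("location", "room 42", produced_at=time.time() - 60.0)  # dropped
```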

Relevance: 90.00%

Abstract:

Recent research trends in computer-aided drug design have shown an increasing interest towards the implementation of advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift, exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19 related drugs. The main highlight emerging from this thesis is that network-based approaches can be considered useful methodologies to tackle different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same ‘systems-based’ approaches can be used for other diseases. To conclude, network-based tools are proving to be suitable in many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, also integrating different multi-domain information.
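The random walk with restart mentioned above for drug-target interaction prediction can be sketched on a toy network: starting from a seed drug, the walker repeatedly either follows a network link or restarts at the seed, and the stationary scores rank candidate targets; the adjacency matrix below is invented and much smaller than the heterogeneous networks used in the thesis.

```python
# Minimal random walk with restart (RWR) on a toy graph: nodes close to the
# seed node in the network receive high steady-state scores, which can be used
# to rank candidate (unobserved) drug-target links.
import numpy as np

A = np.array([[0, 1, 1, 0],        # toy network: drug, protein1, protein2, protein3
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
W = A / A.sum(axis=0)              # column-normalized transition matrix

def rwr(seed, restart=0.3, tol=1e-10):
    p0 = np.zeros(A.shape[0]); p0[seed] = 1.0
    p = p0.copy()
    while True:
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

scores = rwr(seed=0)               # affinity of each node to the seed drug
print(np.round(scores, 3))         # high-scoring, unlinked nodes = candidate targets
```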

Relevance: 80.00%

Abstract:

The ideal approach for the long-term treatment of intestinal disorders, such as inflammatory bowel disease (IBD), is represented by a safe and well-tolerated therapy able to reduce mucosal inflammation and maintain homeostasis of the intestinal microbiota. A combined therapy with antimicrobial agents, to reduce the antigenic load, and immunomodulators, to ameliorate the dysregulated responses, followed by probiotic supplementation, has been proposed. Because of the complementary mechanisms of action of antibiotics and probiotics, a combined therapeutic approach would give advantages in terms of enlargement of the antimicrobial spectrum, due to the barrier effect of probiotic bacteria, and limitation of some side effects of traditional chemotherapy (i.e. indiscriminate decrease of aggressive and protective intestinal bacteria, altered absorption of nutrient elements, allergic and inflammatory reactions). Rifaximin (4-deoxy-4’-methylpyrido[1’,2’-1,2]imidazo[5,4-c]rifamycin SV) is a product of synthesis experiments designed to modify the parent compound, rifamycin, in order to achieve low gastrointestinal absorption while retaining good antibacterial activity. Both experimental and clinical pharmacology clearly show that this compound is a non-systemic antibiotic with a broad spectrum of antibacterial action, covering Gram-positive and Gram-negative organisms, both aerobes and anaerobes. Being virtually non-absorbed, its bioavailability within the gastrointestinal tract is rather high, with intraluminal and faecal drug concentrations that largely exceed the MIC values observed in vitro against a wide range of pathogenic microorganisms. The gastrointestinal tract therefore represents the primary therapeutic target, and gastrointestinal infections the main indication. The negligible activity of rifaximin outside the enteric area minimizes both antimicrobial resistance and systemic adverse events. Fermented dairy products enriched with probiotic bacteria have developed into one of the most successful categories of functional foods. Probiotics are defined as “live microorganisms which, when administered in adequate amounts, confer a health benefit on the host” (FAO/WHO, 2002), and mainly include Lactobacillus and Bifidobacterium species. Probiotic bacteria exert a direct effect on the intestinal microbiota of the host and contribute to the organoleptic, rheological and nutritional properties of food. Administration of pharmaceutical probiotic formulations has been associated with therapeutic effects in the treatment of diarrhoea, constipation, flatulence, colonization by enteropathogens, gastroenteritis, hypercholesterolemia, IBD, such as ulcerative colitis (UC), Crohn’s disease, pouchitis, and irritable bowel syndrome. Prerequisites for probiotics are to be effective and safe. The characteristics of an effective probiotic for gastrointestinal tract disorders are tolerance to the upper gastrointestinal environment (resistance to digestion by enteric or pancreatic enzymes, gastric acid and bile), adhesion to the intestinal surface to lengthen the retention time, ability to prevent the adherence, establishment and/or replication of pathogens, production of antimicrobial substances, degradation of toxic catabolites by bacterial detoxifying enzymatic activities, and modulation of the host immune responses.
This study was carried out using a validated three-stage fermentative continuous system and was aimed at investigating the effect of rifaximin on the colonic microbial flora of a healthy individual, in terms of bacterial composition and production of fermentative metabolic end products. Moreover, this is the first study that investigates in vitro the impact of the simultaneous administration of the antibiotic rifaximin and the probiotic B. lactis BI07 on the intestinal microbiota. Bacterial groups of interest were evaluated using culture-based methods and molecular culture-independent techniques (FISH, PCR-DGGE). Metabolic outputs in terms of SCFA profiles were determined by HPLC analysis. The collected data demonstrated that rifaximin, as well as the combined antibiotic and probiotic treatment, did not drastically change the intestinal microflora, whereas bacteria belonging to Bifidobacterium and Lactobacillus significantly increased over the course of the treatment, suggesting a spontaneous upsurge of rifaximin resistance. These results are in agreement with a previous study, in which it was demonstrated that rifaximin administration in patients with UC affects the host with only minor variations of the intestinal microflora, and that the microbiota is restored over a wash-out period. In particular, several rifaximin-resistant Bifidobacterium mutants could be isolated during the antibiotic treatment, but they disappeared after the antibiotic suspension. Furthermore, bacteria belonging to Atopobium spp. and the E. rectale/Clostridium cluster XIVa increased significantly after the rifaximin and probiotic treatment. The Atopobium genus and the E. rectale/Clostridium cluster XIVa are saccharolytic, butyrate-producing bacteria, and for these characteristics they are widely considered health-promoting microorganisms. The absence of major variations in the intestinal microflora of a healthy individual and the significant increase in the concentrations of probiotic and health-promoting bacteria support the rationale of the administration of rifaximin as an efficacious and non-dysbiosis-promoting therapy, and suggest the efficacy of an antibiotic/probiotic combined treatment in several gut pathologies, such as IBD. To assess the use of an antibiotic/probiotic combination for the clinical management of intestinal disorders, genetic, proteomic and physiologic approaches were employed to elucidate the molecular mechanisms determining rifaximin resistance in Bifidobacterium, and the expected interactions occurring in the gut between these bacteria and the drug. The ability of an antimicrobial agent to select resistance is a relevant factor that affects its usefulness and may diminish its useful life. The rifaximin resistance phenotype was easily acquired by all the bifidobacteria analyzed [type strains of the most representative intestinal bifidobacterial species (B. infantis, B. breve, B. longum, B. adolescentis and B. bifidum) and three bifidobacteria included in a pharmaceutical probiotic preparation (B. lactis BI07, B. breve BBSF and B. longum BL04)] and persisted for more than 400 bacterial generations in the absence of selective pressure. Exclusion of any reversion phenomenon suggested two hypotheses: (i) stable and immobile genetic elements encode resistance; (ii) the drug moiety does not act as an inducer of the resistance phenotype, but enables selection of resistant mutants. Since point mutations in rpoB have been indicated as the principal factor determining rifampicin resistance in E. coli and M.
tuberculosis, whether a similar mechanism also occurs in Bifidobacterium was verified. The analysis of a 129 bp rpoB core region of several wild-type and resistant bifidobacteria revealed five different types of missense mutations in codons 513, 516, 522 and 529. Position 529 was a novel mutation site, not previously described, and position 522 appeared interesting for both the double point substitutions and the heterogeneous profile of nucleotide changes. The sequence heterogeneity of codon 522 in Bifidobacterium leads us to hypothesize an indirect role of its encoded amino acid in the binding with the rifaximin moiety. These results demonstrated the chromosomal nature of rifaximin resistance in Bifidobacterium, minimizing the risk factors for horizontal transmission of resistance elements between intestinal microbial species. Further proteomic and physiologic investigations were carried out using B. lactis BI07, a component of a pharmaceutical probiotic preparation, as a model strain. The choice of this strain was based on the following elements: (i) B. lactis BI07 is able to survive and persist in the gut; (ii) a proteomic overview of this strain has recently been reported. The involvement of metabolic changes associated with rifaximin resistance was investigated by proteomic analysis performed with two-dimensional electrophoresis and mass spectrometry. Comparative proteomic mapping of BI07-wt and BI07-res revealed that most differences in protein expression patterns were genetically encoded rather than induced by antibiotic exposure. In particular, the rifaximin resistance phenotype was characterized by increased expression levels of stress proteins. Overexpression of stress proteins was expected, as they represent a common non-specific response of bacteria when stimulated by different shock conditions, including exposure to toxic agents like heavy metals, oxidants, acids, bile salts and antibiotics. Also, positive transcription regulators were found to be overexpressed in BI07-res, suggesting that bacteria could activate compensatory mechanisms to assist the transcription process in the presence of RNA polymerase inhibitors. Other differences in the expression profiles were related to proteins involved in central metabolism; these modifications suggest metabolic disadvantages of the resistant mutants in comparison with sensitive bifidobacteria in the gut environment, without selective pressure, explaining their disappearance from the faeces of patients with UC after the interruption of antibiotic treatment. The differences observed between the BI07-wt and BI07-res proteomic patterns, as well as the high frequency of silent mutations reported for resistant mutants of Bifidobacterium, could be the consequence of an increased mutation rate, a mechanism which may lead to the persistence of resistant bacteria in the population. However, the in vivo disappearance of resistant mutants in the absence of selective pressure allows us to exclude the rise of compensatory mutations without loss of resistance. Furthermore, the proteomic characterization of the resistant phenotype suggests that rifaximin resistance is associated with a reduced bacterial fitness in B. lactis BI07-res, supporting the hypothesis of a biological cost of antibiotic resistance in Bifidobacterium. The hypothesis of rifaximin inactivation by bacterial enzymatic activities was tested by using liquid chromatography coupled with tandem mass spectrometry. Neither chemical modifications nor degradation derivatives of the rifaximin moiety were detected.
The exclusion of a biodegradation pattern for the drug was further supported by the quantitative recovery, in BI07-res culture fractions, of the total rifaximin amount (100 μg/ml) added to the culture medium. To confirm the main role of the mutation on the β chain of RNA polymerase in the acquisition of rifaximin resistance, the transcription activity of crude enzymatic extracts of BI07-res cells was evaluated. Although the inhibition effects of rifaximin on in vitro transcription were definitely higher for BI07-wt than for BI07-res, a partial resistance of the mutated RNA polymerase at rifaximin concentrations > 10 μg/ml was hypothesized, on the basis of the calculated differences in inhibition percentages between BI07-wt and BI07-res. Considering the resistance of whole BI07-res cells to rifaximin concentrations > 100 μg/ml, supplementary resistance mechanisms may take place in vivo. A barrier to rifaximin uptake in BI07-res cells was suggested in this study, on the basis of the larger portion of the antibiotic found bound to the cellular pellet compared with the portion recovered in the cellular lysate. Related to this finding, a resistance mechanism involving changes in membrane permeability was hypothesized. A previous study supports this hypothesis, demonstrating the involvement of surface properties and permeability in the natural resistance to rifampicin of mycobacteria, isolated from cases of human infection, which possessed a rifampicin-susceptible RNA polymerase. To understand the mechanism of the membrane barrier, variations in the percentages of saturated and unsaturated FAs and their methylation products in BI07-wt and BI07-res membranes were investigated. While saturated FAs confer rigidity to the membrane and resistance to stress agents, such as antibiotics, a high level of lipid unsaturation is associated with high fluidity and susceptibility to stresses. Thus, the higher percentage of saturated FAs during the stationary phase of BI07-res could represent a defence mechanism of the mutant cells to prevent antibiotic uptake. Furthermore, the increase of CFAs such as dihydrosterculic acid during the stationary phase of BI07-res suggests that this CFA could be more suitable than its isomer lactobacillic acid to interact with and prevent the penetration of exogenous molecules, including rifaximin. Finally, the impact of rifaximin on the immune regulatory functions of the gut was evaluated. A potential anti-inflammatory effect of rifaximin has been suggested, with reduced secretion of IFN-γ in a rodent model of colitis. Analogously, a significant decrease in IL-8, MCP-1, MCP-3 and IL-10 levels has been reported in patients affected by pouchitis, treated with a combined therapy of rifaximin and ciprofloxacin. Since rifaximin enables the in vivo and in vitro selection of resistant Bifidobacterium mutants with high frequency, the immunomodulation activities of rifaximin associated with a resistant B. lactis mutant were also taken into account. Data obtained from PBMC stimulation experiments suggest the following conclusions: (i) rifaximin does not exert any effect on the production of IL-1β, IL-6 and IL-10, whereas it weakly stimulates the production of TNF-α; (ii) B. lactis appears to be a good inducer of IL-1β, IL-6 and TNF-α; (iii) the combination of BI07-res and rifaximin exhibits a lower stimulation effect than BI07-res alone, especially for IL-6.
These results confirm the potential anti-inflammatory effect of rifaximin, and are in agreement with several studies that report a transient pro-inflammatory response associated with probiotic administration. The understanding of the molecular factors determining rifaximin resistance in the genus Bifidobacterium has practical significance at the pharmaceutical and medical level, as it represents the scientific basis justifying the simultaneous use of the antibiotic rifaximin and probiotic bifidobacteria in the clinical treatment of intestinal disorders.

Relevance: 80.00%

Abstract:

The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs, which is installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers in the switch of design methodology and speed up the development of new NoC-based systems on chip. In this thesis we propose an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. Here we propose a detailed analysis of this NoC topology and its routing algorithms. Furthermore, we propose Equalized, a new routing algorithm designed to optimize the use of the network resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped to develop;
• a simulation-based comprehensive comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to address the timing closure exception issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and allows reducing the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual-channel-based routers with multiple, flexible, small Multi-Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are reduced.
This thesis has been written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
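As a small illustration of the Spidergon topology analysed in the first point above, the sketch below implements a simple shortest-path first-hop decision on a ring of N nodes augmented with an "across" link to the node N/2 positions away; it is a generic, textbook-style rule, not the Equalized algorithm proposed in the thesis.

```python
# Illustrative first-hop routing decision on a Spidergon topology: N nodes on a
# ring (clockwise / counterclockwise links) plus an "across" link to the node
# N/2 positions away. A simple shortest-path heuristic, not the thesis' algorithm.

def spidergon_next_hop(src, dst, n):
    """Return the neighbour of src (cw, ccw or across) on a shortest path to dst."""
    cw = (dst - src) % n                 # hops going clockwise
    ccw = (src - dst) % n                # hops going counterclockwise
    if min(cw, ccw) > n // 4:            # destination is far: take the across link first
        return (src + n // 2) % n
    return (src + 1) % n if cw <= ccw else (src - 1) % n

n = 16
print(spidergon_next_hop(0, 2, n))   # 1  (stay on the ring, clockwise)
print(spidergon_next_hop(0, 7, n))   # 8  (cross the chip, then follow the ring)
```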

Relevance: 80.00%

Abstract:

This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, includes the possibility to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, in the latter case allowing the evaluation of the performance and the operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool includes different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
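A toy zero-dimensional stand-in for the kind of coupled engine/vehicle model the tool simulates (written in Python rather than Simulink for brevity): a crude engine torque map drives a longitudinal vehicle model integrated with an explicit Euler scheme; all parameters are invented for illustration and are not taken from the thesis.

```python
# Toy coupled engine/vehicle longitudinal model: engine torque drives the
# vehicle through a fixed overall gear ratio, and the dynamics are integrated
# with a simple explicit Euler scheme. All parameters are made up.

mass, wheel_radius, ratio = 1200.0, 0.3, 8.0       # kg, m, overall gear ratio
drag, rolling = 0.4, 150.0                         # aero coeff (N s^2/m^2), rolling force (N)
dt, speed = 0.01, 5.0                              # time step (s), initial speed (m/s)

def engine_torque(rpm, throttle):
    """Crude torque map: flat torque scaled by throttle, falling off at high rpm."""
    return throttle * max(0.0, 180.0 - 0.02 * max(0.0, rpm - 3000.0))

for step in range(int(10.0 / dt)):                 # simulate 10 s at full throttle
    rpm = speed / wheel_radius * ratio * 60.0 / (2 * 3.14159)
    force = engine_torque(rpm, throttle=1.0) * ratio / wheel_radius
    accel = (force - drag * speed ** 2 - rolling) / mass
    speed += accel * dt                            # explicit Euler integration

print(round(speed * 3.6, 1), "km/h after 10 s")
```

In an HIL setting, the loop body above would run in real time, with the throttle read from the actual ECU outputs and the simulated rpm and speed fed back as synthetic sensor signals.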

Relevance: 80.00%

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v 9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and the shielding power of LVD. The idea was to evaluate the feasibility of hosting a dark matter detector in the innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a great number of neutrons near the core. The second conclusion is that, if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, becoming of the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for the Monte Carlo simulation of neutron production in heavy materials, which are often used as shielding in low-background experiments.

Relevance: 80.00%

Abstract:

In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this new procedure also produced a huge amount of molecular knowledge related to them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, besides a low sequence similarity, most of the known bacterial protective antigens share structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of possible protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This new knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens and not dependent upon the prediction of their sub-cellular localization.
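The signature-based screening idea can be sketched as a simple lookup: candidate proteins are flagged when their (hypothetical) Pfam domain architecture matches one of the architectures shared by known protective antigens; the domain names and candidate identifiers below are invented, and this is not the MetaVaccinology tool itself.

```python
# Toy sketch of knowledge-driven antigen screening: candidates whose domain
# architecture matches an architecture shared by known protective antigens are
# prioritized. Domain names and protein identifiers are invented examples.

protective_architectures = {
    ("Leucine-rich_repeat", "Cell_wall_anchor"),
    ("Fibronectin_binding", "Cell_wall_anchor"),
}

candidates = {
    "candidate_A": ("Leucine-rich_repeat", "Cell_wall_anchor"),
    "candidate_B": ("ABC_transporter",),
    "candidate_C": ("Fibronectin_binding", "Cell_wall_anchor"),
}

hits = [name for name, arch in candidates.items() if arch in protective_architectures]
print(hits)   # proteins prioritized for experimental protection studies
```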