36 results for CRUCIAL ROLE
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
One of the ways by which the legal system has responded to different sets of problems is the blurring of the traditional boundaries of criminal law, both procedural and substantive. This study aims to explore under what conditions this trend leads to an improvement of society's welfare, by focusing on two distinguishing sanctions in criminal law: incarceration and social stigma. In analyzing how incarceration affects an individual's incentive to violate a legal standard, we considered the crucial role of the time constraint. This aspect has not been fully explored in the law and economics literature, especially with respect to whether it is preferable to impose a fine or a prison term. We observed that when individuals are heterogeneous with respect to wealth and wage income, and when the level of activity can be considered a normal good, only the middle-wage and middle-income groups can be adequately deterred by a regime of fixed fines alone. The existing literature only considers the case of the very poor, deemed judgment-proof. However, since imprisonment is a socially costly way to deprive individuals of their time, other alternatives may be sought, such as discriminatory monetary fines, partial incapacitation and other alternative sanctions. According to traditional legal theory, criminal law is obeyed mainly not because of monetary sanctions but because of the stigma arising from the community's moral condemnation that accompanies conviction or even mere suspicion. However, it is not sufficiently clear whether social stigma always accompanies a criminal conviction. We addressed this issue by identifying the circumstances in which a criminal conviction carries an additional social stigma. Our results show that social stigma accompanies a conviction under the following conditions: first, when the law coincides with society's social norms; and second, when the prohibited act provides information on an unobservable attribute or trait of an individual that is crucial in establishing or maintaining social relationships beyond mere economic relationships. Thus, even if the social planner does not impose the social sanction directly, the impact of social stigma can still be influenced by the probability of conviction and the level of the monetary fine imposed, as well as by the varying degree of correlation between the legal standard violated and the social traits or attributes of the individual. In this respect, criminal law serves as an institution that facilitates cognitive efficiency in the process of imposing the social sanction, to the extent that the rest of society is boundedly rational and uses judgment heuristics. Paradoxically, using criminal law to invoke stigma for the violation of a legal standard may also undermine its strength. To sum up, our analysis reveals that the scope of criminal law is narrow both for the purposes of deterrence and for cognitive efficiency. While there are certain conditions under which the enforcement of criminal law may increase social welfare, particularly with respect to incarceration and stigma, we have also identified the channels through which these sanctions affect behavior. Since such mechanisms can be replicated in less costly ways, society should first seek to employ those less costly alternatives and turn to criminal law only as a last resort.
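As a minimal formal sketch of the deterrence logic summarized above (the notation is ours, not the thesis's): a risk-neutral individual with illicit benefit b, wealth y and wage w violates the standard whenever the benefit exceeds the expected sanction, so with probability of conviction p, fine f and prison term T,

\[ b > p \,\min(f, y) \quad \text{(fines only)}, \qquad b > p \left[ \min(f, y) + w\,T \right] \quad \text{(fine plus imprisonment)}, \]

which shows why the very poor (y < f) are judgment-proof under fines alone, and why the opportunity cost of time, wT, lets imprisonment deter across wage groups where a fixed fine cannot.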
Abstract:
In the first part of my thesis I studied the mechanism of initiation of the innate response to HSV-1. The innate immune response is the first line of defense set up by the cell to counteract pathogen infection, and it is elicited by the activation of a number of membrane or intracellular receptors and sensors, collectively indicated as PRRs, Pattern Recognition Receptors. We reported that the HSV pathogen-associated molecular patterns (PAMPs) that activate Toll-like receptor 2 (TLR2) and lead to the initiation of the innate response are the virion glycoproteins gH/gL and gB, which constitute the fusion core apparatus conserved across herpesviruses. Specifically, gH/gL is sufficient to initiate a signaling cascade leading to NF-κB activation. Then, by gain- and loss-of-function approaches, we found that αvβ3-integrin is a sensor of, and plays a crucial role in, the innate defense against HSV-1. We showed that αvβ3-integrin signals through a pathway that concurs with TLR2 and affects the activation/induction of type 1 interferons (IFN-1), NF-κB, and a polarized set of cytokines and receptors. Thus, we demonstrated that gH/gL is sufficient to induce IFN-1 and NF-κB via this pathway. From these data, we proposed that αvβ3-integrin be considered a member of a class of non-TLR pattern recognition receptors. In the second part of my thesis I studied the capacity of human mesenchymal stromal cells isolated from fetal membranes (FM-hMSCs) to be used as carrier cells for the delivery of the retargeted R-LM249 virus. The use of systemically administered carrier cells to deliver oncolytic viruses to tumour targets is a promising strategy in oncolytic virotherapy. We observed that FM-hMSCs can be infected by R-LM249 and optimized the infection conditions; we then demonstrated that stromal cells sustain the replication of retargeted R-LM249 and spread it to target tumour cells. From these preliminary data, FM-hMSCs appear suitable for use as carrier cells.
Abstract:
Childhood neuroblastoma is the most common solid tumour of infancy and is highly refractory to therapy. One of the most powerful prognostic indicators for this disease is amplification of the N-Myc gene, which occurs in approximately 25% of all neuroblastomas. N-Myc is a transcription factor belonging to a subclass of the larger group of proteins sharing the Basic-Region/Helix-Loop-Helix/Leucine-Zipper (BR/HLH/LZ) motif. N-Myc oncoproteins may determine activation or repression of several genes through different protein-protein interactions that can modulate their transcriptional regulatory ability and therefore their potential for oncogenicity. Chromatin modifications, including histone methylation, have a crucial role in the transcriptional deregulation of many cancer-related genes. Here, it was investigated whether N-Myc can functionally and/or physically interact with two different factors involved in methyl-histone modification: WDR5 (a core member of the MLL/Set1 methyltransferase complex) and the demethylase LSD1. Co-IP assays demonstrated the presence of both N-Myc-WDR5 and N-Myc-LSD1 complexes in two neuroblastoma cell lines. Human N-Myc-amplified cell lines were used as a model system to investigate the transcription activation and/or repression mechanisms carried out by the N-Myc-LSD1 and N-Myc-WDR5 protein complexes. qRT-PCR and immunoblot assays underlined the ability of both complexes to positively (N-Myc-WDR5) and negatively (N-Myc-LSD1) influence the transcriptional regulation of critical neuroblastoma N-Myc-related genes: MDM2, p21 and Clusterin. ChIP experiments revealed the binding of the above-mentioned N-Myc complexes to the gene promoters analysed. Finally, pharmacological treatments aimed at abolishing N-Myc and LSD1 activity were performed to test for cellular alterations, such as changes in cell viability and cell-cycle progression. Overall, the results presented in this work suggest that N-Myc can interact with two distinct histone methyl modifiers to positively and negatively affect gene transcription in neuroblastoma.
Abstract:
Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, transducing physico-chemical stimuli into ion fluxes. At a glance, ion channels are water-filled pores that can open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and on the numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive and was used for an extensive analysis of the molecular determinants of channel conductance. The first record of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents. The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning and predict the effects of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapters 3 and 4 ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged.
The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single-channel conductance. The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs is compared. Both experimental measurements and molecular modeling were used to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions. To conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and were investigated by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions. However, the simulation of such an extreme condition could help to identify the structural conformations, and thus the functional states, accessible to potassium ion channels. The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible use in technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several pieces of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents and numerical simulations. This thesis is organized in two parts: the first provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
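For reference, the drift-diffusion (Poisson-Nernst-Planck) framework used in Chapters 3 and 4 couples, for each ion species i, a continuum Nernst-Planck flux to the electrostatic potential (standard form of the electrodiffusion theory; symbols follow the usual convention):

\[ \mathbf{J}_i = -D_i \left( \nabla c_i + \frac{z_i e}{k_B T}\, c_i \nabla \phi \right), \qquad \nabla \cdot \mathbf{J}_i = 0 \ \text{(steady state)}, \]
\[ \nabla \cdot \left( \varepsilon \nabla \phi \right) = -\Big( \rho_{\text{fixed}} + \sum_i z_i e\, c_i \Big), \]

where c_i, D_i and z_i are the concentration, diffusion coefficient and valence of species i, and φ is the electrostatic potential; the single-channel conductance then follows from the total ionic current at a given applied voltage.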
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the grouping of the individuals. Even though the methods to analyse these data are today well developed and close to reaching a standard organization (through the efforts of proposed international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians within specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps involved in an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analyses in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000, based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. In Chapter 4, a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4] is described. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second one. In this work we show that the Relevance Vector Machine (RVM) [4] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM). Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumour-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class G2 as a test set to obtain, for each G2 sample, the probability of being a member of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
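A minimal sketch of the MultiSAM resampling loop described above, in Python; the sam_test callable standing in for a full SAM analysis is a placeholder assumption, as are the array shapes:

    import numpy as np

    def multisam(lpc, mpc, sam_test, n_iter=1000, seed=0):
        """Reiterated SAM: compare the less populated class (LPC) against
        repeated random subsamples of the more populated class (MPC).
        lpc, mpc: (n_samples, n_probes) expression matrices.
        sam_test: returns indices of differentially expressed probes.
        Returns a per-probe score in [0, n_iter] counting how often each
        probe is called differentially expressed across the reiterations."""
        rng = np.random.default_rng(seed)
        score = np.zeros(lpc.shape[1], dtype=int)
        for _ in range(n_iter):
            # draw a random MPC subsample of the same size as the LPC
            idx = rng.choice(mpc.shape[0], size=lpc.shape[0], replace=False)
            score[sam_test(lpc, mpc[idx])] += 1
        return score

    # probes with score > 300 were retained in the study described above:
    # selected = np.where(multisam(lpc, mpc, sam_test) > 300)[0]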
Abstract:
Context-aware computing is currently considered the most promising approach to overcome information overload and to speed up access to relevant information and services. Context-awareness may be derived from many sources, including user profile and preferences, network information, and sensor analysis; usually context-awareness relies on the ability of computing devices to interact with the physical world, i.e. with the natural and artificial objects hosted within the "environment". Ideally, context-aware applications should not be intrusive and should be able to react according to the user's context, with minimum user effort. Context is an application-dependent multidimensional space, and location has been an important part of it from the very beginning. Location can be used to guide applications in providing the information or functions that are most appropriate for a specific position. Hence location systems play a crucial role. There are several technologies and systems for computing location to varying degrees of accuracy, each tailored to a specific space model, i.e. indoors or outdoors, structured or unstructured spaces. The research challenge faced by this thesis is pedestrian positioning in heterogeneous environments. In particular, the focus is on pedestrian identification, localization, orientation and activity recognition. This research was mainly carried out within the "mobile and ambient systems" workgroup of EPOCH, an FP6 Network of Excellence on the application of ICT to Cultural Heritage. Therefore, applications in Cultural Heritage sites were the main target of the context-aware services discussed. Cultural Heritage sites are considered significant test-beds for context-aware computing for many reasons. For example, building a smart environment in museums or in protected sites is a challenging task, because localization and tracking are usually based on technologies that are difficult to hide or harmonize within the environment. It is therefore expected that the experience gained in this research may be useful also in domains other than Cultural Heritage. This work presents three different approaches to pedestrian identification, positioning and tracking: pedestrian navigation by means of a wearable inertial sensing platform assisted by a vision-based tracking system for initial settings and real-time calibration; pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS measurements (see the sketch below); and pedestrian identification and tracking, combining the vision-based tracking system with WiFi localization. The proposed localization systems have been mainly used to enhance Cultural Heritage applications, providing information and services depending on the user's actual context, in particular on the user's location.
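To make the second approach concrete, here is a minimal sketch of inertial pedestrian dead reckoning corrected by GPS fixes; the step length, step detection and blending weight are illustrative assumptions, not the thesis's actual algorithm (which would typically use a Kalman-style filter):

    import numpy as np

    def dead_reckoning_step(pos, heading, step_length=0.7):
        """Advance the 2D position by one detected step along the heading."""
        return pos + step_length * np.array([np.cos(heading), np.sin(heading)])

    def fuse_gps(pos_dr, pos_gps, alpha=0.3):
        """Blend the dead-reckoned estimate with a GPS fix; a Kalman
        filter would adapt alpha from the respective uncertainties."""
        return (1.0 - alpha) * pos_dr + alpha * np.asarray(pos_gps)

    pos = np.zeros(2)                      # start at the origin, in metres
    for heading, gps_fix in [(0.0, None), (0.1, (1.4, 0.2))]:
        pos = dead_reckoning_step(pos, heading)
        if gps_fix is not None:            # GPS fixes arrive sporadically
            pos = fuse_gps(pos, gps_fix)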
Abstract:
This thesis is dedicated to the analysis of non-linear pricing in oligopoly. Non-linear pricing is a fairly predominant practice in most real markets, which are mostly characterized by some amount of competition. The sophistication of pricing practices has increased in recent decades due to technological advances that have allowed companies to gather more and more data on consumer preferences. The first essay of the thesis highlights the main characteristics of oligopolistic non-linear pricing. Non-linear pricing is a special case of price discrimination. The theory of price discrimination has to be modified in the presence of oligopoly: in particular, a crucial role is played by the competitive externality, which implies that product differentiation is closely related to the possibility of discriminating. The essay reviews the theory of competitive non-linear pricing starting from its foundations, mechanism design under common agency. The different approaches to modeling non-linear pricing are then reviewed. In particular, the difference between price and quantity competition is highlighted. Finally, the close link between non-linear pricing and recent developments in the theory of vertical differentiation is explored. The second essay shows how the effects of non-linear pricing are determined by the relationship between the demand and the technological structure of the market. The chapter focuses on a model in which firms supply a homogeneous product in two different sizes. Information about consumers' reservation prices is incomplete and the production technology is characterized by size economies. The model provides insights into the sizes of the products that one finds in the market. Four equilibrium regions are identified, depending on the intensity of size economies relative to consumers' valuation of the good: regions in which the product is supplied in a single unit, in several different sizes, or only in a very large one. Both the private and the social desirability of non-linear pricing vary across the equilibrium regions. The third essay considers the broadband internet market. Non-discrimination issues seem to be at the core of the recent debate on whether or not to regulate the internet. One of the main questions posed is whether the telecom companies, which own the networks constituting the internet, should be allowed to offer quality-contingent contracts to content providers. The aim of this essay is to analyze the issue through a stylized two-sided market model of the web that highlights the effects of such discrimination on quality, prices and the participation of providers and final users in the internet. An overall welfare comparison is proposed, concluding that the final effects of regulation crucially depend on both the technology and the preferences of agents.
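As a reminder of the mechanism-design foundation mentioned above (standard screening notation, not drawn from the thesis itself): a firm offers a menu of quantity-payment pairs (q, T) and each consumer type self-selects, so with two types \theta_H > \theta_L the binding constraints are incentive compatibility for the high type and participation for the low type,

\[ \theta_H v(q_H) - T_H \ \ge\ \theta_H v(q_L) - T_L, \qquad \theta_L v(q_L) - T_L \ \ge\ 0, \]

with the oligopolistic twist that each constraint must also hold against the rival firms' menus, which is precisely the competitive externality discussed in the first essay.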
Abstract:
Subduction zones are the favoured settings for tsunamigenic earthquakes, where friction between the oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great earthquakes that generate tsunamis. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture and the slip distribution along the fault area, and by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the consequent tsunami and thus to mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful to constrain the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are not able to constrain. In this thesis I have discussed the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I have presented the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out the crucial role that the depth of the rupture plays in controlling tsunamigenesis. Finally, we presented a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we presented a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. The estimation of the source-zone rigidity is important since it may play a significant role in tsunami generation and, particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment may generate significant tsunamis; this latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems (particularly in the near field), for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
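Such slip inversions are commonly posed as a regularized linear least-squares problem: precomputed Green's functions map unit slip on each fault patch to the recorded signals, and the slip is required to be non-negative. A minimal sketch under these standard assumptions (the thesis's actual parametrization may differ):

    import numpy as np
    from scipy.optimize import nnls

    def invert_slip(G, d, damping=0.1):
        """Recover patch slip from observations d given Green's functions G.
        G: (n_observations, n_patches), one column per unit-slip patch;
        d: stacked tsunami waveform / GPS / altimetry data.
        Tikhonov damping stabilizes the inversion; nnls enforces slip >= 0."""
        n = G.shape[1]
        G_aug = np.vstack([G, damping * np.eye(n)])
        d_aug = np.concatenate([d, np.zeros(n)])
        slip, _ = nnls(G_aug, d_aug)
        return slip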
Abstract:
In this work I address the study of language comprehension in an "embodied" framework. Firstly, I show behavioral evidence supporting the idea that language modulates the motor system in a specific way, both at a proximal level (sensitivity to the effector) and at a distal level (sensitivity to the goal of the action in which the single motor acts are embedded). I present two studies in which the method is basically the same: we manipulated the linguistic stimuli (the kind of sentence: hand action vs. foot action vs. mouth action) and the effector by which participants had to respond (hand vs. foot vs. mouth; dominant hand vs. non-dominant hand). Response-time analyses showed a specific modulation depending on the kind of sentence: participants were facilitated in the task (a sentence sensibility judgment) when the effector they had to use to respond was the same as the one to which the sentences referred. That is, during language comprehension a pre-activation of the motor system seems to take place. This activation is analogous (even if less intense) to the one detectable when we actually execute the action described by the sentence. Beyond this effector-specific modulation, we also found an effect of the goal suggested by the sentence. That is, the hand effector was pre-activated not only by hand-action-related sentences, but also by sentences describing mouth actions, consistent with the fact that, to act on an object with the mouth, we first have to bring it to the mouth with the hand. After reviewing the evidence on simulation specificity directly referring to the body (for instance, the kind of effector activated by the language), I focus on specific properties of the object to which the words refer, particularly its weight. In this case the hypothesis to test was whether both the perception and the execution of lifting movements are modulated by language comprehension. We used behavioral and kinematic methods, and we manipulated the linguistic stimuli (the kind of sentence: the lifting of heavy objects vs. the lifting of light objects). To study movement perception, we measured the correlations between the weight of the objects lifted by an actor (heavy vs. light) and the estimates provided by the participants. To study movement execution, we measured the variance of kinematic parameters (velocity, acceleration, time to the first velocity peak) during the actual lifting of objects (heavy vs. light). Both kinds of measures revealed that language had a specific effect on the motor system, both at a perceptual and at a motor level. Finally, I address the issue of abstract words. Different studies in the "embodied" framework have tried to explain the meaning of abstract words. The limitation of these works is that they account only for subsets of phenomena, so the results are difficult to generalize. We tried to circumvent this problem by contrasting transitive verbs (abstract and concrete) and nouns (abstract and concrete) in different combinations. The behavioral study was conducted with both German and Italian participants, as the two languages are syntactically different. We found that response times were faster for the compatible pairs (concrete verb + concrete noun; abstract verb + abstract noun) than for the mixed ones. Interestingly, for the mixed combinations the analyses showed a modulation due to the specific language (German vs.
Italian): when the concrete word preceded the abstract one, responses were faster, regardless of the words' grammatical class. The results are discussed in the framework of current views on abstract words. They highlight the important role of developmental and social aspects of language use, and confirm theories assigning a crucial role to both sensorimotor and linguistic experience for abstract words.
Abstract:
The Ctr family is an essential part of the copper homeostasis machinery, and its members share sequence homology as well as structural and functional features. Higher eukaryotes express two members of this family, Ctr1 and Ctr2. Numerous structural and functional studies are available for Ctr1, the only high-affinity Cu(I) transporter thus far identified. Ctr1 homotrimers mediate cellular copper uptake, and this protein has been demonstrated to be essential for embryonic development and to play a crucial role in dietary copper acquisition. By contrast, very little is known about Ctr2; it bears structural homology to the yeast vacuolar copper transporter, which mediates the mobilization of vacuolar copper stores. Recent studies using over-expressed epitope-tagged forms of human Ctr2 suggested a function as a low-affinity copper transporter that can mediate either copper uptake from the extracellular environment or mobilization of lysosomal copper stores. Using an antibody that recognizes endogenous mouse Ctr2, we studied the expression and localization of endogenous mouse Ctr2 in cell culture and in mouse models, to understand its regulation and function in copper homeostasis. By immunoblot, we observed a regulation of mCtr2 protein levels in a copper- and Ctr1-dependent way. Our observations in cells and transgenic mice suggest that the lack of Ctr1 induces a strong downregulation of Ctr2, probably through a post-translational mechanism. By indirect immunofluorescence we observed an exclusively intracellular localization in a perinuclear compartment, with no co-localization with lysosomal markers. Immunofluorescence experiments in Ctr1-null cells, supported by sequence analysis, suggest that lysosomes may play a role in mCtr2 biology not as its resident compartment, but as a degradation site. In the appendix, an LC-MS method for the analysis of algal biotoxins belonging to the PSP (paralytic shellfish poisoning) family is described.
Abstract:
The β-Amyloid (βA) peptide is the major component of the senile plaques that are one of the hallmarks of Alzheimer's Disease (AD). It is well recognized that βA exists in multiple assembly states, such as soluble oligomers or insoluble fibrils, which affect neuronal viability and may contribute to disease progression. In particular, common βA-neurotoxic mechanisms are Ca2+ dyshomeostasis, reactive oxygen species (ROS) formation, altered signaling, mitochondrial dysfunction and neuronal death by necrosis and apoptosis. Recent studies show that the ubiquitin-proteasome pathway plays a crucial role in the degradation of short-lived and regulatory proteins that are important in a variety of basic and pathological cellular processes, including apoptosis. Guanosine (Guo) is a purine nucleoside present extracellularly in the brain that shows a spectrum of biological activities, under both physiological and pathological conditions. Recently, it has become recognized that both neurons and glia also release guanine-based purines. However, the role of Guo in AD is still not well established. In this study, we investigated the mechanistic basis of the neuroprotective effects of Guo against βA-induced toxicity in neuronal cells (SH-SY5Y), in terms of mitochondrial dysfunction and translocation of phosphatidylserine (PS), a marker of apoptosis, using the MTT and Annexin-V assays, respectively. In particular, treatment of SH-SY5Y cells with Guo (12.5-75 μM) in the presence of monomeric βA25-35 (the neurotoxic core of βA) or oligomeric and fibrillar βA1-42 peptides showed strong dose-dependent inhibitory effects on βA-induced toxic events. The maximum inhibition of mitochondrial function loss and PS translocation was observed with 75 μM Guo. Subsequently, to investigate whether the neuroprotection by Guo can be ascribed to its ability to modulate proteasome activity levels, we used lactacystin, a specific inhibitor of the proteasome. We found that the antiapoptotic effects of Guo were completely abolished by lactacystin. To test whether these effects resulted from an increase in proteasome activity induced by Guo, the chymotrypsin-like activity was assessed employing the fluorogenic substrate Z-LLL-AMC. The treatment of SH-SY5Y cells with Guo (75 μM for 0-6 h) induced a strong, time-dependent increase in proteasome activity. In parallel, no increase in ubiquitinated protein levels was observed under similar experimental conditions. We then evaluated the involvement of anti- and pro-apoptotic proteins such as Bcl-2, Bad and Bax by western blot analysis. Interestingly, Bax levels decreased after 2 h of treatment of SH-SY5Y cells with Guo. Taken together, these results demonstrate that the neuroprotective effects of Guo against βA-induced apoptosis are mediated, at least partly, via proteasome activation. In particular, these findings suggest a novel neuroprotective pathway mediated by Guo, which involves a rapid degradation of pro-apoptotic proteins by the proteasome. In conclusion, the present data raise the possibility that Guo could be used as an agent for the treatment of AD.
Abstract:
Until recently the debate on the ontology of spacetime had only a philosophical significance, since, from a physical point of view, General Relativity had been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of the Einstein equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental and recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models considered by the authors, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, unlike the points of homogeneous Newtonian space, they depend on a NIF (non-inertial frame) and are immersed in a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality", but the inevitably relational elements involved in the physical identification of point-events in the vacuum of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (the NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, which shares features with both the absolutist-substantivalist tradition and the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least initially, independent of the previous ones: an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping the results achieved by Lusanna and Pauri in mind and following their underlying philosophical assumptions, we intend to converge partially with their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, the characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical necessity of a theory of measurement.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have neither a tensorial nor, therefore, a geometric nature, but is analyzable at three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the connections level (which in the theory determines the forces acting on masses and, as such, offers a level of description related to the one that Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on what is called the "third level" seems to offer an immediate first advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatises analyzed, leads to the identification of the "ontic part" of the metric field. It is then shown how, at this last level, it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exerts on extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's affirmation that there exists a holistic, non-local and relational structure on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even if obtained from different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). In the end, matter, regarded as the physical counterpart of spacetime curvature whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, going back to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think that it is possible to see the holistic, relational, and non-local structure of spacetime also through the "behavior" of the Weyl tensor in terms of the Riemann tensor. This "behavior", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that matter acts indefinitely from elsewhere. Finally, we think that the characteristic relationality of the spacetime structure should be identified in this "primitive level of organization" of spacetime.
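For reference, in four dimensions the Riemann tensor splits into the Weyl tensor (the trace-free, "irreducible" part invoked above) plus Ricci terms:

\[ R_{abcd} = C_{abcd} + g_{a[c} R_{d]b} - g_{b[c} R_{d]a} - \tfrac{1}{3}\, R\, g_{a[c} g_{d]b}, \]

so in empty spacetimes, where the Einstein equations give R_{ab} = 0, the curvature is carried entirely by C_{abcd}: this is the precise sense in which the Weyl tensor encodes, at a point, the tidal effects of matter located elsewhere.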
Abstract:
The construction and use of multimedia corpora have been advocated for a while in the literature as one of the expected future application fields of Corpus Linguistics. This research project represents a pioneering experience aimed at applying a data-driven methodology to the study of the field of AVT (audiovisual translation), similarly to what has been done in the last few decades in the macro-field of Translation Studies. This research was based on the experience of Forlixt 1, the Forlì Corpus of Screen Translation, developed at the University of Bologna's Department of Interdisciplinary Studies in Translation, Languages and Culture. As a matter of fact, in order to quantify strategies of linguistic transfer of an AV product, we need to take into consideration not only the linguistic aspect of such a product but all the meaning-making resources deployed in the filmic text. Given that one major benefit of Forlixt 1 is the combination of audiovisual and textual data, this corpus allows the user to access primary data for scientific investigation, and thus no longer to rely on pre-processed material such as traditional annotated transcriptions. Based on this rationale, the first chapter of the thesis sets out to illustrate the state of the art of research in the disciplinary fields involved. The primary objective was to underline the main repercussions on multimedia texts resulting from the interaction of a double support, audio and video, and, accordingly, on the procedures, means, and methods adopted in their translation. By drawing on previous research in semiotics and film studies, the relevant codes at work in the visual and acoustic channels were outlined. Subsequently, we concentrated on the analysis of the verbal component and on the peculiar characteristics of filmic orality as opposed to spontaneous dialogic production. In the second part, an overview of the main AVT modalities is presented (dubbing, voice-over, interlinguistic and intralinguistic subtitling, audio-description, etc.) in order to define the different technologies, processes and professional qualifications that this umbrella term presently includes. The second chapter focuses diachronically on the contribution of various theories to the application of Corpus Linguistics' methods and tools to the field of Translation Studies (i.e. Descriptive Translation Studies, Polysystem Theory). In particular, we discuss how the use of corpora can help reduce the gap existing between qualitative and quantitative approaches. Subsequently, we review the tools traditionally employed by Corpus Linguistics for the construction of traditional "written language" corpora, to assess whether and how they can be adapted to meet the needs of multimedia corpora. In particular, we review existing speech and spoken corpora, as well as multimedia corpora specifically designed to investigate translation. The third chapter reviews Forlixt 1's main development steps, from a technical (IT design principles, data query functions) and methodological point of view, laying down extensive scientific foundations for the annotation methods adopted, which presently encompass categories of a pragmatic, sociolinguistic, linguacultural and semiotic nature. Finally, we describe the main query tools (free search, guided search, advanced search and combined search) and the main intended uses of the database from a pedagogical perspective.
The fourth chapter lists the specific compilation criteria retained, as well as statistics on the two sub-corpora, presenting data broken down by language pair (French-Italian and German-Italian) and genre (cinema comedies, television soap operas and crime series). Next, we concentrate on the discussion of the results obtained from the analysis of summary tables reporting the frequency of the categories applied to the French-Italian sub-corpus. The detailed observation of the distribution of the categories identified in the original and dubbed corpus allowed us to empirically confirm some of the theories put forward in the literature, notably those concerning the nature of the filmic text, the dubbing process and the features of Italian dubbed language. This was made possible by looking into some of the most problematic aspects, like the rendering of sociolinguistic variation. The corpus equally allowed us to consider thus far neglected aspects, such as pragmatic, prosodic, kinetic, facial, and semiotic elements, and their combination. At the end of this first exploration, some specific observations concerning possible macro-translation trends were made for each type of sub-genre considered (cinematic and TV genres). On the grounds of this first quantitative investigation, the fifth chapter sets out to examine the data further by applying ad hoc models of analysis. Given the virtually infinite number of combinations of the categories adopted, and of the latter with searchable textual units, three possible qualitative and quantitative methods were designed, each concentrating on a particular translation dimension of the filmic text. The first one was the cultural dimension, which focused specifically on the rendering of selected cultural references and on the investigation of recurrent translation choices and strategies, justified on the basis of the occurrence of specific clusters of categories. The second analysis was conducted on the linguistic dimension, exploring the occurrence of phrasal verbs in the Italian dubbed corpus and ascertaining the influence on the adoption of related translation strategies of possible semiotic traits, such as gestures and facial expressions. Finally, the main aim of the third study was to verify whether, under which circumstances, and through which modality, graphic and iconic elements were translated into Italian from an original corpus of both German and French films. After reviewing the main translation techniques at work, an exhaustive account of the possible causes of their non-translation was equally provided. By way of conclusion, the discussion of the results obtained from the distribution of annotation categories on the French-Italian corpus, as well as the application of specific models of analysis, allowed us to underline the possible advantages and drawbacks of adopting a corpus-based approach to AVT studies. Even though possible updates and improvements were proposed in order to help solve some of the problems identified, it is argued that the added value of Forlixt 1 lies ultimately in having created a valuable instrument that allows one to carry out empirically sound contrastive studies which may be usefully replicated on different language pairs and several types of multimedia texts. Furthermore, multimedia corpora can also play a crucial role in L2 and translation teaching, two disciplines in which their use still lacks systematic investigation.
Abstract:
Staphylococcus aureus and Staphylococcus epidermidis are leading pathogens of implant-related infections. This study aimed at investigating the distribution of different bacterial pathogenicity factors in the most prevalent S. aureus and S. epidermidis strain types causing orthopaedic implant infections. In this study, the presence of both the ica genes, encoding biofilm exopolysaccharide production, and the insertion sequence IS256, a mobile element frequently associated with transposons, was investigated in relation to the prevalence of antibiotic resistance among Staphylococcus epidermidis strains. The investigation was conducted on 70 clinical isolates derived from orthopaedic implant infections. Among the clinical isolates investigated, a dramatically high level of association was found between the presence of the ica genes as well as of IS256 and multiple resistance to all the antibiotics tested. Notably, a striking full association between the presence of IS256 and resistance to gentamicin was found, with none of the IS256-negative strains being resistant to this antibiotic. This association is probably due to the linkage of the corresponding aminoglycoside-resistance genes and IS256, which often co-exist within the same staphylococcal transposon. Moreover, we investigated the prevalence of the aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(4') genes, encoding the three forms of aminoglycoside-modifying enzymes (AMEs) responsible for resistance to aminoglycoside antibiotics. All isolates were characterized by automated ribotyping, so that the presence of antibiotic resistance determinants could be investigated in strains exhibiting different ribopatterns. Interestingly, combinations of coexisting AME genes appeared to be typical of specific ribopatterns. 200 S. aureus isolates, categorized into ribogroups by automated ribotyping, i.e. rDNA restriction fragment length polymorphism analysis, were screened for the presence of a panel of adhesin genes, accessory gene regulator (agr) polymorphisms and toxins. For many ribogroups, characteristic tandem gene arrangements could be identified. Surprisingly, the isolates of the most prevalent cluster, comprising 27 isolates, were susceptible to almost all antibiotics and never possessed the lukD/lukE gene, thus suggesting a role of factors other than antibiotic resistance and the toxins investigated here in driving the major epidemic clone to its greater success. Afterwards, in the predominant S. aureus cluster, the bbp gene, encoding the bone sialoprotein-binding protein, appeared to be a typical virulence trait, found in 93% of the isolates. Conversely, the bbp gene was identified in just 10% of the remaining isolates of the collection. In this cluster, the co-presence of bbp with the cna gene, encoding the collagen adhesin, was a consistently observed pattern. These findings indicate a crucial role of both these adhesins, able to bind the most abundant bone proteins, in the pathogenesis of orthopaedic implant infections, where biomaterials interface with bone tissue. Moreover, a PCR screening for the ebpS gene, conducted on over two hundred S. aureus clinical isolates from implant-related infections, revealed six strains exhibiting an altered amplicon size, shorter than expected. In order to elucidate the sequence changes present in these gene variants, the stretch comprised between the primers was analyzed in all six isolates bearing the modification and in four isolates exhibiting the regular amplicon size.
Translation of the nucleotide sequence showed that the corresponding encoded protein lacks an entire peptide segment of 60 amino acids. These variants, missing an entire hydrophobic region, could actually facilitate current structural studies, helping to assess whether the absent domain is strictly necessary for a functional adhesin conformation and what its contribution to the topology of the protein is. This study suggests that epidemic clones pursue different survival strategies, in which adhesins, when present, exhibit diverse importance as virulence factors. A practical message arising from the present study is that strategies for the prevention and treatment of orthopaedic implant infections should target adhesins conjointly present in epidemic clones. Furthermore, the choice of reference strains for testing the anti-infective properties of biomaterials should focus on a selection of the most prevalent clones, as they exhibit distinct adhesin profiles.
Abstract:
Organic electronics has grown enormously during the last decades, driven by encouraging results and by the potential of these materials to enable innovative applications, such as flexible large-area displays, low-cost printable circuits, plastic solar cells and lab-on-a-chip devices. Moreover, their possible fields of application reach from medicine, biotechnology, process control and environmental monitoring to defense and security requirements. However, a large number of questions regarding the mechanisms of device operation remain unanswered. Among the most significant is charge-carrier transport in organic semiconductors, which is not yet well understood. Another example is the correlation between morphology and electrical response. Even if it is recognized that the growth mode plays a crucial role in device performance, it has not been exhaustively investigated. The main goal of this thesis was to find a correlation between growth modes, electrical properties and morphology in organic thin-film transistors (OTFTs). In order to study the thickness dependence of electrical performance in organic ultra-thin-film transistors, we designed and developed a home-built experimental setup for real-time electrical monitoring and post-growth in situ electrical characterization. We grew pentacene TFTs under high-vacuum conditions, systematically varying the deposition rate at a fixed (room) temperature. The drain-source current IDS and the gate-source current IGS were monitored in real time, while a complete post-growth in situ electrical characterization was carried out. Finally, an ex situ morphological investigation was performed using the atomic force microscope (AFM). In this work, we present the correlation for pentacene TFTs between growth conditions, Debye length and morphology (through the correlation length parameter). We have demonstrated that there is a layered charge-carrier distribution, strongly dependent on the growth mode (i.e. deposition rate at a fixed temperature), leading to a variation of the conduction channel from 2 to 7 monolayers (MLs). We reconcile earlier reported results that were apparently contradictory. Our results make evident the necessity of reconsidering the concept of Debye length in a layered low-dimensional device. Additionally, we introduce for the first time a breakthrough technique that makes evident the percolation of the first MLs of pentacene TFTs by monitoring IGS in real time, correlating morphological phenomena with the device's electrical response. The present thesis is organized in the following five chapters. Chapter 1 gives an introduction to organic electronics, illustrating the operating principle of TFTs. Chapter 2 presents organic growth from theoretical and experimental points of view. The second part of this chapter presents the electrical characterization of OTFTs, and the typical performance of pentacene devices is shown. In addition, we introduce a correction technique for the reconstruction of measurements hampered by leakage currents. In Chapter 3, we describe in detail the design and operation of our innovative home-built experimental setup for real-time and in situ electrical measurements. Some preliminary results and the breakthrough technique for correlating morphological and electrical changes are presented.
Chapter 4 collects the most important results obtained in real-time and in situ conditions, which correlate growth conditions, electrical properties and morphology of pentacene TFTs. In Chapter 5 we describe applicative experiments in which the electrical performance of pentacene TFTs was investigated in ambient conditions, in contact with water or aqueous solutions and, finally, in the detection of DNA concentration as a label-free sensor, within the biosensing framework.
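For reference, the post-growth electrical characterization mentioned above typically extracts the field-effect mobility μ and the threshold voltage VT from the standard TFT current equations (gradual-channel approximation):

\[ I_{DS} = \frac{W}{L}\, \mu\, C_i \left[ (V_{GS} - V_T)\, V_{DS} - \tfrac{1}{2} V_{DS}^2 \right] \quad \text{(linear regime)}, \qquad I_{DS} = \frac{W}{2L}\, \mu\, C_i\, (V_{GS} - V_T)^2 \quad \text{(saturation)}, \]

where W and L are the channel width and length and C_i is the gate-dielectric capacitance per unit area.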