19 results for Semantic interference
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Despite new methods and combined strategies, conventional cancer chemotherapy still lacks specificity and induces drug resistance. Gene therapy offers the potential for success in the clinical treatment of cancer, and this can be achieved by replacing mutated tumour suppressor genes, inhibiting gene transcription, introducing new genes encoding therapeutic products, or specifically silencing any given target gene. Concerning gene silencing, attention has recently shifted to the RNA interference (RNAi) phenomenon. Gene silencing mediated by the RNAi machinery is based on short RNA molecules, small interfering RNAs (siRNAs) and microRNAs (miRNAs), which are fully or partially homologous, respectively, to the mRNA of the genes being silenced. On one hand, synthetic siRNAs are an important research tool for understanding the function of a gene, and the prospect of using siRNAs as potent and specific inhibitors of any target gene provides a new therapeutic approach for many untreatable diseases, particularly cancer. On the other hand, the discovery of the gene regulatory pathways mediated by miRNAs has offered the research community important new perspectives on the physiological and, above all, the pathological mechanisms underlying gene regulation. Indeed, changes in miRNA expression have been identified in several types of neoplasia, and it has also been proposed that the overexpression of genes in cancer cells may be due to the disruption of a control network in which relevant miRNAs are implicated. For these reasons, I focused my research on a possible link between RNAi and the enzyme cyclooxygenase-2 (COX-2) in the field of colorectal cancer (CRC), since it has been established that the adenoma-to-adenocarcinoma transition and the progression of CRC depend on aberrant constitutive expression of the COX-2 gene. In fact, overexpressed COX-2 is involved in the block of apoptosis and the stimulation of tumour angiogenesis, and promotes cell invasion, tumour growth and metastasis. On the basis of data reported in the literature, the first aim of my research was to develop an innovative and effective tool, based on the RNAi mechanism, able to strongly and specifically silence COX-2 expression in human colorectal cancer cell lines. In this study, I first show that an siRNA sequence directed against COX-2 mRNA (siCOX-2) potently downregulated COX-2 gene expression in human umbilical vein endothelial cells (HUVEC) and inhibited PMA-induced angiogenesis in vitro in a specific, non-toxic manner. Moreover, I found that the insertion of a specific cassette carrying the anti-COX-2 shRNA sequence (shCOX-2, the precursor of the siCOX-2 previously tested) into a viral vector (pSUPER.retro) greatly increased silencing potency in a colon cancer cell line (HT-29) without activating any interferon response. Phenotypically, COX-2-deficient HT-29 cells showed a significant impairment of their in vitro malignant behaviour. Thus, the results reported here indicate an easy-to-use, powerful and highly selective virus-based method to knock down the COX-2 gene in a stable and long-lasting manner in colon cancer cells. Furthermore, they open up the possibility of an in vivo application of this anti-COX-2 retroviral vector as a therapeutic agent for human cancers overexpressing COX-2. In order to improve tumour selectivity, the shCOX-2 expression cassette of the pSUPER.retro vector was modified.
The aim was to obtain strong, specific transcription of shCOX-2, followed by COX-2 silencing mediated by siCOX-2, only in cancer cells. For this reason, the H1 promoter in the basic pSUPER.retro vector [pS(H1)] was substituted with the human Cox-2 promoter [pS(COX2)] and with a promoter containing repeated copies of the TCF binding element (TBE) [pS(TBE)]. These promoters were chosen because they are particularly activated in colon cancer cells. COX-2 was effectively silenced in HT-29 and HCA-7 colon cancer cells by using the enhanced pS(COX2) and pS(TBE) vectors. In particular, higher siCOX-2 production, followed by stronger inhibition of the Cox-2 gene, was achieved using the pS(TBE) vector, which represents not only the most effective but also the most specific system to downregulate COX-2 in colon cancer cells. Because of the many limits that a retroviral therapy could have in a possible in vivo treatment of CRC, the next goal was to render the enhanced RNAi-mediated COX-2 silencing more suitable for this kind of application. Xiang et al. (2006) demonstrated that it is possible to induce RNAi in mammalian cells after infection with engineered E. coli strains expressing the Inv and HlyA genes, which encode two bacterial factors needed for the successful transfer of shRNA into mammalian cells. This system, called “trans-kingdom” RNAi (tkRNAi), could represent an optimal approach for the treatment of colorectal cancer, since E. coli is normally resident in the human intestinal flora and could easily be delivered to the tumour tissue. For this reason, I tested the improved COX-2 silencing mediated by the pS(COX2) and pS(TBE) vectors using the tkRNAi system. Results obtained in the HT-29 and HCA-7 cell lines were in close agreement with the data previously collected after transfection of the pS(COX2) and pS(TBE) vectors in the same cell lines. These findings suggest that the tkRNAi system for COX-2 silencing, in particular mediated by the pS(TBE) vector, could represent a promising tool for the treatment of colorectal cancer. Alongside the studies aimed at setting up an RNAi-mediated therapeutic strategy, I sought to advance the understanding of new molecular bases of human colorectal cancer. In particular, it is known that components of the miRNA/RNAi pathway may be altered during the progressive development of colorectal cancer (CRC), and it has already been demonstrated that some miRNAs work as tumour suppressors or oncomiRs in colon cancer. Thus, my hypothesis was that overexpressed COX-2 protein in colon cancer could be the result of decreased levels of one or more tumour suppressor miRNAs. In this thesis, I clearly show an inverse correlation between COX-2 expression and human miR-101(1) levels in colon cancer cell lines, tissues and metastases. I also demonstrate that the in vitro modulation of miR-101(1) expression in colon cancer cell lines leads to significant variations in COX-2 expression, and that this phenomenon is based on a direct interaction between miR-101(1) and COX-2 mRNA. Moreover, I started to investigate miR-101(1) regulation in the hypoxic environment, since adaptation to hypoxia is critical for tumour cell growth and survival and it is known that COX-2 can be induced directly by hypoxia-inducible factor 1 (HIF-1).
Surprisingly, I observed that COX-2 overexpression induced by hypoxia is always coupled to a significant decrease in miR-101(1) levels in colon cancer cell lines, suggesting that miR-101(1) regulation could be involved in the adaptation of cancer cells to the hypoxic environment that strongly characterizes CRC tissues.
Abstract:
The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware. Pervasive environments are characterized by a significant degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to adequately manage. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context, that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that make it possible to describe interacting entities, their operating conditions and the surrounding world, i.e., their context, according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployable on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and allow automated reasoning to infer additional and/or more complex knowledge from the available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to differentiated users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted to pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of these architectures, and (iv) the proposal of a viable approach to the portability issues raised by the adoption of semantic support services in pervasive applications.
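To make the idea of semantic context metadata more concrete, the following minimal Python sketch represents context as subject-predicate-object triples and runs a tiny forward-chaining step that derives higher-level facts an application could use to adapt. It is not taken from the thesis; all entity names, predicates and rules are invented for illustration.

    # A minimal sketch (invented names, not the thesis middleware): context is
    # represented as (subject, predicate, object) metadata triples, and simple
    # rules derive higher-level context facts that applications can use to adapt.

    context = {
        ("device:alice-phone", "hasBatteryLevel", 12),            # percent
        ("device:alice-phone", "connectedVia", "cellular-3g"),
        ("user:alice", "locatedIn", "room:conference-hall"),
        ("room:conference-hall", "partOf", "building:main-campus"),
    }

    def query(s=None, p=None, o=None):
        """Return all triples matching the pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in context
                if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)]

    def infer():
        """One forward-chaining pass: derive new context facts from the raw ones."""
        derived = set()
        # Rule 1: a device on a constrained link with a low battery is 'resource-poor'.
        for dev, _, level in query(p="hasBatteryLevel"):
            if level < 20 and query(s=dev, p="connectedVia", o="cellular-3g"):
                derived.add((dev, "hasProfile", "resource-poor"))
        # Rule 2: location propagates one step along 'partOf' containment.
        for who, _, place in query(p="locatedIn"):
            for _, _, container in query(s=place, p="partOf"):
                derived.add((who, "locatedIn", container))
        return derived

    context |= infer()

    # Adaptation decision driven by the inferred, higher-level context.
    if ("device:alice-phone", "hasProfile", "resource-poor") in context:
        print("adaptation: deliver low-resolution content to device:alice-phone")
    print(query(s="user:alice", p="locatedIn"))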
Abstract:
Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax - e.g., two tuples containing the same data may not match due to differences in the tuple structure - and (mostly) of semantics - e.g., two tuples representing the same information may not match because of the different syntax adopted. Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within Semantic Web middleware (analogous approaches have been surveyed in the literature). However, such solutions appear to be designed to tackle coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space whose behaviour can be programmed so as to react to interaction events. By encapsulating coordination laws within the coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible.
By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model based on an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
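As a rough illustration of what "semantic tuple centre" means in practice, the toy Python sketch below combines a Linda-like tuple space with two extra ingredients: tuple matching that consults a small, invented ontology, and a programmable reaction that fires on insertion events. It is not the thesis implementation (which extends an existing coordination infrastructure); every name here is made up for the example.

    # Toy sketch of the semantic tuple centre idea: a Linda-like space whose
    # matching consults a tiny ontology, plus a reaction programmed into the medium.

    SUBCLASS_OF = {                 # invented domain ontology
        "car": "vehicle",
        "truck": "vehicle",
        "vehicle": "thing",
    }

    def is_a(concept, ancestor):
        """True if `concept` equals `ancestor` or is a (transitive) subclass of it."""
        while concept is not None:
            if concept == ancestor:
                return True
            concept = SUBCLASS_OF.get(concept)
        return False

    class SemanticTupleCentre:
        def __init__(self):
            self.tuples = []        # tuples are (concept, properties) pairs
            self.reactions = []     # coordination laws fired on every `out`

        def out(self, concept, **props):
            tup = (concept, props)
            self.tuples.append(tup)
            for react in self.reactions:
                react(self, tup)

        def rd(self, concept, **constraints):
            """Non-destructive read: semantic match on the concept, exact match on properties."""
            for c, props in self.tuples:
                if is_a(c, concept) and all(props.get(k) == v for k, v in constraints.items()):
                    return (c, props)
            return None

    # Programmed reaction: whenever a vehicle tuple appears, emit a derived "parking_request".
    def on_vehicle(ts, tup):
        concept, props = tup
        if is_a(concept, "vehicle"):
            ts.tuples.append(("parking_request", {"for": props.get("plate")}))

    space = SemanticTupleCentre()
    space.reactions.append(on_vehicle)
    space.out("car", plate="AB123CD", colour="red")

    print(space.rd("vehicle"))          # syntactically a 'car', semantically a 'vehicle'
    print(space.rd("parking_request"))  # produced by the reaction, not by any agent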
Abstract:
This work is concerned with the increasing relationship between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. In fact, the main aim of this work is to increase users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. In order to achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools for linking document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to the publishing vocabulary, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.
Abstract:
Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload which often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes where many actors (humans and machines) are involved, leading to relevant socio-economic benefits. Starting from these two inputs, architectural and technological solutions to enable "environment-related cooperative digital services" are explored here. The proposed analysis starts from the consideration that our environment is a physical space in which diversity is a major value. On the other hand, diversity is detrimental to common technological solutions, and it is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to support environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied not only to represent application domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g., with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
Abstract:
The aim of the thesis is to investigate the topic of semantic under-determinacy, i.e. the failure of the semantic content of certain expressions to determine a truth-evaluable utterance content. In the first part of the thesis, I engage with the problem of setting semantic under-determinacy apart from other phenomena such as ambiguity, vagueness and indexicality. As I will argue, the feature that distinguishes semantic under-determinacy from these phenomena is that it is explainable solely in terms of under-articulation. In the second part of the thesis, I discuss how communication is possible despite the semantic under-determinacy of language. I discuss a number of answers that have been offered: (i) the Radical Contextualist explanation, which emphasises the role of pragmatic processes in utterance comprehension; (ii) the Indexicalist explanation in terms of hidden syntactic positions; (iii) the Relativist account, which regards sentences as true or false relative to extra coordinates in the circumstances of evaluation (besides possible worlds). In the final chapter, I propose an account of the comprehension of utterances of semantically under-determined sentences in terms of conceptual constraints, i.e. ways of organising information which regulate thought and discourse on certain matters. Conceptual constraints help the hearer to work out the truth-conditions of an utterance of a semantically under-determined sentence. Their role is clearly semantic, in that they contribute to “what is said” (rather than to “what is implied”); however, they do not respond to any syntactic constraint. The view I propose therefore differs, on the one hand, from Radical Contextualism, because it stresses the role of semantically governed processes as opposed to pragmatically governed processes; on the other hand, it differs from Indexicalism in not endorsing any commitment to hidden syntactic positions; and it differs from Relativism in that it maintains a monadic notion of truth.
Abstract:
This thesis collects the outcomes of a Ph.D. course in Telecommunications Engineering and is focused on enabling techniques for Spread Spectrum (SS) navigation and communication satellite systems. It provides innovations in both interference management and code synchronization techniques. These two aspects are critical for modern navigation and communication systems and constitute the common denominator of the work. The thesis is organized in two parts: the former deals with interference management. We have proposed a novel technique for enhancing the sensitivity of an advanced interference detection and localization system operating in the Global Navigation Satellite System (GNSS) bands, which allows the identification of interfering signals received with power even lower than that of the GNSS signals. Moreover, we have introduced an effective cancellation technique for signals transmitted by jammers which exploits their repetitive characteristics and strongly reduces the interference level at the receiver. The second part deals with code synchronization. In more detail, we have designed the code synchronization circuit for a Telemetry, Tracking and Control system operating during the Launch and Early Orbit Phase; the proposed solution copes with the very large frequency uncertainty and dynamics characterizing this scenario, and performs the estimation of the code epoch, the carrier frequency and the carrier frequency variation rate. Furthermore, considering a generic pair of circuits performing code acquisition, we have proposed a comprehensive framework for the design and analysis of the optimal cooperation procedure, which minimizes the time required to accomplish synchronization. The study is particularly interesting since it enables the reduction of the code acquisition time without increasing the computational complexity. Finally, considering a network of collaborating navigation receivers, we have proposed an innovative cooperative code acquisition scheme which exploits the code epoch information shared between neighboring nodes, according to the Peer-to-Peer paradigm.
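As background for the code-synchronization part, the following Python/NumPy sketch illustrates a standard parallel code-phase acquisition search over a Doppler grid: the receiver wipes off a candidate carrier, correlates against all code phases via an FFT, and picks the strongest cell. It is a generic textbook-style example with invented parameters, not the circuit or the cooperative procedures designed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (not from the thesis): a short PN code and a coarse Doppler grid.
    code_len = 1023                               # chips per code period (C/A-like length)
    fs = 1.023e6                                  # 1 sample per chip for simplicity
    doppler_bins = np.arange(-5000, 5001, 500)    # candidate Doppler shifts, Hz

    # Local PN code replica (random +/-1 chips stand in for a real spreading code).
    code = rng.choice([-1.0, 1.0], size=code_len)

    # Simulated received signal: delayed code, Doppler shift, additive noise.
    true_delay, true_doppler = 317, 1500.0
    t = np.arange(code_len) / fs
    rx = (np.roll(code, true_delay) * np.exp(2j * np.pi * true_doppler * t)
          + 0.5 * (rng.standard_normal(code_len) + 1j * rng.standard_normal(code_len)))

    # Parallel code-phase search: for each Doppler bin, wipe off the carrier and
    # correlate against all code phases at once via FFT-based circular correlation.
    grid = np.empty((doppler_bins.size, code_len))
    code_fft = np.fft.fft(code)
    for i, fd in enumerate(doppler_bins):
        baseband = rx * np.exp(-2j * np.pi * fd * t)
        corr = np.fft.ifft(np.fft.fft(baseband) * np.conj(code_fft))
        grid[i] = np.abs(corr) ** 2

    best = np.unravel_index(np.argmax(grid), grid.shape)
    print("estimated Doppler:", doppler_bins[best[0]], "Hz, code phase:", best[1], "chips")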
Abstract:
We have realized a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 Monolithic Active Pixel Sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was realized for the conversion between the chip and FPGA voltage levels and for signal quality enhancement. A Xilinx Spartan-3 FPGA was used for real-time data processing, chip control, and communication with a personal computer through a USB 2.0 port. For this purpose, firmware was developed in VHDL. Finally, a graphical user interface for online system monitoring, hit display and chip control, based on windows and widgets, was realized in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, though used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits obtained by a FIB technique. On high-statistics samples, it was possible to observe the interference and diffraction of single isolated electrons traveling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.
Abstract:
The main areas of research of this thesis are Interference Management and Link-Level Power Efficiency for Satellite Communications. The thesis is divided in two parts. Part I tackles the problem of interference environments in satellite communications and of interference mitigation strategies, not just in terms of avoidance of the interferers, but also in terms of actually exploiting the interference present in the system as a useful signal. The analysis follows a top-down approach across different levels of investigation, starting from system-level considerations on interference management, down to link-level aspects and to intra-receiver design. Interference management techniques are proposed at all levels of investigation, with interesting results. Part II is related to efficiency in the power domain, for instance in terms of the required input back-off at the power amplifiers, which can be an issue for waveforms based on linear modulations due to their varying envelope. To cope with such aspects, an analysis is carried out to compare linear modulations with waveforms based on constant-envelope modulations. It is shown that in some scenarios constant-envelope waveforms, even if at lower spectral efficiency, outperform linear modulation waveforms in terms of energy efficiency.
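The point about input back-off can be illustrated with a quick peak-to-average power ratio (PAPR) comparison. The Python sketch below contrasts a raised-cosine-shaped QPSK signal, whose envelope fluctuates, with an MSK-like constant-envelope signal; the waveform parameters are invented for illustration and are not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(2)
    sps, nsym, beta = 8, 2000, 0.25        # samples/symbol, symbols, roll-off (illustrative values)

    def raised_cosine(t, beta):
        """Raised-cosine pulse, symbol period normalised to 1."""
        num = np.sinc(t) * np.cos(np.pi * beta * t)
        den = 1.0 - (2.0 * beta * t) ** 2
        sing = np.abs(den) < 1e-8          # removable singularities at t = +/- 1/(2*beta)
        return np.where(sing, np.pi / 4 * np.sinc(1.0 / (2.0 * beta)), num / np.where(sing, 1.0, den))

    def papr_db(x):
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    # Linear modulation: QPSK symbols shaped with a band-limiting raised-cosine pulse.
    symbols = (rng.choice([-1, 1], nsym) + 1j * rng.choice([-1, 1], nsym)) / np.sqrt(2)
    upsampled = np.zeros(nsym * sps, dtype=complex)
    upsampled[::sps] = symbols
    taps = raised_cosine(np.arange(-8 * sps, 8 * sps + 1) / sps, beta)
    qpsk = np.convolve(upsampled, taps, mode="same")

    # Constant-envelope modulation: MSK-like signal (phase changes by +/- pi/2 per symbol).
    phase_steps = rng.choice([-1.0, 1.0], nsym) * np.pi / 2
    phase = np.repeat(phase_steps / sps, sps).cumsum()
    msk = np.exp(1j * phase)

    print(f"pulse-shaped QPSK PAPR: {papr_db(qpsk):.2f} dB  (drives the input back-off)")
    print(f"constant-envelope MSK PAPR: {papr_db(msk):.2f} dB")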
Abstract:
The research aims at developing a framework for semantic-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints of geometry from architectural treatises, as-built information of the architecture obtained from image-based modeling is integrated with the ideal model in a BIM platform. The knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D prints to 3D digital models, and creates a large number of variations based on shape grammar in real time thanks to parametric modeling. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) provides access to the reconstruction of large, complex architectural scenes with high flexibility, low cost and full automation, but low reliability of metric accuracy. We address this problem by combining photogrammetric approaches consisting of camera configuration, image enhancement, bundle adjustment, etc. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where a low-texture vault and dramatic transitions of illumination bring huge difficulties to the workflow without optimization. Once the as-built model is obtained, it is integrated with the ideal model in the BIM platform, which allows multiple data enrichments. In spite of its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel ways (ideal model and as-built model) and comparing their differences, we investigate how to create the as-built model in BIM software, which is still an open issue to be addressed. The research is intended to be fundamental for research on architectural history, documentation and conservation of architectural heritage, and renovation of existing buildings.
Abstract:
The main objective of the research is to reconstruct the state of the art in the field of eHealth and the Electronic Health Record (Fascicolo Sanitario Elettronico), with particular attention to the issues of personal data protection and interoperability. To this end, binding and non-binding European Union documents were examined, as well as selected European and national projects (such as "Smart Open Services for European Patients" (EU); "Elektronische Gesundheitsakte" (Austria); "MedCom" (Denmark); "Infrastruttura tecnologica del Fascicolo Sanitario Elettronico", "OpenInFSE: Realizzazione di un'infrastruttura operativa a supporto dell'interoperabilità delle soluzioni territoriali di fascicolo sanitario elettronico nel contesto del sistema pubblico di connettività", "Evoluzione e interoperabilità tecnologica del Fascicolo Sanitario Elettronico", "IPSE - Sperimentazione di un sistema per l'interoperabilità europea e nazionale delle soluzioni di Fascicolo Sanitario Elettronico: componenti Patient Summary e ePrescription" (Italy)). The legal and technical analyses show the urgent need to define models that encourage the use of health data and implement effective strategies for the secondary use of digital health data, such as Open Data and Linked Open Data. Legal and technological harmonization is seen as a strategic step towards reducing both the conflicts in personal data protection existing among Member States and the lack of interoperability between European information systems on Electronic Health Records. To this end, three guidelines have been identified: (1) regulatory harmonization, (2) harmonization of rules, (3) harmonization of information system design. The principles of Privacy by Design ("proactive" and "win-win"), as well as Semantic Web standards, are considered key to the aforementioned change.
Abstract:
This thesis collects the outcomes of a Ph.D. course in Telecommunications Engineering and is focused on the study and design of techniques able to counteract interference signals in Global Navigation Satellite Systems (GNSS). The subject is the jamming threat in navigation systems, which has become an increasingly important topic in recent years due to the wide diffusion of GNSS-based civil applications. Detection and mitigation techniques are developed in order to counter jamming signals, and are tested in different scenarios, including sophisticated signals. The thesis is organized in two main parts, which deal with the management of intentional counterfeit GNSS signals. The first part deals with interference management, focusing on the intentional interfering signal. In particular, a technique for the detection and localization of the interfering signal level in the GNSS bands in the frequency domain has been proposed. In addition, an effective mitigation technique has been introduced which exploits the periodic characteristics of common jamming signals to reduce the interfering effects at the receiver side. Moreover, this technique has also been tested in a different and more complicated scenario, where it remains effective in the mitigation and cancellation of the interfering signal, without high complexity. The second part still deals with the problem of interference management, but with regard to more sophisticated signals. The attention is focused on the detection of spoofing signals, which are the most complex among the jamming signal types. Because of the high difficulty of detecting and mitigating this kind of signal, the spoofing threat is considered the most dangerous. In this work, a technique able to detect this sophisticated signal has been proposed, jointly observing and exploiting the outputs of several measurement blocks along the GNSS receiver operating chain.
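To give a flavour of how the repetitive structure of common jammers can be exploited, the following Python sketch estimates the repetition period of a tiled jamming burst from the autocorrelation of the received signal and cancels it by subtracting the period-averaged template. It is a generic illustration with invented numbers, not the algorithm developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative numbers only: 50 repetitions of an 800-sample swept jammer burst.
    n, period = 40000, 800
    chips = 0.05 * rng.choice([-1.0, 1.0], size=n)     # weak GNSS-like signal, below the noise
    noise = 0.5 * rng.standard_normal(n)

    # Repetitive jammer: one swept-frequency burst tiled over the observation window.
    tau = np.arange(period)
    burst = 3.0 * np.cos(2 * np.pi * (0.01 * tau + 0.0002 * tau**2))
    jammer = np.tile(burst, n // period)
    rx = chips + noise + jammer

    # 1) Detection of the repetition period via FFT-based circular autocorrelation.
    acf = np.fft.irfft(np.abs(np.fft.rfft(rx)) ** 2)
    lag_min, lag_max = 100, 1500                       # coarse search window for candidate lags
    est_period = lag_min + int(np.argmax(acf[lag_min:lag_max]))

    # 2) Mitigation: average all repetitions to estimate the jammer waveform, then subtract it.
    reps = rx[: (n // est_period) * est_period].reshape(-1, est_period)
    template = reps.mean(axis=0)                       # GNSS chips and noise average out
    cleaned = (reps - template).ravel()

    jam_before = np.mean(jammer[: cleaned.size] ** 2)
    residual = cleaned - chips[: cleaned.size] - noise[: cleaned.size]
    print(f"estimated period: {est_period} samples")
    print(f"jammer power before: {jam_before:.3f}, after cancellation: {np.mean(residual**2):.5f}")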
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a substantial training set and notable computational effort. Methods for cross-domain text categorization have been proposed, which allow a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their respective representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain prove to be effectively reusable in a different one.
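A minimal sketch of the iterative nearest-centroid idea is given below, assuming TF-IDF document vectors and cosine similarity; the function name and all parameters are invented for illustration and this is not the method's actual implementation.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.preprocessing import normalize

    def cross_domain_nearest_centroid(src_docs, src_labels, tgt_docs, n_iter=5):
        """Toy sketch of iterative nearest-centroid adaptation: category profiles
        are built from the labeled source domain, then repeatedly re-estimated
        from the target-domain documents they attract."""
        vec = TfidfVectorizer(sublinear_tf=True, stop_words="english")
        X_src = vec.fit_transform(src_docs)    # rows already L2-normalised by TfidfVectorizer
        X_tgt = vec.transform(tgt_docs)
        cats = sorted(set(src_labels))

        # Initial centroids (category profiles) from the known, labeled domain.
        centroids = normalize(np.vstack([
            X_src[[i for i, y in enumerate(src_labels) if y == c]].mean(axis=0).A
            for c in cats]))

        for _ in range(n_iter):
            # Assign each target document to its most similar profile (cosine similarity) ...
            pred = np.asarray((X_tgt @ centroids.T).argmax(axis=1)).ravel()
            # ... then rebuild the profiles from the target documents themselves.
            centroids = normalize(np.vstack([
                X_tgt[pred == k].mean(axis=0).A if np.any(pred == k) else centroids[k:k + 1]
                for k in range(len(cats))]))
        return [cats[k] for k in pred]

For example, profiles learned from labeled documents in one domain (say, newswire articles) could be adapted in this way to categorize unlabeled documents from another domain covering the same topics.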