23 results for elettrica mobilità veicolare semantic java
Abstract:
This work is concerned with the increasing relationships between two distinct multidisciplinary research fields, Semantic Web technologies and scholarly publishing, which in this context converge into one precise research topic: Semantic Publishing. In the spirit of the original aim of Semantic Publishing, i.e. the improvement of scientific communication by means of semantic technologies, this thesis proposes theories, formalisms and applications for opening up semantic publishing to an effective interaction between scholarly documents (e.g., journal articles) and their related semantic and formal descriptions. The main aim of this work is to increase users' comprehension of documents and to allow document enrichment, discovery and linkage to document-related resources and contexts, such as other articles and raw scientific data. To achieve these goals, this thesis investigates and proposes solutions for three of the main issues that semantic publishing promises to address, namely: the need for tools that link document text to a formal representation of its meaning, the lack of complete metadata schemas for describing documents according to publishing vocabularies, and the absence of effective user interfaces for easily acting on semantic publishing models and theories.
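The kind of document-to-semantics linkage described above can be illustrated with a small sketch. The example below uses Apache Jena and the publicly available SPAR vocabularies (FaBiO, DoCO) purely for illustration; it is not tied to the tools actually developed in the thesis, and the article URI and titles are made up. It describes a journal article and one of its structural components as RDF, so that a piece of text is linked to a formal description of its role.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DCTerms;
import org.apache.jena.vocabulary.RDF;

public class ArticleMetadataSketch {
    // FaBiO and DoCO are part of the SPAR ontology suite; used here only as an example vocabulary.
    static final String FABIO = "http://purl.org/spar/fabio/";
    static final String DOCO  = "http://purl.org/spar/doco/";

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("fabio", FABIO);
        m.setNsPrefix("doco", DOCO);
        m.setNsPrefix("dcterms", DCTerms.NS);

        // A journal article described with publishing-oriented metadata (illustrative URI).
        Resource article = m.createResource("http://example.org/article/1")
                .addProperty(RDF.type, m.createResource(FABIO + "JournalArticle"))
                .addProperty(DCTerms.title, "An example article");

        // A structural component of the article, linked to the text it formally describes.
        Resource section = m.createResource("http://example.org/article/1#methods")
                .addProperty(RDF.type, m.createResource(DOCO + "Section"))
                .addProperty(DCTerms.title, "Methods");
        article.addProperty(DCTerms.hasPart, section);

        m.write(System.out, "TURTLE");
    }
}
```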
Abstract:
Many industries and academic institutions share the vision that an appropriate use of information originating from the environment may add value to services in multiple domains and may help humans deal with the growing information overload which often seems to jeopardize our lives. It is also clear that information sharing and mutual understanding between software agents may impact complex processes in which many actors (humans and machines) are involved, leading to significant socioeconomic benefits. Starting from these two inputs, architectural and technological solutions to enable “environment-related cooperative digital services” are explored here. The proposed analysis starts from the consideration that our environment is a physical space in which diversity is a major value. On the other hand, diversity is detrimental to common technological solutions and is an obstacle to mutual understanding. An appropriate environment abstraction and a shared information model are needed to provide the required levels of interoperability in our heterogeneous habitat. This thesis reviews several approaches to supporting environment-related applications and intends to demonstrate that smart-space-based, ontology-driven, information-sharing platforms may become a flexible and powerful solution to support interoperable services in virtually any domain and even in cross-domain scenarios. It also shows that semantic technologies can be fruitfully applied not only to represent application domain knowledge. For example, semantic modeling of Human-Computer Interaction may support interaction interoperability and the transformation of interaction primitives into actions, and the thesis shows how smart-space-based platforms driven by an interaction ontology may enable natural and flexible ways of accessing resources and services, e.g., with gestures. An ontology for computational flow execution has also been built to represent abstract computation, with the goal of exploring new ways of scheduling computation flows with smart-space-based semantic platforms.
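As an illustration of the smart-space idea, the sketch below shows how an interaction primitive (a gesture) might be published as triples in a shared information store and mapped to an action by a consumer querying that store. Apache Jena stands in for the actual smart-space platform, and the interaction-ontology namespace, class and property names are hypothetical, not those of the ontology developed in the thesis.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class SmartSpaceInteractionSketch {
    // Hypothetical interaction-ontology namespace, for illustration only.
    static final String INT = "http://example.org/interaction#";

    public static void main(String[] args) {
        // The shared model stands in for the smart space's information store.
        Model space = ModelFactory.createDefaultModel();

        // A producer publishes an interaction primitive (a gesture) as triples.
        Resource gesture = space.createResource(INT + "event/42")
                .addProperty(RDF.type, space.createResource(INT + "Gesture"))
                .addProperty(space.createProperty(INT, "hasType"), "swipe-left")
                .addProperty(space.createProperty(INT, "targetsService"),
                        space.createResource(INT + "MediaPlayer"));

        // A consumer maps interaction primitives to actions by querying the shared model.
        String q = "PREFIX int: <" + INT + "> " +
                   "SELECT ?type ?service WHERE { ?e a int:Gesture ; " +
                   "int:hasType ?type ; int:targetsService ?service }";
        try (QueryExecution qe = QueryExecutionFactory.create(q, space)) {
            qe.execSelect().forEachRemaining(row ->
                System.out.println(row.get("type") + " -> " + row.get("service")));
        }
    }
}
```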
Abstract:
In a situation characterized by the scarcity of the financial resources available to local authorities, which makes private contributions to the construction of public works necessary, and by the scarcity of environmental resources, which requires interventions to be sustainable, this thesis aims to make the construction of new road infrastructure “active” with respect to the context in which it is placed, ensuring the commitment of all parties involved. The goal is to obtain private contributions not only for the primary urbanization works serving the development itself, but also for the construction of road infrastructure that is not exclusively dedicated to it yet is necessary to guarantee its sustainability. This principle, also known as the “sustainability contribution”, is beginning to find application in planning practice, but it still suffers from some critical issues, since the cases developed so far are often based on considerations that lend themselves to disputes between private operators and public administrations. With the objective of defining a methodology to support negotiation, enabling an unambiguous and objective determination of the contribution to be requested from the developers of urban transformations for the construction of new road infrastructure, the work focuses on an operational method based on the adoption of four-step traffic simulation models. The proposed methodology was verified through application to a case study concerning the construction of a new road axis on the border between the municipalities of Castel Maggiore and Argelato. The axis, indispensable for guaranteeing accessibility to the new transformation areas in that quadrant, also makes it possible to resolve some existing traffic criticalities. The issue addressed is therefore the determination of the contribution that each user of the new axis will have to pay in order to allow its construction. In conclusion, some considerations are formulated on the usefulness of the proposed methodology and on its applicability to similar cases.
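For readers unfamiliar with four-step traffic models, the sketch below illustrates their basic structure (trip generation, gravity-based distribution, modal split, assignment) on a toy three-zone network. All zone data, the deterrence function and the assignment rule are invented for illustration and bear no relation to the calibrated model of the Castel Maggiore / Argelato case study.

```java
import java.util.Arrays;

/** Minimal four-step sketch: generation, gravity distribution, modal split, toy assignment. */
public class FourStepSketch {
    public static void main(String[] args) {
        // Step 1 - trip generation: trips produced/attracted per zone (made-up values).
        double[] produced  = {1200, 800, 500};
        double[] attracted = {900, 700, 900};
        double[][] cost = { {2, 8, 12}, {8, 3, 6}, {12, 6, 2} }; // inter-zone travel cost

        // Step 2 - trip distribution with a simple gravity model: T_ij ∝ P_i * A_j * f(c_ij).
        int n = produced.length;
        double[][] trips = new double[n][n];
        for (int i = 0; i < n; i++) {
            double denom = 0;
            for (int j = 0; j < n; j++) denom += attracted[j] * impedance(cost[i][j]);
            for (int j = 0; j < n; j++)
                trips[i][j] = produced[i] * attracted[j] * impedance(cost[i][j]) / denom;
        }

        // Step 3 - modal split: a fixed share of trips made by private car (illustrative).
        double carShare = 0.7;

        // Step 4 - assignment: car trips between i and j load the new axis when a toy rule
        // says it lies on their path (a real model would use network assignment).
        double axisLoad = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (usesNewAxis(i, j)) axisLoad += trips[i][j] * carShare;

        System.out.println("Trip matrix: " + Arrays.deepToString(trips));
        System.out.printf("Daily car trips on the new axis: %.0f%n", axisLoad);
    }

    static double impedance(double cost) { return Math.exp(-0.1 * cost); } // deterrence function
    static boolean usesNewAxis(int i, int j) { return i != j && (i == 2 || j == 2); } // toy rule
}
```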
Abstract:
The aim of the thesis is to investigate the topic of semantic under-determinacy, i.e. the failure of the semantic content of certain expressions to determine a truth-evaluable utterance content. In the first part of the thesis, I engage with the problem of setting apart semantic under-determinacy from other phenomena such as ambiguity, vagueness, and indexicality. As I will argue, the feature that distinguishes semantic under-determinacy from these phenomena is its being explainable solely in terms of under-articulation. In the second part of the thesis, I discuss how communication is possible despite the semantic under-determinacy of language. I discuss a number of answers that have been offered: (i) the Radical Contextualist explanation, which emphasises the role of pragmatic processes in utterance comprehension; (ii) the Indexicalist explanation in terms of hidden syntactic positions; (iii) the Relativist account, which regards sentences as true or false relative to extra coordinates in the circumstances of evaluation (besides possible worlds). In the final chapter, I propose an account of the comprehension of utterances of semantically under-determined sentences in terms of conceptual constraints, i.e. ways of organising information which regulate thought and discourse on certain matters. Conceptual constraints help the hearer work out the truth-conditions of an utterance of a semantically under-determined sentence. Their role is clearly semantic, in that they contribute to “what is said” (rather than to “what is implied”); however, they do not respond to any syntactic constraint. The view I propose therefore differs, on the one hand, from Radical Contextualism, because it stresses the role of semantically governed processes as opposed to pragmatically governed processes; on the other hand, it differs from Indexicalism in not endorsing any commitment to hidden syntactic positions; and it differs from Relativism in that it maintains a monadic notion of truth.
Abstract:
Constructing ontology networks typically occurs at design time at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without the networks interfering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and that can reduce the number of erroneously interpreted axioms under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks and the framework's caching capabilities.
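The runtime assembly of a virtual network can be sketched with the OWL API, one common choice of OWL ontology management software in Java (used here only as an assumed example; the thesis's Apache implementation and its 3-tier multiplexing are not reproduced). The idea shown is minimal: a fresh in-memory ontology imports two stored ontologies, so a reasoner can work on the imports closure without the persisted documents being modified. The ontology IRIs are placeholders.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.AddImport;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyCreationException;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class VirtualNetworkSketch {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager mgr = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = mgr.getOWLDataFactory();

        // Load two stored ontologies (placeholder IRIs; real locations would go here).
        OWLOntology a = mgr.loadOntology(IRI.create("http://example.org/onto/a"));
        OWLOntology b = mgr.loadOntology(IRI.create("http://example.org/onto/b"));

        // Assemble a "virtual network" at runtime: a fresh in-memory ontology that imports
        // the stored ones, leaving their persisted documents untouched.
        OWLOntology network = mgr.createOntology(IRI.create("http://example.org/virtual/net1"));
        mgr.applyChange(new AddImport(network,
                df.getOWLImportsDeclaration(IRI.create("http://example.org/onto/a"))));
        mgr.applyChange(new AddImport(network,
                df.getOWLImportsDeclaration(IRI.create("http://example.org/onto/b"))));

        // The imports closure is what a reasoner attached to this virtual network would see.
        System.out.println("Axioms in closure: "
                + network.getImportsClosure().stream().mapToInt(OWLOntology::getAxiomCount).sum());
    }
}
```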
Abstract:
The research aims at developing a framework for semantic-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints on geometry from architectural treatises, as-built information obtained from image-based modeling is integrated with the ideal model in a BIM platform. The knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D drawings into 3D digital models, and creates a large number of variations based on shape grammars in real time thanks to parametric modeling. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) provides access to the reconstruction of large, complex architectural scenes with high flexibility, low cost and full automation, but low reliability in metric accuracy. We address this problem by combining photogrammetric approaches, which include camera configuration, image enhancement and bundle adjustment. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where low-texture vaults and dramatic transitions in illumination pose major difficulties for the workflow without optimization. Once the as-built model is obtained, it is integrated with the ideal model in a BIM platform, which allows multiple forms of data enrichment. In spite of its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel ways (ideal model and as-built model) and comparing their differences, we address how to create as-built models in BIM software, which is still an open problem. The research is intended to be fundamental for research on architectural history, the documentation and conservation of architectural heritage, and the renovation of existing buildings.
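To give a flavour of the knowledge-based, parametric side of such a workflow, the sketch below derives the dimensions of a classical column from a single module and generates one ideal variation per surveyed module value. The proportional rules and numbers are invented for illustration only; they do not come from any specific treatise or from the shape grammar actually used in the research.

```java
import java.util.List;

/** Toy parametric model of a classical column: all dimensions derived from one module. */
public class ParametricColumnSketch {
    // Made-up proportional rules standing in for constraints extracted from a treatise.
    record Column(double module) {
        double shaftHeight()   { return 14 * module; }  // e.g. shaft = 14 modules
        double baseHeight()    { return 1 * module; }
        double capitalHeight() { return 1 * module; }
        double totalHeight()   { return shaftHeight() + baseHeight() + capitalHeight(); }
        double lowerDiameter() { return 2 * module; }
    }

    public static void main(String[] args) {
        // Each surveyed module value yields a full ideal model, generated parametrically.
        List<Double> surveyedModules = List.of(0.30, 0.32, 0.35); // metres, illustrative values
        for (double m : surveyedModules) {
            Column c = new Column(m);
            System.out.printf("module=%.2f m -> height=%.2f m, lower diameter=%.2f m%n",
                    m, c.totalHeight(), c.lowerDiameter());
        }
    }
}
```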
Abstract:
The main objective of the research is to reconstruct the state of the art in eHealth and Electronic Health Records, with particular attention to the issues of personal data protection and interoperability. To this end, binding and non-binding European Union documents were examined, together with selected European and national projects (such as “Smart Open Services for European Patients” (EU); “Elektronische Gesundheitsakte” (Austria); “MedCom” (Denmark); “Infrastruttura tecnologica del Fascicolo Sanitario Elettronico”, “OpenInFSE: Realizzazione di un’infrastruttura operativa a supporto dell’interoperabilità delle soluzioni territoriali di fascicolo sanitario elettronico nel contesto del sistema pubblico di connettività”, “Evoluzione e interoperabilità tecnologica del Fascicolo Sanitario Elettronico”, “IPSE - Sperimentazione di un sistema per l’interoperabilità europea e nazionale delle soluzioni di Fascicolo Sanitario Elettronico: componenti Patient Summary e ePrescription” (Italy)). The legal and technical analyses show the urgent need to define models that encourage the use of health data and to implement effective strategies for the secondary use of digital health data, such as Open Data and Linked Open Data. Legal and technological harmonization is seen as a strategic means of reducing both the conflicts in personal data protection that exist among Member States and the lack of interoperability among European Electronic Health Record information systems. To this end, three guidelines have been identified: (1) harmonization of legislation, (2) harmonization of rules, (3) harmonization of information system design. The principles of Privacy by Design (“proactive” and “win-win”), as well as Semantic Web standards, are regarded as key to achieving this change.
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and addresses practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and considerable computational effort. Methods for cross-domain text categorization have been proposed, allowing a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving parameter tuning. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model that distinguishes the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated in one domain are shown to be effectively reusable in a different one.
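The first contribution can be illustrated with a toy version of nearest-centroid classification with iterative adaptation: category profiles built in a source domain label the target-domain documents, and are then rebuilt from the documents they attract over a few iterations. The term weighting, similarity measure, and fixed iteration count below are simplifications chosen for illustration, not the exact algorithm or parameters of the thesis.

```java
import java.util.*;

/** Nearest-centroid classification with simple iterative adaptation to a target domain. */
public class CentroidAdaptationSketch {

    // Cosine similarity between sparse bag-of-words vectors.
    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (var e : a.entrySet()) {
            na += e.getValue() * e.getValue();
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
        }
        for (double v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Assign a document to the category whose profile (centroid) is most similar.
    static String nearest(Map<String, Map<String, Double>> centroids, Map<String, Double> doc) {
        return centroids.entrySet().stream()
                .max(Comparator.comparingDouble(e -> cosine(e.getValue(), doc)))
                .map(Map.Entry::getKey).orElseThrow();
    }

    /** Iteratively rebuild each centroid from the target-domain documents it currently attracts. */
    static Map<String, Map<String, Double>> adapt(Map<String, Map<String, Double>> centroids,
                                                  List<Map<String, Double>> targetDocs,
                                                  int iterations) {
        for (int it = 0; it < iterations; it++) {
            Map<String, Map<String, Double>> next = new HashMap<>();
            Map<String, Integer> counts = new HashMap<>();
            for (var doc : targetDocs) {
                String label = nearest(centroids, doc);
                var sum = next.computeIfAbsent(label, k -> new HashMap<>());
                doc.forEach((w, v) -> sum.merge(w, v, Double::sum));
                counts.merge(label, 1, Integer::sum);
            }
            // Average the accumulated vectors; keep the old centroid for categories with no documents.
            for (var e : next.entrySet())
                e.getValue().replaceAll((w, v) -> v / counts.get(e.getKey()));
            for (var e : centroids.entrySet()) next.putIfAbsent(e.getKey(), e.getValue());
            centroids = next;
        }
        return centroids;
    }

    public static void main(String[] args) {
        // Toy source-domain centroids and target-domain documents (term weights are made up).
        Map<String, Map<String, Double>> centroids = new HashMap<>(Map.of(
                "auto",  new HashMap<>(Map.of("engine", 1.0, "wheel", 0.8)),
                "space", new HashMap<>(Map.of("orbit", 1.0, "launch", 0.9))));
        List<Map<String, Double>> target = List.of(
                Map.of("engine", 0.7, "battery", 0.9),
                Map.of("orbit", 0.6, "satellite", 1.0));
        centroids = adapt(centroids, target, 3);
        System.out.println(nearest(centroids, Map.of("battery", 1.0, "engine", 0.5)));
    }
}
```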