913 results for wireless ad hoc network
Abstract:
User experience when watching live video must remain satisfactory even under the influence of changing network conditions and topology, as happens in Flying Ad Hoc Networks (FANETs). Routing services for video dissemination over FANETs must be able to adapt routing decisions at runtime to meet Quality of Experience (QoE) requirements. In this paper, we introduce an adaptive beaconless opportunistic routing protocol for video dissemination over FANETs with QoE support that takes into account multiple types of context information, such as link quality, residual energy and buffer state, as well as geographic information and node mobility in 3D space. The proposed protocol uses Bayesian networks to define weight vectors and the Analytic Hierarchy Process (AHP) to adjust the degree of importance of each piece of context information based on its instantaneous value. It also includes a position prediction mechanism that monitors the distance between two nodes in order to detect possible route failures.
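As a rough illustration of the weighting step described above, the sketch below derives a priority vector from an AHP pairwise-comparison matrix and uses it to score candidate relays. The criteria names, comparison values and candidate metrics are hypothetical placeholders; the actual protocol derives its judgements from Bayesian networks over instantaneous context values.

```python
import numpy as np

# Hypothetical AHP pairwise-comparison matrix for four context criteria:
# link quality, residual energy, buffer occupancy, geographic progress.
# The numbers below are only illustrative.
criteria = ["link_quality", "residual_energy", "buffer_state", "geo_progress"]
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

# The principal right eigenvector of A gives the AHP priority (weight) vector.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

def relay_score(metrics: dict) -> float:
    """Weighted sum of normalised context metrics (all in [0, 1], higher is better)."""
    return float(sum(w[i] * metrics[c] for i, c in enumerate(criteria)))

# Candidate next hops with normalised context values (illustrative only).
candidates = {
    "uav_a": {"link_quality": 0.9, "residual_energy": 0.6, "buffer_state": 0.8, "geo_progress": 0.7},
    "uav_b": {"link_quality": 0.7, "residual_energy": 0.9, "buffer_state": 0.5, "geo_progress": 0.9},
}
best = max(candidates, key=lambda n: relay_score(candidates[n]))
print(dict(zip(criteria, w.round(3))), "-> best relay:", best)
```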
Abstract:
Animal tracking has been addressed by different initiatives over the last two decades. Most of them rely on satellite connectivity on every single node and lack energy-saving strategies. This paper presents several new contributions to the tracking of dynamic, heterogeneous, asynchronous networks (primary nodes with GPS and secondary nodes with a kinetic generator), motivated by the animal tracking paradigm with random transmissions. A simple approach based on connectivity and coverage intersection is compared with more sophisticated algorithms based on ad hoc implementations of distributed Kalman-based filters that integrate measurement information using consensus principles in order to provide enhanced accuracy. Several simulations are included, varying the coverage range, the random behavior of the kinetic generator (modeled as a Poisson process) and the periodic activation of GPS. In addition, the study is complemented with hardware developments and implementations on commercial off-the-shelf equipment that show the feasibility of running these proposals on real hardware.
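To make the random-transmission and consensus ideas concrete, here is a minimal sketch, assuming a Poisson model for the kinetic-generator transmissions and a plain average-consensus iteration; the rates, positions and adjacency below are illustrative, and the paper itself uses consensus-based distributed Kalman filters rather than this simplified averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secondary nodes transmit when their kinetic generator harvests enough energy;
# these random transmissions are modeled as a Poisson process, so we draw
# exponential inter-transmission times (the rate is illustrative).
rate_per_hour = 2.0
inter_times = rng.exponential(1.0 / rate_per_hour, size=10)
tx_times = np.cumsum(inter_times)

# Toy consensus step: each node holds a position estimate and repeatedly
# averages with its neighbours; this is the principle exploited by the
# consensus-based distributed filters discussed above (details differ).
positions = np.array([[0.0, 0.0], [1.0, 0.4], [0.6, 1.2], [1.4, 1.0]])  # estimates
adjacency = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
epsilon = 0.2  # consensus gain; must stay below 1/max_degree for stability
for _ in range(50):
    laplacian_term = adjacency @ positions - adjacency.sum(1, keepdims=True) * positions
    positions = positions + epsilon * laplacian_term  # x <- x - eps * L x

print("transmission times (h):", np.round(tx_times, 2))
print("consensus estimate:", np.round(positions.mean(axis=0), 3))
```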
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and intelligently will include at least a module for POS tagging. The more an application needs to understand the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e., it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based <Subject, Predicate, Object> triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. In the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e.
standardisation) of their tags (both their representation and their meaning), and of their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based <Subject, Predicate, Object> triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the <Unit, Attribute, Value> triple structure recommended for annotations in these works (which is isomorphic to the <Subject, Predicate, Object> triple structure used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet. Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag's annotation scheme, a standard-based abstract scheme for the hybrid (linguistically-motivated and ontology-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag's scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag's (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag's (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag's annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools that annotate at several different levels. Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag's annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger's schema, a concrete instance of OntoTag's abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely Bitext's DataLexica (http://www.bitext.com/EN/datalexica.asp), LACELL's (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), Connexor's FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag's underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger's configuration, a concrete instance of OntoTag's abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section). Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL's tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger's schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and these had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3. MAIN RESULTS: ASSESSMENT OF ONTOTAG'S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. CONFIRMED by the development of: o OntoTag's annotation scheme, o OntoTag's annotation architecture, o OntoTagger's (XML, RDF, OWL) annotation schemas, o OntoTagger's configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. H.3 was CONFIRMED by means of the development of OntoTagger's ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor's FDG, Bitext's DataLexica and LACELL's tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level; o Integration of morphosyntactic, syntactic and semantic annotations. H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). CONFIRMED by means of the development of OntoTagger's RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, even though these other tools might be built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach. CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. REJECTED by OntoTagger's experiments and by the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 had already been rejected when OntoTag's ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multilevel annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag's architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice. Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag's linguistic ontologies). On the one hand, OntoTag's network of ontologies consists of: the Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; the Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; the Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; and the OIO (OntoTag's Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology describing the most elementary vocabulary used in the area of annotation. On the other hand, OntoTag's ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and SIMPLE European projects or by the ISO/TC 37 committee. As far as morphosyntactic annotations are concerned, OntoTag's ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; as for syntactic annotations, OntoTag's ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; regarding semantic annotations, OntoTag's ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead. The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag's ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL's tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2. As an immediate result, this implies that (a) this type of combination architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both of them leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies, which enables fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and then used to populate this domain ontology. The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments should determine how well our approach works in a different domain or a different language, such as French, English, or German. In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level.
Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated with one another; and (ii) how they could be integrated with the annotations associated with the other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multilevel) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag's annotation scheme and OntoTagger's annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a great step forward towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
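As a toy illustration of two of the ideas above (same-level combination of tagger outputs, and tool-independent annotations expressed as ontology-based <Subject, Predicate, Object> triples), the following sketch majority-votes the POS tags of three hypothetical taggers and serializes the result with rdflib. The namespace, class and property names are placeholders and do not reproduce OntoTag's actual ontologies.

```python
from collections import Counter
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical namespace standing in for OntoTag-style linguistic ontologies
# (LUO/LAO/LVO); the real class and property IRIs are not reproduced here.
ONT = Namespace("http://example.org/ontotag-like#")

# Toy outputs of three POS taggers for the same tokens (values illustrative).
tagger_outputs = {
    "tagger_a": {"films": "NOUN", "opens": "VERB"},
    "tagger_b": {"films": "NOUN", "opens": "NOUN"},
    "tagger_c": {"films": "NOUN", "opens": "VERB"},
}

def combine(token: str) -> str:
    """Majority vote across taggers: the kind of same-level combination that
    the thesis reports reduces individual tagger errors."""
    votes = Counter(out[token] for out in tagger_outputs.values())
    return votes.most_common(1)[0][0]

g = Graph()
for i, token in enumerate(["films", "opens"], start=1):
    unit = URIRef(f"http://example.org/doc1#w{i}")
    g.add((unit, RDF.type, ONT.Word))               # <Subject, Predicate, Object>
    g.add((unit, ONT.hasForm, Literal(token)))
    g.add((unit, ONT.hasPOS, ONT[combine(token)]))  # combined, tool-independent tag

print(g.serialize(format="turtle"))
```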
Abstract:
We propose a method to measure the irreversibility of real-valued time series which combines two different tools: the horizontal visibility algorithm and the Kullback-Leibler divergence. The method maps a time series to a directed network according to a geometric criterion. The degree of irreversibility of the series is then estimated by the Kullback-Leibler divergence (i.e. the distinguishability) between the in- and out-degree distributions of the associated graph. The method is computationally efficient and does not require any ad hoc symbolization process. We find that the method correctly distinguishes between reversible and irreversible stationary time series, including analytical and numerical studies of its performance for: (i) reversible stochastic processes (uncorrelated and Gaussian linearly correlated), (ii) irreversible stochastic processes (a discrete flashing ratchet in an asymmetric potential), (iii) reversible (conservative) and irreversible (dissipative) chaotic maps, and (iv) dissipative chaotic maps in the presence of noise. Two alternative graph functionals, the degree and the degree-degree distributions, can be used as the argument of the Kullback-Leibler divergence. The former is simpler and more intuitive and can be used as a benchmark, but in the case of an irreversible process with null net current, the degree-degree distribution has to be considered to identify the irreversible nature of the series.
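A minimal implementation of the measure described above might look as follows: it builds the directed horizontal visibility graph of a series and estimates irreversibility as the Kullback-Leibler divergence between the empirical out- and in-degree distributions. The zero-count handling and the choice of test series are practical assumptions, not prescriptions from the paper.

```python
import numpy as np
from collections import Counter

def dhvg_degrees(x):
    """Directed horizontal visibility graph: node i links forward to j > i when
    x[i], x[j] > x[k] for every k strictly between them. Returns (out, in) degrees."""
    n = len(x)
    k_out = np.zeros(n, dtype=int)
    k_in = np.zeros(n, dtype=int)
    for i in range(n):
        top = -np.inf                       # running max of intermediate values
        for j in range(i + 1, n):
            if x[i] > top and x[j] > top:   # horizontal visibility criterion
                k_out[i] += 1
                k_in[j] += 1
            top = max(top, x[j])
            if x[j] >= x[i]:                # nothing beyond j is visible from i
                break
    return k_out, k_in

def kld(p_counts, q_counts):
    """KL divergence D(P_out || P_in) over the common support; restricting to
    bins where both distributions are non-zero is a practical choice here."""
    keys = set(p_counts) | set(q_counts)
    p = np.array([p_counts.get(k, 0) for k in keys], dtype=float)
    q = np.array([q_counts.get(k, 0) for k in keys], dtype=float)
    p, q = p / p.sum(), q / q.sum()
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(1)
series = rng.normal(size=5000)   # reversible white noise -> estimate close to 0
k_out, k_in = dhvg_degrees(series)
print("estimated irreversibility (KLD):",
      round(kld(Counter(k_out.tolist()), Counter(k_in.tolist())), 4))
```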
Abstract:
We present a cluster-based routing protocol in which a backup cluster head is used to improve the convergence of the clusters and of the network. A discrete-event simulator is used for the implementation and simulation of this hierarchical routing protocol, called the Backup Cluster Head Protocol (BCHP). Finally, a comparative analysis with the Ad Hoc On-Demand Distance Vector (AODV) [1] routing protocol and the Cluster Based Routing Protocol (CBRP) [2] shows that BCHP improves the convergence and availability of the network.
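The core failover idea can be sketched as follows: each cluster keeps a pre-designated backup head, and members switch to it when the primary head stops sending beacons, avoiding a full re-election. This is only a toy illustration of the principle; BCHP's actual election, maintenance and routing rules are not reproduced here.

```python
import time

class Cluster:
    """Toy illustration of the backup-cluster-head idea: members fail over to a
    pre-designated backup head instead of triggering a full re-election."""

    def __init__(self, head: str, backup: str, timeout: float = 3.0):
        self.head, self.backup = head, backup
        self.timeout = timeout
        self.last_beacon = time.monotonic()

    def on_beacon(self, node_id: str) -> None:
        # Record liveness of the primary cluster head.
        if node_id == self.head:
            self.last_beacon = time.monotonic()

    def active_head(self) -> str:
        # If the primary head has been silent longer than the timeout, the
        # backup takes over immediately, which is what keeps the cluster
        # (and the routes through it) converged.
        if time.monotonic() - self.last_beacon > self.timeout:
            return self.backup
        return self.head

cluster = Cluster(head="n7", backup="n3", timeout=3.0)
cluster.on_beacon("n7")
print(cluster.active_head())   # "n7" while the primary is alive
```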
Abstract:
Digital services and communications in vehicular scenarios provide the essential assets to improve road transport in several ways, such as reducing accidents, improving traffic efficiency and optimizing the transport of goods and people. Vehicular communications typically rely on VANETs (Vehicular Ad hoc Networks). In these networks, vehicles communicate with each other without the need for infrastructure. VANETs are mainly oriented to disseminating information to the vehicles in a certain geographic area for time-critical services such as safety warnings, but they present very challenging requirements that have not yet been successfully fulfilled. Some of these challenges are: channel saturation due to the simultaneous radio access of many vehicles, routing protocols in topologies that vary rapidly, minimum quality of service assurance, and security mechanisms to efficiently detect and neutralize malicious attacks. Vehicular services can be classified into four important groups: Safety, Efficiency, Sustainability and Infotainment. The benefits of these services for the transport sector are clear, but many technological and business challenges need to be faced before a real mass-market deployment. Current service delivery platforms are not prepared to fulfil the needs of this complex environment, with restrictive requirements due to the criticality of some services. To overcome this situation, we propose a solution called VISIONS (Vehicular communication Improvement: Solution based on IMS Operational Nodes and Services). VISIONS leverages the IMS subsystem and NGN enablers, and follows the CALM reference architecture standardized by ISO for transport systems. It also avoids the use of Road Side Units (RSUs), reducing complexity and the high costs of deployment and maintenance. We demonstrate the benefits in the following areas: 1. VANET efficiency. VISIONS provides a mechanism for the vehicles to access valuable information from IMS and its capabilities through a cellular channel. This efficiency improvement occurs in two relevant areas: a. Routing mechanisms. These protocols are responsible for carrying information from one vehicle to another (or to a group of vehicles) using multihop mechanisms. We do not propose a new algorithm, but rather the use of VANET topology information provided through our solution to enrich the performance of these protocols. b. Security. Many aspects of security (privacy, key management, authentication, access control, revocation mechanisms, etc.) are not resolved in vehicular communications. Our solution efficiently disseminates revocation information to neutralize malicious nodes in the VANET. 2. Service delivery platform. It is based on extended enablers, reference architectures, standard protocols and open APIs. By following this approach, we reduce the costs and resources required for service development, deployment and maintenance. To quantify these benefits in VANETs, we provide an analytical model of the system and simulate our solution in realistic scenarios. The simulation results demonstrate how VISIONS improves the performance of relevant routing protocols and is more efficient at neutralizing security attacks than the widely proposed solutions based on RSUs. Finally, we design an innovative social network service based on our platform, explaining how VISIONS facilitates the deployment and usage of complex capabilities.
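As an illustration of the security benefit described above (dissemination of revocation information through the cellular/IMS channel instead of RSUs), the sketch below keeps a revocation set updated from a hypothetical feed and filters incoming V2V messages against it. The URL, identifiers and message fields are placeholders, not part of the actual VISIONS platform.

```python
from dataclasses import dataclass

# Hypothetical revocation feed: in the approach described above, up-to-date
# revocation information reaches the vehicle over the cellular channel rather
# than from Road Side Units. The URL and fields below are placeholders.
REVOCATION_FEED = "https://ims-platform.example.org/vanet/crl"

@dataclass
class V2VMessage:
    sender_id: str
    payload: bytes

class RevocationFilter:
    def __init__(self) -> None:
        self.revoked: set[str] = set()

    def update(self, revoked_ids) -> None:
        """Merge the latest revocation list fetched via the cellular channel."""
        self.revoked |= set(revoked_ids)

    def accept(self, msg: V2VMessage) -> bool:
        """Drop messages coming from nodes known to be revoked/malicious."""
        return msg.sender_id not in self.revoked

flt = RevocationFilter()
flt.update(["veh-042"])                                     # e.g. parsed from REVOCATION_FEED
print(flt.accept(V2VMessage("veh-007", b"hazard ahead")))   # True
print(flt.accept(V2VMessage("veh-042", b"fake warning")))   # False
```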
Abstract:
Advances in hardware make it possible to collect huge volumes of data, and applications are emerging that must provide information in near-real time, e.g., patient monitoring or the health monitoring of water pipes. The data streaming model emerges to support these applications, overcoming the traditional store-then-process model. In the store-then-process model, data is stored before being queried; in streaming systems, data is processed on arrival, producing continuous responses without ever being stored in its entirety. This new vision imposes challenges for processing data on the fly: 1) responses must be produced continuously whenever new data arrives in the system; 2) data is accessed only once and is generally not kept in its entirety; and 3) the per-item processing time needed to produce a response must be low. Two models exist for computing continuous responses, the evolving model and the sliding-window model; the latter fits better with certain applications because it considers only the most recently received data rather than the whole history. In recent years, research on data stream mining has focused mainly on the evolving model. In the sliding-window model, the body of work is smaller, since these algorithms must not only be incremental but must also delete the information that expires as the window slides, while still meeting the three challenges above. Clustering is one of the fundamental data mining tasks: given a data set, the goal is to find representative groups that provide a concise description of the data being processed. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Due to the massive amounts of data that must be processed by such applications (up to millions of events per second), centralized solutions may be unable to cope with the processing-time restrictions and have to resort to discarding data during load peaks. To avoid this loss of data, stream processing must be distributed; in particular, clustering algorithms must be adapted to environments in which the data is distributed. In streaming, research focuses not only on designs for general tasks, such as clustering, but also on finding new approaches that better fit particular scenarios. As an example, an ad hoc grouping mechanism turns out to be more adequate than the traditional k-means problem for defense against Distributed Denial of Service (DDoS) attacks. This thesis contributes to streaming clustering in both centralized and distributed environments. We have designed a centralized clustering algorithm and, in an extensive evaluation, shown its ability to discover high-quality clusters in low time compared with other state-of-the-art solutions. In addition, we have worked on a data structure that significantly reduces the required memory while keeping the error of the computations under control at all times. Our work also provides two protocols for distributing the clustering computation. We analyse two key features: the impact on clustering quality when the computation is distributed, and the conditions required to reduce the processing time with respect to the centralized solution. Finally, we have developed a clustering-based framework for the detection of DDoS attacks. In this last case, we characterize the type of attacks detected and evaluate the efficiency and effectiveness of mitigating the attack's impact.
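A generic sketch of the sliding-window bookkeeping discussed above: each cluster keeps time-bucketed summary statistics (count, linear sum, squared sum) so that expired contributions can be dropped as the window slides and the centroid can still be computed incrementally. This illustrates the general principle only; the thesis's actual data structure and clustering algorithm differ.

```python
from collections import deque
import numpy as np

class SlidingWindowCluster:
    """Per-cluster summary kept as time-bucketed (timestamp, count, linear sum,
    squared sum) statistics, so contributions can be forgotten when the window
    slides. A generic sketch of the idea, not the thesis's exact structure."""

    def __init__(self, window: float):
        self.window = window
        self.buckets = deque()  # entries: (timestamp, n, linear_sum, squared_sum)

    def add(self, t: float, x: np.ndarray) -> None:
        # Squared sums are kept because they would also support radius/variance.
        self.buckets.append((t, 1, x.copy(), x * x))
        self._expire(t)

    def _expire(self, now: float) -> None:
        # Drop every bucket that has fallen out of the sliding window.
        while self.buckets and self.buckets[0][0] < now - self.window:
            self.buckets.popleft()

    def centroid(self) -> np.ndarray:
        n = sum(b[1] for b in self.buckets)
        linear_sum = np.sum([b[2] for b in self.buckets], axis=0)
        return linear_sum / n

rng = np.random.default_rng(2)
cluster = SlidingWindowCluster(window=10.0)
for t in range(30):
    cluster.add(float(t), rng.normal(loc=[1.0, -1.0], scale=0.1))
print("centroid over the last 10 time units:", np.round(cluster.centroid(), 2))
```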
Abstract:
Of the many state-of-the-art methods for cooperative localization in wireless sensor networks (WSN), only very few adapt well to mobile networks. The main problems of the well-known algorithms based on nonparametric belief propagation (NBP) are their high communication cost and inefficient sampling techniques. Moreover, they either do not use smoothing or only apply it offline. Therefore, in this article, we propose more flexible and efficient variants of NBP for cooperative localization in mobile networks. In particular, we provide: i) optional 1-lag smoothing done almost in real time, ii) a novel low-cost communication protocol based on package approximation and censoring, iii) higher robustness of the standard mixture importance sampling (MIS) technique, and iv) a higher amount of information in the importance densities by using the population Monte Carlo (PMC) approach or an auxiliary variable. Through extensive simulations, we confirm that all the proposed techniques outperform the standard NBP method.
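The basic building block behind such particle-based localization can be illustrated with a single importance-sampling update: particles drawn from a proposal are weighted by a Gaussian range-measurement likelihood to anchors and averaged into a position estimate. The geometry, noise level and proposal below are assumptions for illustration; they do not reproduce the authors' NBP variants, censoring protocol or PMC scheme.

```python
import numpy as np

rng = np.random.default_rng(3)

# Anchors with known positions and noisy range measurements to the mobile node
# (values illustrative).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([4.0, 6.0])
sigma = 0.5
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 3)

# Proposal: particles drawn around a rough prior guess of the position.
particles = rng.normal(loc=[5.0, 5.0], scale=2.0, size=(5000, 2))

# Importance weights from the Gaussian range-measurement likelihood.
pred = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
log_w = -0.5 * np.sum(((pred - ranges) / sigma) ** 2, axis=1)
w = np.exp(log_w - log_w.max())
w /= w.sum()

estimate = (w[:, None] * particles).sum(axis=0)   # weighted posterior mean
print("estimated position:", np.round(estimate, 2))
```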
Abstract:
This paper presents an overview of preliminary results of investigations within the WHERE2 Project on identifying promising avenues for location-aided enhancements to wireless communication systems. The wide-ranging contributions are organized according to the following targeted systems: cellular networks, mobile ad hoc networks (MANETs) and cognitive radio. Location-based approaches are found to alleviate significant signaling overhead in various forms of modern communication paradigms that are very information-hungry in terms of channel state information at the transmitter(s), and this at a reasonable cost, given the ubiquitous availability of location information in recent wireless standards and smartphones. Location tracking furthermore opens the new perspective of slow fading prediction.
Abstract:
This Ph.D. thesis aims to study and analyse ancient cobbled ways, from pre-Roman times onwards, from both the historical and the technical points of view. Quantifying the Roman character of a way is an important target for most researchers of ancient roads, as well as for archaeologists, because of the information it offers about the use of the territory, the layout of routes in antiquity and the associated traffic. Quantifying the Roman character of a way is not a simple task, because it involves a multitude of influencing factors that are alive and changing as a result of the dynamism inherent to the way itself. Regarding the historical aspect, a description and analysis of the evolution of roads in the Iberian Peninsula, from their origins until the middle of the twentieth century, has been carried out, which makes it possible to differentiate the road network according to its historical period. Likewise, the thesis describes and analyses: wheels and carts since their origins, especially in the Roman period, including measurements of different types of cart held by institutions and private collections; transport techniques in antiquity; and the characteristics of the road infrastructure of the Roman period, detailing general aspects of its engineering and construction techniques. From the technical point of view, the methodological approach has been to define an Index of the Roman Character of the Way (IRC) for dating cobbled Roman routes, based on a multi-criteria analysis of the different factors that characterize their Roman character. An exhaustive field study has been carried out, with the corresponding data collection on the routes. A series of laboratory tests has been performed with a purpose-built prototype that simulates the wear of the stone produced by carts jolting along the cobbled way, in order to provide a dating hypothesis for the way. A statistical treatment of the sample of data measured in the field has been carried out. The concept of rut elasticity has also been defined, using the notion of the elastic derivative. As for the results obtained: the Index of the Roman Character of the Way (IRC) has been calculated for a series of cobbled routes in order to quantify their Roman character, obtaining results consistent with the prior hypotheses about the dating of those routes; and an exponential model has been formulated for the number of loaded passages, relating it to the rut elasticity and its slenderness, which has been used to relate rut elasticity to the geology of the rock. A line of research has been opened on the estimation of historical traffic on ancient ways, considering that the traffic volume over time on a stretch of road is related to the rut elasticity values of that stretch through the rock typology. In short, this Ph.D. thesis provides a method to systematize the study of ancient ways, as well as to date them and to estimate the evolution of their traffic.
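Purely as an illustration of how a multi-criteria index such as the IRC can be assembled, the sketch below combines normalised factor scores with weights. The factor names, weights and scores are hypothetical placeholders; the thesis defines its own criteria and their relative importance.

```python
# Hypothetical illustration of a multi-criteria index of the IRC kind: each
# factor is scored in [0, 1] and combined with a weight. The names, weights
# and scores below are placeholders, not the thesis's actual criteria.
factors = {
    "paving_technique": (0.30, 0.8),        # (weight, normalised score for one way)
    "rut_wear_pattern": (0.25, 0.6),
    "associated_roman_finds": (0.25, 0.9),
    "route_continuity": (0.20, 0.7),
}

def composite_index(factors: dict) -> float:
    """Weighted average of the normalised factor scores."""
    total_weight = sum(w for w, _ in factors.values())
    return sum(w * s for w, s in factors.values()) / total_weight

print(f"IRC-style composite score: {composite_index(factors):.2f}")
```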
Resumo:
This research analyses the influence that intangible resources (Knowledge Management, Brand, Organizational Reputation and Social Responsibility) exert on the strategic management of higher education institutions (HEIs) and their impact on innovation processes through the added value transferred to the environment. The study matters because HEIs are responsible for providing the knowledge and the new findings in technological innovation that are transferred to the productive fabric of their regions, generating economic growth and improvements in quality of life. The study is framed within the tenets of the Theory of Resources and Capabilities (TRC) and of intangibles, which underpin the research. A system of hypotheses was formulated and subdivided into two paths of influence: a direct path, which analyses the direct influences that intangible resources exert on HEI results, and an indirect path, which studies the influences that strategically managed intangible resources exert on HEI results.
The research is non-experimental and exploratory, based on the paradigm that seeks to explain a phenomenon (the dependent variable) through the behaviour of the independent variables. It is a cross-sectional, quantitative study that attempts to describe the causes of the phenomenon. Structural equation modelling (SEM) was used to determine the influences, or causal relationships, underlying the variables. The study population consisted of the 857 members of HEI governing boards included in the database managed by the Consortium of Engineering Schools of Latin America and the Caribbean and the Universidad Politécnica de Madrid, with a significant sample of 250 board members, representing 29.42% of the population. Primary and secondary sources were used: primary data were collected with an ad hoc questionnaire validated by experts, and secondary data were extracted from the database of the Ibero-American Network of Science and Technology (RICYT).
The results indicate that the direct influences that the intangible resources (Knowledge Management, Brand, Organizational Reputation and Social Responsibility) can exert are not significant, so all the hypotheses of the direct path were rejected. According to the test of the submodel representing the indirect path, the influences exerted by the intangibles Knowledge Management and Organizational Reputation, when managed strategically, on the value-added results generated by HEIs and transferred to the environment were significant. Not all the model hypotheses are supported, however, because the constructs Brand and Social Responsibility were not significant. The theories on intangibles framed within the TRC are not entirely robust and require greater effort from researchers to define the constructs to be used; likewise, the gap between the theories underpinning the research and their empirical testing is corroborated once again. The results also show that HEIs focus their activity on academia above their other functions, attaching greater importance to teaching, research and organizational reputation. Owing to their non-business nature, HEIs maintain a management philosophy centred on the generation and transmission of knowledge, which builds reputation, while the intangibles Brand and Social Responsibility are set aside because they are considered not to add value to internal processes or to be embedded in other intangible resources. In conclusion, the study confirms the backwardness of strategic management in Latin American HEIs and the lack of application of basic principles of modern management that would contribute to the efficient handling of all their resources and the achievement of their objectives. This points to the need to modernize the strategic vision of HEIs and to create better mechanisms to recognize, maintain, protect and develop the intangible resources they possess, combining resources optimally so as to maximize the creation of value for themselves and for the society to which they belong.
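As a rough illustration of how the two hypothesis paths described above can be written down for estimation with structural equation modelling, the following sketch specifies a simplified mediation-style model in Python, assuming the semopy package is available; the construct names, the indicator items (x1..x12, y1..y3) and the single combined specification are hypothetical placeholders and simplifications, not the study's actual model or data.

    # Minimal SEM sketch (hypothetical constructs and indicators), assuming the semopy package.
    import pandas as pd
    from semopy import Model

    MODEL_DESC = """
    # measurement model (placeholder questionnaire items)
    km         =~ x1 + x2 + x3
    brand      =~ x4 + x5
    reputation =~ x6 + x7 + x8
    csr        =~ x9 + x10
    strategy   =~ x11 + x12
    results    =~ y1 + y2 + y3

    # structural model: direct paths from the intangibles and an indirect path via strategic management
    strategy ~ km + brand + reputation + csr
    results  ~ km + brand + reputation + csr + strategy
    """

    def fit_sem(data: pd.DataFrame) -> pd.DataFrame:
        """Fit the sketch model and return loadings, path coefficients and p-values."""
        model = Model(MODEL_DESC)
        model.fit(data)          # maximum-likelihood estimation by default
        return model.inspect()

    # usage (assuming survey.csv holds the observed items as columns):
    # print(fit_sem(pd.read_csv("survey.csv")))

In a setting like the one described above, it is the significance (or lack of it) of the estimated path coefficients that supports or rejects each hypothesis.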
Resumo:
Travel patterns in European cities are becoming increasingly complex, driven mainly by sustained population growth and the trend towards urban sprawl. As a result, many public transport users have to combine several modes or transport services to complete their daily trips. The major challenge facing cities is therefore to improve and increase mobility while at the same time reducing problems such as congestion, accidents and pollution (COM, 2006). A basic principle for achieving sustainable mobility is to reduce the inconvenience and discomfort involved in transferring between services. In this respect, public transport interchanges play a key role as nodes of the urban transport network, and the quality of the service provided in them has a direct influence on travellers' daily experience. As noted by Terzis and Last (2000), an efficient urban transport interchange must be competitive and, at the same time, attractive to users, since their physical experiences and psychological reactions are significantly influenced by the design and operation of the interchange. However, there are still no European standards or regulations specifying what these interchanges should be like.
This doctoral thesis provides knowledge and analysis tools aimed at planners and interchange managers, so that they can better understand how interchanges perform and manage the available resources accordingly. It also identifies the key factors in the design and operation of urban transport interchanges and proposes some general planning guidelines based on them. Since users' perceptions are particularly important for defining appropriate policies for interchanges, an ad hoc travellers' satisfaction survey was designed and carried out in 2013 at three European urban transport interchanges: Moncloa (Madrid, Spain), Kamppi (Helsinki, Finland) and Ilford Railway Station (London, United Kingdom). In summary, the thesis highlights the ambivalent nature of urban transport interchanges, as nodes of the transport network and as places in their own right where users spend time, and proposes some policy recommendations to make interchanges more attractive to users.
Resumo:
Nowadays no company, however small, is conceivable without some kind of IT service. Every company faces the challenge of undertaking projects to develop or contract IT services that support its business processes. On the other hand, unless IT services are isolated from any network, which is practically impossible today, no service, nor the project that develops it, can guarantee 100% security. Companies therefore handle a duality: developing secure IT products and services, and constantly keeping their IT services in a secure state. In most companies, the management of projects for developing IT services is addressed by applying practices used in other projects and recommended for that purpose by the most widely recognized frameworks and standards. These frameworks generally include, among their processes, risk management aimed at meeting deadlines and costs and, sometimes, the functionality of the product or service. However, they overlook the security aspects of the product or service (confidentiality, integrity and availability) that are needed during project development. Moreover, once the service has been delivered, when a failure related to these security aspects arises at the operational level, ad hoc solutions are applied. This causes large losses and sometimes endangers the continuity of the company itself. The problem grows every day in all kinds of companies, and SMEs, because of their lack of awareness of the problem and their scarcity of methodological and technical resources, are the most vulnerable.
For all these reasons, this doctoral thesis has a double objective. First, to demonstrate the need for a framework that, integrated with other frameworks and standards, is simple to apply to projects of different types and sizes and guides SMEs in managing projects for the secure development, and subsequent security maintenance, of their IT services. Second, to meet this need by developing a framework that offers a generic process model applicable to different project patterns and a library of security assets to guide SMEs through project management for secure development. The process model of the proposed framework describes activities at the three organizational levels of the company (strategic, tactical and operational). It is based on the continuous improvement cycle (PDCA) and on the Security by Design philosophy proposed by Siemens. For each activity, its specific practices, inputs, outputs, actions, roles, KPIs and applicable techniques are detailed. These specific practices may or may not be applied, at the discretion of the project manager and according to the state of the company and of the project to be developed, thereby establishing different process patterns. Two SMEs were chosen to validate the framework, one from the services sector and one from the ICT sector. The process model was applied to the same project pattern, which responds to needs common to both companies, and the process pattern was assessed in the selected projects in both companies before and after its application. The results of the study, after application in both companies, validated the process pattern as an improvement to project management for secure IT development in SMEs.
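To make the shape of the process model described above more concrete, the sketch below shows one possible way of encoding an activity, with its inputs, outputs, actions, roles, KPIs and techniques, and of assembling a project-specific process pattern by selecting which specific practices to apply. The field names, the example activity and the selection rule are illustrative assumptions, not the actual asset library or activity catalogue of the proposed framework.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class Level(Enum):            # the three organizational levels named above
        STRATEGIC = "strategic"
        TACTICAL = "tactical"
        OPERATIONAL = "operational"

    @dataclass
    class Activity:               # one activity of the process model (illustrative fields)
        name: str
        level: Level
        inputs: List[str]
        outputs: List[str]
        actions: List[str]
        roles: List[str]
        kpis: List[str]
        techniques: List[str]
        mandatory: bool = False   # specific practices may be skipped at the project manager's discretion

    @dataclass
    class ProcessPattern:         # a pattern = the subset of activities chosen for a given project
        name: str
        activities: List[Activity] = field(default_factory=list)

    def build_pattern(name: str, catalogue: List[Activity], selected: List[str]) -> ProcessPattern:
        """Select activities from the catalogue, always keeping the mandatory ones."""
        chosen = [a for a in catalogue if a.mandatory or a.name in selected]
        return ProcessPattern(name=name, activities=chosen)

    # usage: a tiny catalogue with one PDCA-style planning activity
    catalogue = [
        Activity(
            name="Plan secure development",
            level=Level.TACTICAL,
            inputs=["business case", "risk register"],
            outputs=["security plan"],
            actions=["identify security requirements", "select controls"],
            roles=["project manager", "security officer"],
            kpis=["% of requirements with security criteria"],
            techniques=["threat modelling"],
            mandatory=True,
        )
    ]
    pattern = build_pattern("SME baseline", catalogue, selected=[])

Representing the catalogue this way makes the "apply or skip" decision explicit per activity, which is how different process patterns could be derived from one generic model.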
Resumo:
Pervasive (ubiquitous) computing is extending from field-specific environments to everyday use; the Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive computing from other kinds of computing lies in how contextual information is used. Classical applications either do not use contextual information at all or use only a small part of it, integrating it in an ad hoc fashion through an application-specific implementation. The reason for this one-off treatment is the difficulty of sharing context across applications. Indeed, what counts as contextual information depends on the type of application: for an image editor, the image is the information and its metadata, such as the time of the shot or the camera settings, are the context, whereas for a file system the image together with the camera settings is the information and the context is the metadata external to the file, such as the modification date or the last-access timestamp. Contextual information is therefore hard to share, and a communication middleware that supports context explicitly greatly eases application development in pervasive computing. At the same time, the use of context must not be mandatory; otherwise, compatibility with applications that do not use it would be lost, reducing such a middleware to a mere context middleware.
SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two elements: the Context and the Context Function. The context represents the contextual information proper, attached to the message to be sent or required by the subscriber in order to receive notifications, whereas the context function is evaluated over the publisher's and the subscriber's contexts to decide whether the current message and context are useful to the subscriber. This decouples context-management logic from the context function itself, increasing the flexibility of communication between applications. In fact, since the default context is empty, classical and context-aware applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. Possible semantic mismatches nevertheless remain, since they depend on how each application interprets the data and cannot be resolved by an agnostic third party.
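The role of the context and the context function can be illustrated with the toy matcher below: a notification is delivered only if the subscription's content filter matches and the context function, evaluated over the publisher's and subscriber's contexts, returns true. The names, types and the distance-based function are invented for the example and are not SilboPS's actual API.

    from typing import Any, Callable, Dict

    Context = Dict[str, Any]          # contextual information attached to publishers/subscribers
    ContextFn = Callable[[Context, Context], bool]

    def always_true(pub_ctx: Context, sub_ctx: Context) -> bool:
        """Default context function: ignoring context keeps classical pub/sub behaviour."""
        return True

    def within_100m(pub_ctx: Context, sub_ctx: Context) -> bool:
        """Example context function: deliver only if publisher and subscriber are close (flat 2D approximation)."""
        dx = pub_ctx.get("x", 0.0) - sub_ctx.get("x", 0.0)
        dy = pub_ctx.get("y", 0.0) - sub_ctx.get("y", 0.0)
        return (dx * dx + dy * dy) ** 0.5 <= 100.0

    def should_deliver(event: Dict[str, Any], pub_ctx: Context,
                       content_filter: Callable[[Dict[str, Any]], bool],
                       sub_ctx: Context, ctx_fn: ContextFn = always_true) -> bool:
        """Content-based matching extended with a context function, in the spirit of the Context/Context Function idea above."""
        return content_filter(event) and ctx_fn(pub_ctx, sub_ctx)

    # usage: a temperature subscription that also requires spatial proximity
    event = {"type": "temperature", "value": 31.5}
    deliver = should_deliver(
        event,
        pub_ctx={"x": 10.0, "y": 20.0},
        content_filter=lambda e: e["type"] == "temperature" and e["value"] > 30,
        sub_ctx={"x": 40.0, "y": 60.0},
        ctx_fn=within_100m,
    )
    print(deliver)   # True: the filter matches and the nodes are about 50 m apart

Keeping always_true as the default mirrors the empty-default-context idea above: classical, context-unaware subscriptions keep working unchanged.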
The IoT environment poses not only context challenges but also scalability challenges. The number of sensors, the volume of data they produce and the number of applications that may be interested in processing those data keep growing. Today's answer to this need is cloud computing, but it requires applications that are able not merely to scale but to scale elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports partitioning of internal state [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not provide elastic monitoring out of the box. This leaves a two-sided problem: how an application can scale elastically, and how that application can be monitored so as to know when it should scale out or in.
E-SilboPS is the elastic version of SilboPS and fits naturally as a solution to the monitoring problem thanks to its content-based publish/subscribe nature; unlike other solutions [5], it scales efficiently to meet workload demand without overprovisioning or underprovisioning resources. It is also based on a newly designed algorithm that shows how to add elasticity to an application under different constraints on state: stateless, isolated state with external coordination, and shared state with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer as the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs relative to the adjacent ones. This gives insight into the current trend of the whole system and allows predicting whether the next configuration would offset its cost against the gain it brings in notification throughput. Particular attention was paid to evaluating deployments of equal cost, in order to find out which one is best for a given workload. As a final analysis, the overhead introduced by the different configurations was estimated in order to identify the primary limiting factor for throughput. This helps to determine the sequential part and the base overhead [26] of an optimal deployment compared with a suboptimal one. Depending on the type of workload, the estimate can be as low as 10% for a local optimum or as high as 60%, the latter occurring when a configuration that is overprovisioned for the given workload is deployed. This Karp-Flatt metric estimation is important for the management system because it indicates in which direction (scaling out or in) the deployment has to be changed in order to improve its performance, instead of simply applying a scale-out policy.
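For reference, the Karp-Flatt experimentally determined serial fraction mentioned above is derived from the measured speedup and the number of parallel workers. The minimal sketch below shows the generic formula with an invented example; it is not tied to the thesis's actual measurements.

    def karp_flatt(speedup: float, p: int) -> float:
        """Experimentally determined serial fraction: e = (1/speedup - 1/p) / (1 - 1/p)."""
        return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

    # usage: a measured speedup of 2.5 on 4 workers gives e = 0.2,
    # i.e. roughly 20% of the execution behaves as serial part plus base overhead
    print(round(karp_flatt(2.5, 4), 3))

A value of e that grows as workers are added suggests that scaling further out will not pay off, which is the kind of signal the management system described above can act on.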
Resumo:
The protection of groundwater is a priority of EU environmental policy. The EU has therefore established a framework for the prevention and control of pollution which includes provisions for assessing the chemical status of groundwater and for reducing the presence of pollutants in it. The measures include:
- criteria for assessing the chemical status of groundwater bodies
- criteria for identifying significant and sustained upward trends in pollutant concentrations and for defining starting points for reversing those trends
- prevention and limitation of indirect discharges of pollutants resulting from percolation through the soil or subsoil.
The basic tools for developing these policies are the Water Framework Directive and the Groundwater Daughter Directive. According to them, groundwater is considered to be in good chemical status if:
- the measured or predicted concentration of nitrates does not exceed 50 mg/l and that of active ingredients of pesticides, their metabolites and reaction products does not exceed 0.1 µg/l (0.5 µg/l for the total of pesticides measured)
- the concentration of certain hazardous substances is below the threshold values set by the Member States, covering at least ammonium, arsenic, cadmium, chloride, lead, mercury, sulphates, trichloroethylene and tetrachloroethylene
- the concentration of any other pollutant fits the definition of good chemical status set out in Annex V of the Water Framework Directive
- where the value corresponding to a quality standard or a threshold value is exceeded, an investigation confirms, among other points, the absence of significant risk to the environment.
Analysing the statistical behaviour of the data from the monitoring and control network can be considerably complex, because such data usually show positive skew and an asymmetric distribution, owing to the presence of outliers and of different soil types and mixtures of pollutants. In addition, the distribution of certain components in groundwater may include concentrations below the detection limit, or may be non-stationary because of linear or seasonal trends. In the first case, the unknown values have to be estimated using procedures that vary with the percentage of values below the detection limit and with the number of applicable detection limits. In the second case, the trends have to be removed before hypothesis tests are run on the residuals.
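As an illustration of the kind of procedure involved when part of a series lies below the detection limit, the sketch below compares simple substitution at half the detection limit with a lognormal maximum-likelihood fit to left-censored data. These are generic textbook options chosen only to make the idea concrete; as noted above, the choice of procedure depends on the censoring percentage and on the number of detection limits, and the thesis does not necessarily use these particular estimators.

    import numpy as np
    from scipy import optimize, stats

    def substitute_half_dl(values, detection_limit):
        """Simple substitution: replace non-detects (coded as NaN) with DL/2 before computing statistics."""
        v = np.asarray(values, dtype=float)
        return np.where(np.isnan(v), detection_limit / 2.0, v)

    def lognormal_mle_censored(values, detection_limit):
        """ML fit of a lognormal to left-censored data: detects contribute the pdf, non-detects the cdf at the DL."""
        v = np.asarray(values, dtype=float)
        detects = np.log(v[~np.isnan(v)])
        n_censored = int(np.isnan(v).sum())
        log_dl = np.log(detection_limit)

        def neg_loglik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll = stats.norm.logpdf(detects, mu, sigma).sum()
            ll += n_censored * stats.norm.logcdf(log_dl, mu, sigma)
            return -ll

        res = optimize.minimize(neg_loglik, x0=[np.mean(detects), 0.0], method="Nelder-Mead")
        mu, sigma = res.x[0], np.exp(res.x[1])
        return np.exp(mu + sigma ** 2 / 2)   # mean of the fitted lognormal

    # usage: chloride-like data in mg/l with non-detects coded as NaN (detection limit = 5 mg/l)
    sample = [12.0, np.nan, 7.5, np.nan, 22.0, 9.1, np.nan, 15.3]
    print(substitute_half_dl(sample, 5.0).mean())
    print(lognormal_mle_censored(sample, 5.0))

The contrast between the two estimates is what motivates choosing the procedure according to how heavily the series is censored.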
This thesis aims to establish the statistical basis for the rigorous analysis of monitoring-network data in order to assess the chemical status of groundwater bodies, to identify upward trends in pollutant concentrations and to detect significant deteriorations, both where a quality standard has been set by the competent environmental authority and where it has not. In order to design a methodology covering the variety of existing cases, data from the official network for monitoring and control of the chemical status of groundwater of the Ministry of Agriculture, Food and Environment (Magrama) were analysed. Then, since River Basin Management Plans are the basic tool of the Directives, the Júcar river basin was selected, given its designation as a pilot basin in the Common Implementation Strategy (CIS) of the European Commission; the main objective of the ad hoc working groups created for that purpose was to implement the Groundwater Daughter Directive and the related elements of the Water Framework Directive, in particular data collection at the monitoring stations and the preparation of the first River Basin Management Plan. Given the size of the area, and in order to analyse a single groundwater body (the management unit defined in the Directives), the Plana de Vinaroz-Peñíscola was selected as the pilot area, and the procedures developed were applied there to determine the chemical status of that body. The data examined do not in general contain pollutant concentrations associated with point sources, so the study used concentration values of the most common determinands, namely nitrates and chlorides. The strategy designed combines trend analysis with the construction of confidence intervals where a quality standard exists and of prediction intervals where no standard exists or the standard has been exceeded. Values below the detection limit were treated in an analogous way, taking the values available in the Plana de Sagunto pilot area and simulating different degrees of censoring in order to compare the resulting intervals with those obtained from the real data and thus verify the effectiveness of the method. The end result is a general methodology that integrates the existing cases and makes it possible to define the chemical status of a groundwater body, to verify the existence of significant impacts on groundwater quality and to evaluate the effectiveness of the plans of measures adopted within the framework of the River Basin Management Plan.
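To illustrate the interval-based part of the strategy described above, the sketch below computes a one-sided t-based upper confidence limit for the mean and an upper prediction limit for a future observation of a nitrate series, and compares them with the 50 mg/l standard. It assumes approximately normal, trend-free data and invented values, so it is only a schematic example, not the thesis's exact procedure.

    import numpy as np
    from scipy import stats

    def upper_limits(sample, alpha=0.05):
        """One-sided upper confidence limit for the mean and upper prediction limit for the next observation."""
        x = np.asarray(sample, dtype=float)
        n = x.size
        mean, sd = x.mean(), x.std(ddof=1)
        t = stats.t.ppf(1 - alpha, df=n - 1)
        ucl = mean + t * sd / np.sqrt(n)            # confidence limit on the mean
        upl = mean + t * sd * np.sqrt(1 + 1.0 / n)  # prediction limit for a single future value
        return ucl, upl

    # usage: yearly nitrate concentrations (mg/l) at one monitoring point, checked against the 50 mg/l standard
    nitrate = [38.2, 41.0, 39.5, 44.1, 42.3, 40.8, 45.0, 43.6]
    ucl, upl = upper_limits(nitrate)
    print(f"upper confidence limit: {ucl:.1f} mg/l, upper prediction limit: {upl:.1f} mg/l, standard: 50 mg/l")

The prediction limit is always wider than the confidence limit, which is why it is the natural choice when no standard exists or the standard has already been exceeded and individual future observations are what matter.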