857 results for automatic assessment tool


Relevance: 30.00%

Abstract:

Current trends in the European Higher Education Area (EHEA) are moving towards continuous evaluation of students, replacing the traditional evaluation based on a single test or exam. This trend, together with the increase in the number of students in Engineering Schools in recent years, requires evaluation procedures to be modified so that they remain compatible with educational and research activities. This work presents a methodology for the automatic generation of questions. These questions can be used as self-assessment questions by the student and/or as test questions by the teacher. The proposed approach is based on parametric questions, formulated as multiple-choice questions and generated and supported by common spreadsheet and word-processing programs. Through this approach, every teacher can apply the proposed methodology without using programs or tools different from those normally used in his/her daily activity.
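
A rough sketch of the parametric-question idea follows; the Ohm's-law template, parameter ranges and distractor rules (typical student mistakes) are invented for illustration, not taken from the paper:

```python
import random

# A sketch of parametric multiple-choice question generation. The template
# and distractor rules below are illustrative assumptions.
def generate_question(seed=None):
    rng = random.Random(seed)
    voltage = rng.choice([9, 12, 24])       # parameters drawn per student
    resistance = rng.choice([2, 3, 4, 6])
    correct = voltage / resistance          # I = V / R
    distractors = [voltage * resistance,    # multiplied instead of divided
                   resistance / voltage,    # swapped the ratio
                   correct + 1]             # off-by-one slip
    options = [correct] + distractors
    rng.shuffle(options)
    stem = (f"A {resistance} ohm resistor is connected to a {voltage} V "
            f"source. What is the current, in A?")
    return stem, options, options.index(correct)

stem, options, answer = generate_question(seed=42)
print(stem)
for i, opt in enumerate(options):
    print(f"  ({chr(97 + i)}) {opt:g}")
```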

Relevance: 30.00%

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web

1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS

Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate. Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.

2. GOALS OF THE PRESENT WORK

As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e., it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools acquired by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of tools and their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance.
This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning), and their format or syntax. Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary, and structured by means of ontology-based triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows:
Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation.
Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the triple structures used in the context of the Semantic Web).
Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations).
Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies).
Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies).
Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically motivated and ontology-based) annotation of texts.
Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft.
Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft.
Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme.
Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft.
Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme.
Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation.
Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation.
Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels.
Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely:
• Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp),
• LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php),
• Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and
• EuroWordNet (Vossen et al., 1998).
This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels.
Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well).
Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes.
Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal).
Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered.
Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics).

3. MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES

The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed.
H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other.
• CONFIRMED by the development of:
o OntoTag’s annotation scheme,
o OntoTag’s annotation architecture,
o OntoTagger’s (XML, RDF, OWL) annotation schemas,
o OntoTagger’s configuration.
H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised.
• CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools.
H.3 Standardisation should ease:
H.3.1: The interoperation of linguistic tools.
H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations.
• H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration:
o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger);
o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level;
o Integration of morphosyntactic, syntactic and semantic annotations.
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples).
• CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas.
H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with those coming from other tools operating at the same level, even if these other tools follow a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach.
• CONFIRMED by the results yielded by the evaluation of OntoTagger.
H.6 Each linguistic level can be managed and annotated independently.
• REJECTED on the basis of OntoTagger’s experiments and the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations.
In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation.

4. OTHER MAIN RESULTS AND CONTRIBUTIONS

First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies).
• On the one hand, OntoTag’s network of ontologies consists of:
− The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text;
− The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO;
− The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take;
− The OIO (OntoTag’s Integration Ontology), which includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO, and which can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee:
− As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard;
− As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft;
− Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead;
− The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies.
Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular:
1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer.
2. As an immediate result, this implies that (a) this type of combined architecture configuration can be applied in order to significantly improve the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this could be regarded as a way of constructing more robust and more accurate POS tagging systems.
Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former with respect to their annotation rate, the latter with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology.
• The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%. On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%.
• These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained by the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experiments are needed to test how our approach works in a different domain or a different language, such as French, English, or German.
• In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation.
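The error-reduction mechanism behind hypothesis H.5 and the "Third" contribution above can be illustrated with a toy majority vote over several taggers' outputs; the tag sequences below are invented, and OntoTagger's actual merging logic is more elaborate:

```python
from collections import Counter

# Combine POS annotations from several taggers by majority vote, showing
# why combination can beat each tool separately.
def combine_pos(annotations):
    """annotations: list of tag sequences, one per tagger."""
    combined = []
    for tags in zip(*annotations):            # align taggers token by token
        tag, _count = Counter(tags).most_common(1)[0]
        combined.append(tag)                  # ties resolve to first seen
    return combined

tagger_a = ["DET", "NOUN", "VERB", "NOUN"]
tagger_b = ["DET", "NOUN", "VERB", "ADJ"]
tagger_c = ["DET", "ADJ",  "VERB", "NOUN"]
print(combine_pos([tagger_a, tagger_b, tagger_c]))
# ['DET', 'NOUN', 'VERB', 'NOUN'] -- each single tagger made one error
```
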
Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers could be integrated together, and (ii) how they could be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation. And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a significant scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, dealing with the standardisation of linguistic annotations and resources).
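
As a rough illustration of the triple-structured, ontology-based annotations the scheme requires, the following sketch encodes one token's morphosyntactic features as RDF triples with the rdflib library; the namespaces and property names are invented stand-ins, not OntoTag's actual vocabulary:

```python
from rdflib import Graph, Literal, Namespace

# One token's morphosyntactic annotation as ontology-based RDF triples.
ONTO = Namespace("http://example.org/ontotag-like#")
DOC = Namespace("http://example.org/doc1#")

g = Graph()
g.bind("onto", ONTO)
g.bind("doc", DOC)

token = DOC["token_17"]                       # a token in the annotated text
g.add((token, ONTO.hasWordForm, Literal("annotations")))
g.add((token, ONTO.hasLemma, Literal("annotation")))
g.add((token, ONTO.hasPOS, ONTO.CommonNoun))  # value taken from the ontology
g.add((token, ONTO.hasNumber, ONTO.Plural))

print(g.serialize(format="turtle"))
```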

Relevance: 30.00%

Abstract:

One of the key scrutiny issues of the coming energy era will be the environmental impact of fusion facilities managing about one kilogram of tritium. The potential change of committed-dose regulatory limits, together with the implementation of nuclear design principles (As Low As Reasonably Achievable, ALARA; Defence in Depth, D-i-D) for fusion facilities, could strongly impact the cost of deployment of coming fusion technology. Accurate modeling of environmental tritium transport forms (HT, HTO) for the assessment of the dosimetric impact of a fusion facility in accidental scenarios is therefore of major interest. This paper considers different short-term releases of tritium forms (HT and HTO) to the atmosphere from a potential fusion reactor located in the Mediterranean Basin. This work models in detail the dispersion of tritium forms and the dosimetric impact of selected environmental patterns, both inland and at sea, using real topography and forecast meteorological data fields (ECMWF/FLEXPART). We explore specific values of the HTO/HT ratio at different release levels and examine the influence of meteorological conditions on the HTO behavior over 24 hours. For this purpose we have used a tool consisting of a coupled Lagrangian ECMWF/FLEXPART model, useful to follow real-time releases of tritium at 10, 30 and 60 meters, together with hourly observations of wind (and in some cases precipitation), to provide a short-range approximation of tritium cloud behavior. We have assessed inhalation doses, as well as HTO/HT ratios, in a representative set of cases during winter 2010 and spring 2011 for the three air levels.
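
The dose bookkeeping behind such an assessment can be sketched as follows; the breathing rate and dose coefficients are indicative adult values quoted as assumptions for illustration, not the paper's parameters, and the concentration series is invented:

```python
# Committed inhalation dose from a time series of ground-level tritium air
# concentrations (e.g. dispersion-model output).
BREATHING_RATE = 1.2 / 3600.0   # m^3/s (about 1.2 m^3/h, light activity)
DCF_HTO = 1.8e-11               # Sv per inhaled Bq of HTO (indicative)
DCF_HT = 1.8e-15                # Sv per inhaled Bq of HT (far less radiotoxic)

def inhalation_dose(concentrations_bq_m3, dt_s, dcf):
    """Integrate intake over equally spaced concentration samples."""
    intake_bq = sum(c * BREATHING_RATE * dt_s for c in concentrations_bq_m3)
    return intake_bq * dcf      # committed effective dose, Sv

hourly_hto = [50.0, 120.0, 300.0, 80.0]   # Bq/m^3 over 4 hours (invented)
print(f"HTO dose: {inhalation_dose(hourly_hto, 3600, DCF_HTO):.2e} Sv")
print(f"HT dose:  {inhalation_dose(hourly_hto, 3600, DCF_HT):.2e} Sv")
```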

Relevance: 30.00%

Abstract:

Transport climate change impacts have become a worldwide concern. The use of Intelligent Transport Systems (ITS) could contribute to a more effective use of resources in toll road networks. Management of toll plazas is central to the reduction of greenhouse gas (GHG) emissions, as it is there that bottlenecks and congestion occur. This study focuses on management strategies aimed at reducing the climate change impacts of toll plazas by managing toll collection systems. These strategies are based on the use of different collection system technologies – Electronic Toll Collection (ETC) and Open Road Tolling (ORT) – and on queue management. The carbon footprint of various toll plazas is determined by a proposed integrated methodology which estimates the carbon dioxide (CO2) emissions of the different operational stages at toll plazas (deceleration, service time, acceleration, and queuing) for the different toll collection systems. To validate the methodology, two main-line toll plazas of a Spanish toll highway were evaluated. The findings reveal that the application of new technologies to toll collection systems is an effective management strategy from an environmental point of view. The case studies revealed that ORT systems lead to savings of up to 70% of CO2 emissions at toll plazas, while ETC systems save 20% compared to manual ones. Furthermore, queue management can offer a 16% emissions saving when queue time is reduced by 116 seconds. The integrated methodology provides an efficient environmental management tool for toll plazas, and the use of new technologies is key to their decarbonization.
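
A back-of-the-envelope sketch of the stage-based CO2 accounting described above; all emission rates and stage durations below are invented placeholders, not the study's measured values:

```python
# Per-vehicle CO2 as a sum over the operational stages at a toll plaza.
STAGES = ("deceleration", "service_time", "acceleration", "queuing")

EMISSION_G_PER_S = {"deceleration": 1.5, "service_time": 0.8,
                    "acceleration": 3.2, "queuing": 0.9}  # g CO2 per second

def vehicle_co2(stage_durations_s):
    """stage_durations_s: mapping stage -> seconds spent in that stage."""
    return sum(EMISSION_G_PER_S[s] * stage_durations_s.get(s, 0.0)
               for s in STAGES)

manual = {"deceleration": 12, "service_time": 25,
          "acceleration": 15, "queuing": 60}   # stop-and-pay lane
ort = dict.fromkeys(STAGES, 0)                 # free flow: no stop at the plaza
print(f"manual lane: {vehicle_co2(manual):.0f} g CO2/vehicle")
print(f"ORT lane:    {vehicle_co2(ort):.0f} g CO2/vehicle")
```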

Relevance: 30.00%

Abstract:

In this article, the authors examine the current status of the different elements that make up the landscape of the municipality of Olias del Rey in Toledo (Spain). A methodology is proposed for the study of rural roads, farming activity and local hunting management. We used Geographic Information Technologies (GIT) in order to optimize spatial information, including the design of a Geographic Information System (GIS). For the acquisition of field data we used a "mobile mapping" vehicle equipped with GNSS, LiDAR, digital cameras and an odometer. The main objective is the integration and geovisualization of this geoinformation to provide a fundamental tool for rural planning and management.

Relevance: 30.00%

Abstract:

On 12 January 2010, an earthquake hit the city of Port-au-Prince, capital of Haiti. The earthquake reached a magnitude of Mw 7.0 and the epicenter was located near the town of Léogâne, approximately 25 km west of the capital. The earthquake occurred in the boundary region separating the Caribbean plate and the North American plate. This plate boundary is dominated by left-lateral strike-slip motion and compression, and accommodates about 20 mm/yr of slip, with the Caribbean plate moving eastward with respect to the North American plate (DeMets et al., 2000). Initially, the location and focal mechanism of the earthquake seemed to involve straightforward accommodation of oblique relative motion between the Caribbean and North American plates along the Enriquillo-Plantain Garden fault zone (EPGFZ); however, Hayes et al. (2010) combined seismological observations, geologic field data and space geodetic measurements to show that, instead, the rupture process involved slip on multiple faults. Moreover, the authors showed that the remaining shallow shear strain will be released in future surface-rupturing earthquakes on the EPGFZ. In December 2010, a Spanish cooperation project financed by the Technical University of Madrid (UPM) started with a clear objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, emergency and resource management. One of the tasks of the project was devoted to the vulnerability assessment of the current building stock and the estimation of seismic risk scenarios. The study was carried out by following the capacity spectrum method as implemented in the software SELENA (Molina et al., 2010). The method requires a detailed classification of the building stock into predominant building typologies (according to the materials of the structure and walls, the number of stories and the age of construction) and the use of the building (residential, commercial, etc.). Combined with knowledge of the soil characteristics of the city, the simulation of a scenario earthquake then provides the seismic risk scenarios (damaged buildings). The initial results of the study show that one of the largest sources of uncertainty comes from the difficulty of achieving a precise classification of building typologies, due to the prevalence of informal construction without any regulations. It is also observed that, although the occurrence of big earthquakes usually helps to decrease the vulnerability of cities, due to the collapse of low-quality buildings and the reconstruction of seismically designed ones, in the case of Port-au-Prince the seismic risk in most of the districts remains high, showing very vulnerable areas. Therefore, the local authorities have to direct their efforts towards the quality control of new buildings, the reinforcement of the existing building stock, the establishment of seismic codes, and the development of emergency planning, also through the education of the population.
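
The building-stock bookkeeping behind such damage scenarios can be sketched roughly as follows; the typologies, counts and damage-state probabilities are invented placeholders, not SELENA's actual fragility inputs:

```python
# Expected damage-state counts for a scenario earthquake, aggregated over
# a classified building stock. All figures below are illustrative.
DAMAGE_STATES = ("none", "slight", "moderate", "extensive", "complete")

FRAGILITY = {  # typology -> P(damage state | scenario), rows sum to 1
    "unreinforced_masonry": (0.05, 0.15, 0.30, 0.30, 0.20),
    "rc_frame_low_rise":    (0.20, 0.30, 0.30, 0.15, 0.05),
    "timber_light":         (0.40, 0.35, 0.15, 0.08, 0.02),
}

STOCK = {  # typology -> number of buildings in the district (invented)
    "unreinforced_masonry": 1200,
    "rc_frame_low_rise": 450,
    "timber_light": 300,
}

def scenario_damage(stock, fragility):
    totals = dict.fromkeys(DAMAGE_STATES, 0.0)
    for typology, count in stock.items():
        for state, p in zip(DAMAGE_STATES, fragility[typology]):
            totals[state] += count * p      # expected number of buildings
    return totals

for state, n in scenario_damage(STOCK, FRAGILITY).items():
    print(f"{state:>10}: {n:7.0f} buildings")
```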

Relevance: 30.00%

Abstract:

Today, the building sector alone accounts for 40% of the total energy consumption in the European Union (EU). In most EU member states, about 70–90% of the buildings were constructed at least 20 years ago. Because of this, these buildings perform worse in terms of energy efficiency than new ones that comply with current regulations. As a consequence, action on the existing building stock is needed, developing specific assessment and advisory methods in order to reduce total energy consumption. This article presents a procedure allowing the classification and characterization of existing building facades. It can help researchers achieve in-depth knowledge of facade construction and, therefore, of the facades' thermal behavior. Once this is known, the most appropriate upgrading strategies can be established with the purpose of reducing energy demand. Furthermore, the classified facade typologies have been checked for compliance with current and future Spanish regulations and, according to the results obtained, a series of upgrading strategies for both the opaque and the translucent parts of the facade have been proposed. In conclusion, this procedure helps to select the most appropriate improvement measures for each type of facade in order to comply with current and future Spanish regulations. The proposed method has been tested on a specific neighborhood of Madrid for a selected construction period, between 1950 and 1980, but it could be applied to any other city.
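
The thermal check underlying such a classification can be sketched with the standard U-value calculation; the layer data and the regulatory limit below are illustrative assumptions, not values from the article:

```python
# Thermal transmittance (U-value) of a layered facade, compared with a limit.
R_SI, R_SE = 0.13, 0.04         # interior/exterior surface resistances, m^2K/W

def u_value(layers):
    """layers: list of (thickness_m, conductivity_W_mK) tuples."""
    r_total = R_SI + R_SE + sum(d / k for d, k in layers)
    return 1.0 / r_total        # W/(m^2 K)

brick_facade_1960s = [
    (0.115, 0.85),              # brick leaf
    (0.05, 0.50),               # air gap treated as an equivalent layer
    (0.015, 0.30),              # interior plaster
]
u = u_value(brick_facade_1960s)
U_LIMIT = 0.60                  # hypothetical regulatory limit, W/(m^2 K)
print(f"U = {u:.2f} W/m2K -> "
      f"{'complies' if u <= U_LIMIT else 'needs upgrading'}")
```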

Relevance: 30.00%

Abstract:

Leaf nitrogen and leaf surface area influence the exchange of gases between terrestrial ecosystems and the atmosphere, and play a significant role in the global cycles of carbon, nitrogen and water. The purpose of this study is to use field-based and satellite remote-sensing-based methods to assess leaf nitrogen pools in five diverse European agricultural landscapes located in Denmark, Scotland (United Kingdom), Poland, the Netherlands and Italy. REGFLEC (REGularized canopy reFLECtance) is an advanced image-based inverse canopy radiative transfer modelling system which has shown proficiency for regional mapping of leaf area index (LAI) and leaf chlorophyll (CHLl) using remote sensing data. In this study, high-spatial-resolution (10–20 m) remote sensing images acquired from the multispectral sensors aboard the SPOT (Satellite Pour l'Observation de la Terre) satellites were used to assess the capability of REGFLEC for mapping spatial variations in LAI and CHLl, and their relation to leaf nitrogen (Nl) data, in five diverse European agricultural landscapes. REGFLEC is based on physical laws and includes an automatic model parameterization scheme which makes the tool independent of field data for model calibration. In this study, REGFLEC performance was evaluated using LAI measurements and non-destructive measurements (using a SPAD meter) of leaf-scale CHLl and Nl concentrations in 93 fields representing crop- and grasslands of the five landscapes. Furthermore, empirical relationships between field measurements (LAI, CHLl and Nl) and five spectral vegetation indices (the Normalized Difference Vegetation Index, the Simple Ratio, the Enhanced Vegetation Index-2, the Green Normalized Difference Vegetation Index, and the green chlorophyll index) were used to assess field data coherence and to serve as a comparison basis for assessing REGFLEC model performance. The field measurements showed strong vertical CHLl gradient profiles in 26% of the fields, which affected REGFLEC performance as well as the relationships between spectral vegetation indices (SVIs) and field measurements. When the range of surface types increased, the REGFLEC results were in better agreement with field data than the empirical SVI regression models. Selecting only homogeneous canopies with uniform CHLl distributions as reference data for evaluation, REGFLEC was able to explain 69% of LAI observations (rmse = 0.76), 46% of measured canopy chlorophyll contents (rmse = 719 mg m−2) and 51% of measured canopy nitrogen contents (rmse = 2.7 g m−2). Better results were obtained for individual landscapes, except for Italy, where REGFLEC performed poorly due to a lack of dense vegetation canopies at the time of satellite recording; the presence of vegetation is needed to parameterize the REGFLEC model. Combining REGFLEC- and SVI-based model results to minimize errors for a "snap-shot" assessment of total leaf nitrogen pools in the five landscapes, results varied from 0.6 to 4.0 t km−2. Differences in leaf nitrogen pools between landscapes are attributed to seasonal variations, extents of agricultural area, species variations, and spatial variations in nutrient availability. In order to facilitate a substantial assessment of variations in Nl pools and their relation to landscape-based nitrogen and carbon cycling processes, time series of satellite data are needed.
The upcoming Sentinel-2 satellite mission will provide new multispectral narrowband data at high spatio-temporal resolution, which are expected to further improve remote sensing capabilities for mapping LAI, CHLl and Nl.
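
The empirical SVI baseline mentioned above can be sketched as follows; the reflectances and LAI values are invented for illustration (REGFLEC itself, being a physically based inverse model, is far more involved):

```python
import numpy as np

# NDVI from red/NIR reflectance, regressed against field-measured LAI.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

nir = np.array([0.42, 0.46, 0.51, 0.38, 0.55])   # canopy NIR reflectance
red = np.array([0.08, 0.06, 0.05, 0.10, 0.04])   # canopy red reflectance
lai_field = np.array([2.1, 2.9, 3.8, 1.6, 4.4])  # field-measured LAI

x = ndvi(nir, red)
slope, intercept = np.polyfit(x, lai_field, 1)   # simple empirical model
r = np.corrcoef(x, lai_field)[0, 1]
print(f"LAI ~ {slope:.1f} * NDVI + {intercept:.1f} (r^2 = {r**2:.2f})")
```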

Relevance: 30.00%

Abstract:

This document explains the process of designing a methodology to evaluate Educational Innovation Groups, which are structures created within universities in the context of adaptation to the European Higher Education Area. These groups are committed to introducing innovation into educational processes as a means to improve educational quality. The assessment design is based on a participatory planning model called Working With People, which tries to integrate the perspectives of all stakeholders. The aim of the methodology is to be a useful tool for the university to evaluate the work done by the groups, to encourage members to continue improving the quality of teaching, and to reorient activities to fulfill the emergent needs that the university faces.

Relevance: 30.00%

Abstract:

Irrigators face the risk of not having enough water to meet their crops' demand. There are different mechanisms to cope with this risk, including water markets (option contracts) or insurance. A farmer will purchase such an instrument when the expected utility change derived from it is positive. This paper presents a theoretical assessment of the farmer's expected utility under two different option contracts, a drought insurance scheme, and a combination of an option contract and the insurance. We analyze the conditions that determine the farmer's preference for one instrument or the other and perform a numerical application that is relevant for a Spanish region.
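
The adoption rule described above ("purchase when the expected utility change is positive") can be sketched as follows, assuming a CRRA utility function with invented probabilities, incomes, premium and indemnity:

```python
# Adoption rule: purchase the instrument if it raises expected utility.
RISK_AVERSION = 2.0             # CRRA coefficient (assumption; must not be 1)

def utility(income):
    return income ** (1 - RISK_AVERSION) / (1 - RISK_AVERSION)

def expected_utility(scenarios):
    """scenarios: list of (probability, net_income) pairs."""
    return sum(p * utility(w) for p, w in scenarios)

# two states: (normal year, drought year), net incomes in EUR/ha (invented)
no_tool = [(0.8, 1000.0), (0.2, 200.0)]
insurance = [(0.8, 950.0), (0.2, 700.0)]  # premium paid; indemnity in drought

gain = expected_utility(insurance) - expected_utility(no_tool)
print(f"expected-utility change: {gain:+.2e} -> "
      f"{'purchase' if gain > 0 else 'do not purchase'}")
```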

Relevance: 30.00%

Abstract:

The present work covers the first validation efforts of the EVA Tracking System for the assessment of minimally invasive surgery (MIS) psychomotor skills. Instrument movements were recorded for 42 surgeons (4 experts, 22 residents, 16 novice medical students) and analyzed for a box trainer peg transfer task. Construct validity was established for 7/9 motion analysis parameters (MAPs). Concurrent validity was determined for 8/9 MAPs against the TrEndo Tracking System. Finally, automatic determination of surgical proficiency based on the MAPs was sought by 3 different approaches to supervised classification (LDA, SVM, ANFIS), with accuracy results of 61.9%, 83.3% and 80.9% respectively. Results reflect not only on the validation of EVA for skills assessment, but also on the relevance of motion analysis of instruments in the determination of surgical competence.
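
A minimal sketch of the supervised-classification step, using an SVM as one of the three approaches mentioned; the MAP feature matrix below is random stand-in data, not the recorded instrument motions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Proficiency classification from motion analysis parameters (MAPs).
rng = np.random.default_rng(0)
n_surgeons, n_maps = 42, 9
X = rng.normal(size=(n_surgeons, n_maps))     # one MAP vector per surgeon
y = rng.integers(0, 2, size=n_surgeons)       # 0 = novice, 1 = proficient

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validated accuracy
print(f"accuracy: {scores.mean():.1%} +/- {scores.std():.1%}")
```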

Relevance: 30.00%

Abstract:

Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery.

Relevance: 30.00%

Abstract:

Planning sustainable urban mobility is a complex task involving a high degree of uncertainty due to the long-term planning horizon, the wide spectrum of potential policy packages, the need for effective and efficient implementation, the large geographical scale, the necessity to consider economic, social, and environmental goals, and the traveller's response to the various action courses and their political acceptability (Shiftan et al., 2003). Moreover, with the inevitable trends in motorisation and urbanisation, the demand for land and mobility in cities is growing dramatically. Consequently, the problems of traffic congestion, environmental deterioration, air pollution, energy consumption, community inequity, etc., are becoming more and more critical for society (EU, 2011). Certainly, this course is not sustainable in the long term. To address this challenge and achieve sustainable development, a long-term strategic urban plan, with its potentially important implications, should be established. This thesis contributes to the assessment of long-term urban mobility by establishing an innovative methodology for optimizing and evaluating two types of transport demand management (TDM) measures. The new methodology aims at relaxing the utility-based decision-making assumption by embedding anticipated-regret and combined utility-regret decision mechanisms in an integrated transport planning framework. The proposed methodology includes two major aspects: 1) Construction of policy scenarios with a single measure or combined TDM policy packages using a survey method incorporating regret theory. The purpose of building the TDM scenarios in this work is to address the specific implementation in terms of time frame and geographic scale for each TDM measure. In total, 13 TDM scenarios are built in terms of the most desirable, the most expected and the least-regret choice by means of a two-round Delphi-based survey. 2) Development of a combined utility-regret analysis framework based on multicriteria decision analysis (MCDA). This assessment framework is used to compare the contribution of the TDM scenarios towards sustainable mobility and to determine the best scenario considering not only the objective utility value obtained from the utility-based MCDA, but also a regret value that is calculated via a regret-based MCDA. The objective function of the utility-based MCDA is integrated in a land use and transport interaction model and is used for optimizing and assessing the long-term impacts of the constructed TDM scenarios. A regret-based model, called the reference-dependent regret model (RDRM), is adapted to analyse the contribution of each TDM scenario from a subjective point of view. The suggested methodology is implemented and validated in the case of Madrid. It defines a comprehensive technical procedure for assessing the strategic effects of transport demand management measures, and constitutes a useful, transparent and flexible planning tool for both planners and decision-makers.
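
A toy sketch of the combined utility-regret scoring idea follows; the criteria, weights and scenario scores are invented, and the actual RDRM is considerably richer than this Savage-style regret proxy:

```python
# Weighted-sum utility plus a regret value measured against the best
# attainable level of each criterion (the reference point).
CRITERIA_WEIGHTS = {"economic": 0.4, "social": 0.3, "environmental": 0.3}

SCENARIOS = {  # scenario -> normalized score per criterion (0..1, invented)
    "pricing_package": {"economic": 0.7, "social": 0.5, "environmental": 0.8},
    "transit_package": {"economic": 0.5, "social": 0.8, "environmental": 0.7},
    "do_nothing":      {"economic": 0.6, "social": 0.4, "environmental": 0.3},
}

def utility(scores):
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in scores.items())

def regret(name):
    best = {c: max(s[c] for s in SCENARIOS.values()) for c in CRITERIA_WEIGHTS}
    return sum(CRITERIA_WEIGHTS[c] * (best[c] - SCENARIOS[name][c])
               for c in CRITERIA_WEIGHTS)

for name in SCENARIOS:
    print(f"{name:16} utility={utility(SCENARIOS[name]):.2f} "
          f"regret={regret(name):.2f}")
```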

Relevance: 30.00%

Abstract:

Sustaining irrigated agriculture to meet food production needs while maintaining aquatic ecosystems is at the heart of many policy debates in various parts of the world, especially in arid and semi-arid areas. Researchers and practitioners are increasingly calling for integrated approaches, and policy-makers are progressively supporting the inclusion of ecological and social aspects in water management programs. This paper contributes to this policy debate by providing an integrated economic-hydrologic modeling framework that captures the socio-economic and environmental effects of various policy initiatives and climate variability. This modeling integration includes a risk-based economic optimization model and a hydrologic water management simulation model that have been specified for the Middle Guadiana basin, a vulnerable drought-prone agro-ecological area with highly regulated river systems in southwest Spain. Namely, two key water policy interventions were investigated: the implementation of minimum environmental flows (supported by the European Water Framework Directive, EU WFD), and a reduction in the legal amount of water delivered for irrigation (a planned measure included in the new Guadiana River Basin Management Plan, GRBMP, still under discussion). Results indicate that current patterns of excessive water use for irrigation in the basin may put environmental flow demands at risk, jeopardizing the WFD's goal of restoring the 'good ecological status' of water bodies by 2015. Conflicts between environmental and agricultural water uses will be stressed during prolonged dry episodes, and particularly in summer low-flow periods, when there is an important increase in crop irrigation water requirements. Securing minimum stream flows would entail a substantial reduction in irrigation water use for rice cultivation, which might affect the profitability and economic viability of small rice-growing farms located upstream in the river. The new GRBMP could contribute to balancing competing water demands in the basin and to increasing economic water productivity, but might not be sufficient to ensure the provision of environmental flows as required by the WFD. A thorough revision of the basin's water use concession system for irrigation seems to be needed in order to bring the GRBMP in line with the WFD objectives. Furthermore, the study illustrates that social, economic, institutional, and technological factors, in addition to bio-physical conditions, are important issues to be considered when designing and developing water management strategies. The research initiative presented in this paper demonstrates that hydro-economic models can explicitly integrate all these issues, constituting a valuable tool that could assist policy makers in implementing sustainable irrigation policies.
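
The economic side of such a hydro-economic coupling can be caricatured as a linear program choosing crop areas under land and environmentally constrained water limits; all coefficients below are invented placeholders, not the basin's data:

```python
from scipy.optimize import linprog

# Maximize farm margin subject to land and a water allocation that already
# reserves environmental flows.
crops = ["rice", "maize", "olive"]
margin = [1800.0, 900.0, 600.0]        # EUR per ha (hypothetical)
water_use = [12000.0, 6000.0, 1500.0]  # m^3 per ha (hypothetical)
land_cap = 100.0                       # ha available
water_cap = 400_000.0                  # m^3 after environmental-flow reserve

# linprog minimizes, so negate the margins to maximize them
res = linprog(
    c=[-m for m in margin],
    A_ub=[[1.0, 1.0, 1.0], water_use],
    b_ub=[land_cap, water_cap],
    bounds=[(0, None)] * 3,
)
for crop, area in zip(crops, res.x):
    print(f"{crop:6}: {area:6.1f} ha")
print(f"margin: {-res.fun:,.0f} EUR")
```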

Relevance: 30.00%

Abstract:

The impedance-based stability-assessment method has turned out to be a very effective tool, and its usage is rapidly growing in different applications, ranging from conventional interconnected dc/dc systems to grid-connected renewable energy systems. The results are sometimes given as a certain forbidden region in the complex plane, out of which the impedance ratio, known as the minor-loop gain, shall stay to ensure robust stability. This letter discusses the circle-like forbidden region occupying minimum area in the complex plane, defined by applying the maximum peak criterion, a well-known concept in control engineering. The investigation shows that the circle-like forbidden region will ensure robust stability only if the impedance-based minor-loop gain is determined at the very input or output of each subsystem within the interconnected system. Experimental evidence is provided based on a small-scale dc/dc distributed system.
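
A minimal numeric sketch of the check implied by the maximum peak criterion: bounding the sensitivity peak |1/(1+L)| by Ms is equivalent to requiring the minor-loop gain L to keep a distance of at least 1/Ms from the critical point -1, which is exactly the circle-like forbidden region. The first-order impedance models and the Ms value below are invented stand-ins, not the letter's measured system:

```python
import numpy as np

# Check that the minor-loop gain Zout/Zin stays outside the circle of
# radius 1/Ms centred at -1 over a frequency sweep.
MS = 2.0                                  # allowed sensitivity peak (assumption)
f = np.logspace(1, 5, 400)                # 10 Hz .. 100 kHz
s = 1j * 2 * np.pi * f

z_out = 0.5 + s * 20e-6                   # source converter output impedance
z_in = 10.0 / (1 + s * 2e-4)              # load converter input impedance
minor_loop = z_out / z_in

dist = np.abs(minor_loop + 1)             # distance to the critical point -1
ok = np.all(dist > 1 / MS)
print(f"min distance to -1: {dist.min():.3f}, "
      f"{'outside' if ok else 'inside'} the forbidden circle (r = {1 / MS})")
```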