845 results for: declarative, procedural, and reflective (DPR) model
Abstract:
The transition from last glacial to deglacial and subsequently to modern interglacial climate conditions was accompanied by abrupt shifts in the palaeoceanographic setting of the subpolar North Atlantic. Knowledge about the role that sea ice coverage played during these rapid climate reversals is limited, since most marine sediment cores from the higher latitudes provide only a coarse temporal resolution and often poorly preserved microfossils. Here we present a highly resolved reconstruction of the sea ice conditions that characterised the eastern Fram Strait - a key area for water mass exchange between the Arctic Ocean and the North Atlantic - over the past 30 ka BP. This reconstruction is based on the distribution of the sea ice biomarker IP25 and of phytoplankton-derived biomarkers in a sediment core from the continental slope of western Svalbard. During the late glacial (30 ka to 19 ka BP), recurrent advances and retreats of sea ice characterised the study area, pointing to a hitherto little-considered oceanic (and/or atmospheric) variability. A long-lasting perennial sea ice cover in eastern Fram Strait persisted only at the very end of the Last Glacial Maximum (i.e. from 19.2 to 17.6 ka BP) and was abruptly reduced at the onset of Heinrich Event 1 - coincident with, or possibly even inducing, the collapse of the Atlantic Meridional Overturning Circulation (AMOC). Maximum sea ice conditions prevailed again during the Younger Dryas cooling event, supporting the assumption of an AMOC reduction due to increased formation and export of Arctic sea ice through Fram Strait. A significant retreat of sea ice and sea surface warming are observed for the Early Holocene.
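For readers unfamiliar with the proxy, reconstructions of this kind typically quantify sea ice from the joint distribution of IP25 and an open-water phytoplankton biomarker through the PIP25 index (Müller et al., 2011). The abstract does not give the formula, so the following is the standard definition rather than necessarily this study's exact computation:

```latex
\mathrm{PIP}_{25} = \frac{\mathrm{IP}_{25}}{\mathrm{IP}_{25} + c\,P},
\qquad
c = \frac{\overline{\mathrm{IP}_{25}}}{\overline{P}}
% P is the concentration of the phytoplankton-derived biomarker and
% c is a balance factor (mean IP25 over mean P for the record).
```

Values near 0 indicate predominantly ice-free conditions, while values near 1 indicate extended to perennial sea ice cover.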
Abstract:
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing between false negatives and false positives when evaluating the validity of sensitive question techniques.
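As background on how RRT self-reports are turned into prevalence estimates: in a forced-response design, a randomising device (e.g. dice) makes the respondent answer truthfully with a known probability and forces a "yes" otherwise, so the observed "yes" rate can be inverted. A minimal Python sketch; the design probabilities (4/6 truthful, 1/6 forced "yes") are illustrative assumptions, not the study's exact parameters:

```python
def forced_response_estimate(yes_count, n, p_truth=4/6, p_forced_yes=1/6):
    """Estimate the prevalence pi of a sensitive trait under a
    forced-response RRT design.  With probability p_truth the
    respondent answers truthfully; with probability p_forced_yes the
    device forces a 'yes'.  Hence P(yes) = p_truth * pi + p_forced_yes,
    and pi is recovered by inverting that relation.
    """
    lam = yes_count / n                       # observed 'yes' rate
    pi_hat = (lam - p_forced_yes) / p_truth   # method-of-moments estimate
    var = lam * (1 - lam) / (n * p_truth**2)  # sampling variance of pi_hat
    return pi_hat, var

# Hypothetical example: 1,800 'yes' answers among 6,505 respondents
print(forced_response_estimate(1800, 6505))
```

The inflated variance relative to direct questioning is the usual price paid for the privacy protection the randomising device provides.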
Abstract:
Growing scarcity, increasing demand and poor management of water resources are causing intense competition for water; consequently, managers face mounting pressure to satisfy users' requirements. In many regions, agriculture is one of the most important users at the river basin scale, since it concentrates high volumes of water consumption in relatively short periods (the irrigation season), with significant economic, social and environmental impacts. The interdisciplinary character of the related water resource problems requires, as established in the Water Framework Directive 2000/60/EC, an integrated and participative approach to water management, and assigns an essential role to economic analysis as a decision support tool. For this reason, a methodology is developed to analyse the economic and environmental implications of water resource management under different scenarios, with a focus on the agricultural sector. This research integrates economic and hydrologic components in a single modelling framework, defining scenarios of water resource management with the goal of preventing critical situations such as droughts. The model follows the Positive Mathematical Programming (PMP) approach, an innovative methodology successfully used for agricultural policy analysis over the last decade and also applied in several analyses of water use in agriculture. Among its advantages, this approach can exactly calibrate the baseline scenario from a very limited database. One important disadvantage, however, is its limited capacity to simulate activities not observed during the reference period but which could be adopted if the scenario changed. To overcome this problem, the classical methodology is extended in order to simulate a more realistic response of farmers to new agricultural policies or modified water availability. In this way, an economic model has been developed to reproduce farmers' behaviour within two irrigation districts in the Tiber High Valley. This economic model is then integrated with SIMBAT, a hydrologic model developed for the Tiber basin that simulates the balance between the water volumes available at the Montedoglio dam and the water volumes required by the various irrigation users.
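For orientation, the standard two-stage PMP calibration (Howitt, 1995) on which such extended methodologies build can be sketched as follows; this is a schematic, not the authors' exact specification, and their extension for non-observed activities is not reproduced here:

```latex
% Stage 1: LP with calibration constraints; the duals lambda_i on the
% calibration bounds capture unobserved marginal costs
\max_{x \ge 0} \; \sum_i (p_i y_i - c_i)\,x_i
\quad \text{s.t.} \quad A x \le b, \qquad
x_i \le x_i^{0}(1 + \varepsilon) \;\; [\lambda_i]

% Stage 2: a quadratic cost term built from the duals makes the
% unconstrained model reproduce the observed activity levels x^0
\max_{x \ge 0} \; \sum_i \Bigl[ (p_i y_i - c_i)\,x_i
  - \frac{\lambda_i}{2\,x_i^{0}}\,x_i^{2} \Bigr]
\quad \text{s.t.} \quad A x \le b
```

Here p, y and c are output prices, yields and accounting costs per activity, A and b the resource constraints, and x^0 the observed baseline crop mix; the quadratic term is calibrated so that marginal profits at x^0 match the Stage 1 optimum exactly.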
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is by now a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
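To make the role of such a POS-tagging module concrete, here is a minimal sketch using the NLTK toolkit; the choice of NLTK is an illustrative assumption, as any comparable tagger would serve:

```python
import nltk

# One-time downloads of the tokenizer and tagger models
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "Linguistic annotation tools are important assets."
tokens = nltk.word_tokenize(sentence)
print(nltk.pos_tag(tokens))
# e.g. [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'),
#       ('are', 'VBP'), ('important', 'JJ'), ('assets', 'NNS'), ('.', '.')]
```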
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging; for unrestricted, general texts, this error rate ranges from 10 to 50 percent of the annotated units.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
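As one concrete (hypothetical) illustration of (i) and (ii), annotations produced by several taggers for a common level can be reconciled by majority voting once their schemas have been mapped to shared tags; a minimal Python sketch with invented tagger outputs:

```python
from collections import Counter

def combine_by_vote(*taggings):
    """Combine the outputs of several POS taggers for the same token
    sequence by majority vote; ties fall back to the first tagger's
    choice.  Each tagging is a list of (token, tag) pairs."""
    combined = []
    for token_tags in zip(*taggings):
        tokens = {tok for tok, _ in token_tags}
        assert len(tokens) == 1, "taggers must share one tokenisation"
        tags = [tag for _, tag in token_tags]
        combined.append((token_tags[0][0], Counter(tags).most_common(1)[0][0]))
    return combined

# Invented outputs of three taggers already mapped to a common schema
t1 = [("time", "NN"), ("flies", "VBZ")]
t2 = [("time", "NN"), ("flies", "NNS")]
t3 = [("time", "VB"), ("flies", "VBZ")]
print(combine_by_vote(t1, t2, t3))   # [('time', 'NN'), ('flies', 'VBZ')]
```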
Thus, to summarise, the main aim of the present work was to somehow combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Solar drying is one of the important processes used for extending the shelf life of agricultural products. To meet consumer requirements, solar drying should be made more suitable in terms of curtailing total drying time and preserving product quality. The objective of this study was therefore to develop a fuzzy logic-based control system that performs a 'human-operator-like' control approach using previously developed low-cost model-based sensors. The MATLAB Fuzzy Logic Toolbox and Borland C++ Builder were used to develop the required control system. An experimental solar dryer, constructed by CONA SOLAR (Austria), was used during the development of the control system. Sensirion sensors were used to characterize the drying air at different positions in the dryer, and the smart sensor SMART-1 was applied to include the rate of wood water extraction in the control system (SMART-1 takes the difference in absolute humidity of the air between the outlet and the inlet of the solar dryer as the extracted water). A comprehensive test of the different fuzzy control models was performed over a three-week period, and the data obtained from these experiments were analyzed. The findings suggest that the developed fuzzy logic-based control system is able to tackle the difficulties related to controlling the solar drying process.
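As an illustration of the 'human-operator-like' control idea, a Mamdani-style fuzzy rule base can map a measured drying variable to an actuator command. The sketch below is a deliberately minimal, hypothetical example (invented membership functions, rule base and fan-speed output), not the controller developed in the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(extraction_rate):
    """Minimal Mamdani-style fuzzy controller mapping the rate of
    water extraction (g water / kg dry air, hypothetical 0..10 range)
    to a fan-speed command (0..100 %):
        IF extraction LOW    THEN fan HIGH   (push more air through)
        IF extraction MEDIUM THEN fan MEDIUM
        IF extraction HIGH   THEN fan LOW    (avoid over-fast drying)
    """
    x = extraction_rate
    low, med, high = tri(x, -1, 0, 5), tri(x, 0, 5, 10), tri(x, 5, 10, 11)
    # Rule firing strengths paired with singleton output levels (%)
    rules = [(low, 90.0), (med, 50.0), (high, 15.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification

print(fan_speed(2.0))   # mostly 'low extraction' -> fan runs fast (74.0)
```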
Abstract:
The effects of considering the particle comminution rate (kc) in addition to the particle rumen outflow rate (kp), as well as ruminal microbial contamination, on estimates of by-pass and intestinal digestibility of DM, organic matter and crude protein were examined in perennial ryegrass and oat hays. By-pass kc-kp-based values of amino acids were also determined. This study was performed using particle transit, in situ and 15N techniques on three rumen- and duodenum-cannulated wethers. The above estimates were determined using composite samples from rumen-incubated residues representative of feed by-pass. Considering the comminution rate kc modified the contribution of the incubated residues to these samples in both hays and revealed a higher microbial contamination, consistently in oat hay and only as a tendency for crude protein in ryegrass hay. Not considering kc or rumen microbial contamination overvalued by-pass and intestinal digestibility in both hays. Thus, non-microbial-corrected kp-based values of intestinally digested crude protein were overestimated compared with corrected, kc-kp-based values in ryegrass hay (17.4 vs 4.40%) and in oat hay (5.73 vs 0.19%). Both factors should be considered to obtain accurate in situ estimates in grasses, as the protein value of grasses is strongly conditioned by the microbial synthesis derived from their ruminal fermentation. Consistent overvaluations of amino acid by-pass due to not correcting for microbial contamination were detected in both hays, with large and variable errors among amino acids. A similar degradation pattern of amino acids was recorded in both hays. Cysteine, methionine, leucine and valine were the most degradation-resistant amino acids.
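To see why ignoring kc inflates by-pass estimates, consider a standard first-order formulation in which the degradable fraction b is degraded at rate kd while escaping the rumen; adding a sequential comminution compartment (rate kc) multiplies in a second escape factor. This is a generic two-compartment sketch, not necessarily the paper's exact model:

```latex
% kp only: fraction of the degradable pool b escaping degradation
\mathrm{BP}_{k_p} = b \, \frac{k_p}{k_p + k_d}

% Comminution (k_c) followed by outflow (k_p), degradation at k_d
% operating in both compartments:
\mathrm{BP}_{k_c,k_p} = b \, \frac{k_c}{k_c + k_d} \cdot \frac{k_p}{k_p + k_d}

% Each escape factor is < 1, so omitting k_c systematically
% overestimates by-pass, consistent with the overvaluation reported.
```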
Abstract:
Accessibility is an essential concept widely used to evaluate the impact of transport and land-use strategies in urban planning and policy making. Accessibility is typically evaluated using either a transport model or a land-use model on its own. This paper embeds two accessibility indicators (potential and adaptive accessibility) in a land use and transport interaction (LUTI) model in order to assess the implementation of transport policies. The first aim is to define adaptive accessibility, which considers the competition factor at the territorial level (e.g. between workplaces and workers). The second aim is to identify the optimal implementation scenario of policy measures using the potential and adaptive accessibility indicators. The paper closes with an analysis of the results in terms of social welfare and accessibility changes. Two transport policy measures are applied in the Madrid region: a cordon toll and an increase in bus frequency. They have been simulated with the MARS model (Metropolitan Activity Relocation Simulator, a LUTI model). An optimisation procedure is performed by MARS to maximise the value of an objective function and thereby find the optimal policy implementation (first best). Both policy measures are evaluated in terms of accessibility. Results show that the introduction of the accessibility indicators (potential and adaptive) influences the optimal value of the toll price and the bus frequency level, generating different results in terms of social welfare. Mapping the difference between the potential and adaptive accessibility indicators shows that the main changes occur in areas with strong competition among different land-use opportunities.
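The two indicators can be illustrated with generic formulations from the accessibility literature: a Hansen-type potential measure and a Shen-type competition-adjusted ("adaptive") measure. The sketch below uses an exponential impedance function and toy data as assumptions; MARS's actual specification may differ:

```python
import numpy as np

def potential_accessibility(opportunities, cost, beta=0.1):
    """Hansen-type potential accessibility:
    A_i = sum_j O_j * exp(-beta * c_ij), with cost the n x n
    zone-to-zone generalised travel cost matrix."""
    f = np.exp(-beta * cost)          # impedance function
    return f @ opportunities

def adaptive_accessibility(opportunities, demand, cost, beta=0.1):
    """Shen-type competition-adjusted accessibility: each opportunity
    is deflated by the demand able to reach it, so the indicator
    'adapts' to competition between workers and workplaces."""
    f = np.exp(-beta * cost)
    demand_on_j = f.T @ demand        # D_j = sum_k W_k * f(c_kj)
    return f @ (opportunities / demand_on_j)

# Toy 3-zone example (hypothetical data)
O = np.array([100., 50., 200.])       # workplaces per zone
W = np.array([80., 120., 60.])        # workers per zone
C = np.array([[1., 4., 7.],
              [4., 1., 5.],
              [7., 5., 1.]])          # generalised travel costs
print(potential_accessibility(O, C))
print(adaptive_accessibility(O, W, C))
```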
Abstract:
Ocean energy is a promising resource for renewable electricity generation that presents many advantages, such as being more predictable than wind energy, but also some disadvantages, such as large and slow amplitude variations in the generated power. This paper presents a hardware-in-the-loop prototype that allows the study of the electric power profile generated by a wave power plant based on the oscillating water column (OWC) principle. In particular, it facilitates the development of new solutions to improve the intermittent profile of the power fed into the grid, as well as testing of the OWC's behavior when facing a voltage dip. Also, to obtain more realistic model behavior, statistical models of real waves have been implemented.
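As an example of the kind of statistical wave model such a prototype can replay, an irregular sea state is commonly synthesised by superposing harmonics with random phases drawn from a standard spectrum. The following sketch assumes a Pierson-Moskowitz-type spectrum parameterised by significant wave height and peak period; the prototype's actual wave model is not specified in the abstract:

```python
import numpy as np

def irregular_wave(t, hs=2.0, tp=8.0, n_comp=200, seed=0):
    """Free-surface elevation eta(t) for an irregular sea state,
    synthesised from a Pierson-Moskowitz-type spectrum by summing
    n_comp sinusoids with random phases (hs: significant wave
    height [m], tp: peak period [s])."""
    rng = np.random.default_rng(seed)
    wp = 2 * np.pi / tp                            # peak angular frequency
    w = np.linspace(0.2 * wp, 5 * wp, n_comp)      # frequency grid
    dw = w[1] - w[0]
    s = (5 / 16) * hs**2 * wp**4 / w**5 * np.exp(-1.25 * (wp / w)**4)
    amp = np.sqrt(2 * s * dw)                      # component amplitudes
    phase = rng.uniform(0, 2 * np.pi, n_comp)
    return (amp * np.cos(np.outer(t, w) + phase)).sum(axis=1)

t = np.arange(0, 600, 0.5)                         # 10 minutes at 2 Hz
eta = irregular_wave(t)
print(eta[:5])
```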
Abstract:
We treat graphoid and separoid structures within the mathematical framework of model theory, which is especially suited for representing and analysing axiomatic systems with multiple semantics. We represent the graphoid axiom set in model theory, and translate algebraic separoid structures to another axiom set over the same symbols as graphoids. This brings both structures onto a common, sound theoretical ground where they can be fairly compared. Our contribution further serves as a bridge between the most recent developments in formal logic research and the well-known applications of graphoids in probabilistic graphical modelling.
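For reference, the graphoid axiom set referred to above (Pearl and Paz), stated for a ternary independence relation I(X, Z, Y) with Z the conditioning set; the intersection axiom is usually required only for strictly positive distributions:

```latex
\begin{align*}
\text{Symmetry:}      && I(X,Z,Y) &\Rightarrow I(Y,Z,X)\\
\text{Decomposition:} && I(X,Z,Y\cup W) &\Rightarrow I(X,Z,Y)\\
\text{Weak union:}    && I(X,Z,Y\cup W) &\Rightarrow I(X,Z\cup W,Y)\\
\text{Contraction:}   && I(X,Z,Y)\wedge I(X,Z\cup Y,W) &\Rightarrow I(X,Z,Y\cup W)\\
\text{Intersection:}  && I(X,Z\cup W,Y)\wedge I(X,Z\cup Y,W) &\Rightarrow I(X,Z,Y\cup W)
\end{align*}
```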
Abstract:
To provide a more general method for comparing survival experience, we propose a model that independently scales both the hazard and time dimensions. To test the curve-shape similarity of two time-dependent hazards, h1(t) and h2(t), we apply the proposed hazard relationship h12(t·Kt)/h1(t) = Kh to h1. This relationship doubly scales h1 by the constant hazard and time scale factors, Kh and Kt, producing a transformed hazard, h12, with the same underlying curve shape as h1. We optimize the match of h12 to h2 by adjusting Kh and Kt. The corresponding survival relationship S12(t·Kt) = [S1(t)]^(Kt·Kh) transforms S1 into a new curve S12 of the same underlying shape that can be matched to the original S2. We apply this model to the curves for regional and local breast cancer contained in the National Cancer Institute's End Results Registry (1950-1973). Scaling the original regional curves h1 and S1 with Kt = 1.769 and Kh = 0.263 produces transformed curves h12 and S12 that display congruence with the respective local curves, h2 and S2. This similarity of curve shapes suggests applying the more complete curve shapes for regional disease as templates to predict the long-term survival pattern for local disease. By extension, this similarity raises the possibility of scaling early data from clinical trial curves according to templates of registry or previous trial curves, projecting long-term outcomes and reducing costs. The proposed model includes as special cases the widely used proportional hazards (Kt = 1) and accelerated life (Kt·Kh = 1) models.
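The double-scaling transform is straightforward to apply to an empirical curve; a minimal Python sketch with a toy exponential survival curve, using the Kt and Kh values quoted above as defaults:

```python
import numpy as np

def transform_survival(t, s1, kt=1.769, kh=0.263):
    """Doubly scale a survival curve S1 by time and hazard factors:
    S12(t * Kt) = S1(t) ** (Kt * Kh).  Returns the rescaled time grid
    t * Kt and the transformed survival values."""
    return kt * np.asarray(t), np.asarray(s1) ** (kt * kh)

# Toy example: exponential survival S1(t) = exp(-0.1 t)
t = np.linspace(0, 20, 5)
s1 = np.exp(-0.1 * t)
t12, s12 = transform_survival(t, s1)
print(t12, s12)
```

In practice Kt and Kh would be fitted by minimising the mismatch between the transformed curve and the target curve S2, as described above.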
Abstract:
This study was supported by a Wellcome Trust-NIH PhD Studentship to SB, WDF and NV. Grant number 098252/Z/12/Z. SB, CHC and WDF are supported by the Intramural Research Program, NCI, NIH. NHG and WL are supported by the Intramural Research Program, NIA, NIH.