905 results for Spelling Harmonization
Abstract:
The development of the Internet has made it possible to transfer data ‘around the globe at the click of a mouse’. New business models in particular, such as cloud computing, the newest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space: Firstly, globalization and outsourcing by economic actors entail an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data for an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent, individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform their use of data. Thus, Google was not alone in calling for the creation of ‘global privacy standards’. Efforts are underway to review established privacy foundation documents. There are similar efforts to look at standards in global approaches to privacy and data protection. Among the most remarkable recent steps was the Montreux Declaration, in which the privacy commissioners appealed to the United Nations ‘to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights’. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in Madrid in 2009 and at the 32nd conference in Jerusalem in 2010. In a globalized world, free data flow has become an everyday need. The aim of global harmonization should therefore be that it makes no difference to data users or data subjects whether data processing takes place in one country or in several. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries which have lower standards in their privacy laws or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Such controls, however, may in turn interfere with the need for free international data flow. A formula has to be found to ensure that privacy at the international level does not prejudice this principle.
Abstract:
EU law’s impact on the meaning of the copyright work for a long time seemed limited to software and databases. But recent judgments of the CJEU (Infopaq, BSA, Football Association [Murphy], Painer) suggest we have entered an era of harmonization of copyright subject-matter, after decades of focus on the scope of exclusive rights and their duration. Unlike before, however, it is the Court and not the legislator that takes centre stage in shaping pivotal concepts. This article reviews the different readings and criticisms that the recent case law on copyright works evokes in legal doctrine across the EU. It puts them in the wider perspective of the ongoing development towards uniform law and the role of the preliminary reference procedure in that process.
Abstract:
In order to display a homogeneous image using multiple projectors, differences in the projected intensities must be compensated. In this paper, we present novel approaches that combine and extend existing techniques for edge blending and luminance harmonization to achieve detailed luminance control. Furthermore, we extend techniques for improving the contrast ratio of multi-segmented displays to the black offset correction. We also present a simple scheme that takes the displayed content into account in the correction process to dynamically improve the contrast in brighter images. In addition, we present a metric to evaluate the different methods and their influence on the visual quality.
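To illustrate the basic mechanism such systems build on, the following sketch computes a per-column attenuation ramp for the left projector of a side-by-side pair and applies a uniform black-offset lift. It is a minimal example of the common smoothstep-with-gamma blending scheme, not the specific method of the paper; the resolution, overlap width and black level are assumed values.

    import numpy as np

    def blend_weights(width, overlap, gamma=2.2):
        # Per-column attenuation for the left projector of a side-by-side pair.
        # The right projector uses the mirrored ramp, so the linearized (gamma'd)
        # intensities of both projectors sum to one inside the overlap region.
        w = np.ones(width)
        x = np.linspace(0.0, 1.0, overlap)      # 0 at start of overlap, 1 at the projector edge
        ramp = 3 * x**2 - 2 * x**3              # smoothstep: smooth fall-off, complementary ramps sum to 1
        w[width - overlap:] = 1.0 - ramp        # fade out towards the projector edge
        return w ** (1.0 / gamma)               # pre-compensate the display gamma

    def lift_black(image, black_offset):
        # Uniform black-offset correction: raise the black level of the whole image
        # so single-projector regions match the doubled black level in the overlap.
        return black_offset + (1.0 - black_offset) * image

    # usage: attenuate a hypothetical 800x600 linear-intensity frame of the left projector
    frame = np.random.rand(600, 800)
    frame = lift_black(frame * blend_weights(800, overlap=160), black_offset=0.02)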
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O’Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and allowing access to the relevant methodological process in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between making a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for a statistical, empirical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have been different from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three ‘artificial traditions’ (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced in varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding ‘trivial’ variation such as orthographic and spelling changes from stemmatic analysis.
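To make the kind of analysis described above concrete, the sketch below checks whether the witnesses attesting a given reading occupy a connected region of a stemma hypothesis, the core test for whether a variant could have arisen only once. It is a deliberately simplified illustration (undirected stemma, no lost hyparchetypes, networkx for the graph handling); the stemma and readings are invented, not taken from the traditions studied in the article.

    import networkx as nx

    def reading_is_consistent(stemma, witnesses_with_reading):
        # A reading agrees with the stemma hypothesis if the witnesses attesting it
        # occupy a connected region of the (undirected) stemma, i.e. the reading
        # need only have arisen once. Simplification: lost intermediate exemplars
        # (hyparchetypes) are ignored, although a full analysis would let them
        # carry either reading.
        return nx.is_connected(stemma.subgraph(witnesses_with_reading))

    # invented five-witness stemma and one variant location with two readings
    stemma = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("B", "E")])
    readings = {"ghest": {"B", "D", "E"},   # connected: a single change suffices
                "gast": {"A", "C"}}         # also connected in this toy stemma

    for reading, wits in readings.items():
        print(reading, reading_is_consistent(stemma, wits))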
Abstract:
In a fast-changing world with growing concerns about biodiversity loss and an increasing number of animal and human diseases emerging from wildlife, the need for effective wildlife health investigations, including both surveillance and research, is now widely recognized. However, procedures applicable to and knowledge acquired from studies of domestic animal and human health can be only partly extrapolated to wildlife. This article identifies requirements and challenges inherent in wildlife health investigations, reviews important definitions and novel health investigation methods, and proposes tools and strategies for effective wildlife health surveillance programs. Impediments to wildlife health investigations are largely related to zoological, behavioral and ecological characteristics of wildlife populations and to limited access to investigation materials. These concerns should not be viewed as insurmountable, but it is imperative that they are considered in study design, data analysis and result interpretation. It is particularly crucial to remember that health surveillance does not begin in the laboratory but in the field. In this context, participatory approaches and mutual respect are essential. Furthermore, interdisciplinarity and open minds are necessary, because a wide range of tools and knowledge from different fields needs to be integrated in wildlife health surveillance and research. The identification of factors contributing to disease emergence requires the comparison of health and ecological data over time and among geographical regions. Finally, there is a need for the development and validation of diagnostic tests for wildlife species and for data on free-ranging population densities. Training of health professionals in wildlife diseases should also be improved. Overall, the article particularly emphasizes five needs of wildlife health investigations: communication and collaboration; use of synergies and triangulation approaches; investments for the long term; systematic collection of metadata; and harmonization of definitions and methods.
Abstract:
Pronounced improvements in executive functions (EF) during the preschool years have been documented in cross-sectional studies. However, longitudinal evidence on EF development during the transition to school and on predictive associations between early EF and later school achievement is still scarce. This study examined developmental changes in EF across three time points and the predictive value of EF for mathematical, reading and spelling skills, and explored children's specific academic attainment as a function of early EF. Participants were 323 children attending regular education; 160 children were enrolled in prekindergarten (younger cohort: 69 months) and 163 children in kindergarten (older cohort: 78.4 months) at the first assessment. Various EF tasks were administered three times at one-year intervals. Mathematical, reading and spelling skills were measured at the last assessment. Individual background characteristics such as vocabulary, non-verbal intelligence and socioeconomic status were included as control variables. In both cohorts, changes in EF were substantial; improvements in EF, however, were larger in preschoolers than in school-aged children. EF assessed in preschool accounted for substantial variability in mathematical, reading and spelling achievement two years later, with low EF being especially associated with significant academic disadvantages in the early school years. Given that EF continue to develop from the preschool into the primary school years and that starting with low EF is associated with lower school achievement, EF may be considered a marker of risk for academic disabilities.
Abstract:
Introduction: Prospective memory (PM), the ability to remember to perform intended activities in the future (Kliegel & Jäger, 2007), is crucial for succeeding in everyday life. PM seems to improve gradually over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in young school children in general, and even less is known about the factors influencing its development. Currently, a number of studies suggest that executive functions (EF) are potentially influencing processes (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Additionally, metacognitive processes (MC: monitoring and control) are assumed to be involved in optimizing one’s performance (Krebs & Roebers, 2010, 2012; Roebers, Schmid, & Roderer, 2009). Yet the relations between PM, EF and MC remain relatively unspecified. We intend to examine the structural relations between these constructs empirically. Method: A cross-sectional study including 119 second graders (M_age = 95.03, SD_age = 4.82) will be presented. Participants (n = 68 girls) completed three EF tasks (Stroop, updating, shifting), a computerised event-based PM task and a MC spelling task. The latent variables PM, EF and MC, each represented by manifest variables derived from the conducted tasks, were interrelated by structural equation modelling. Results: Analyses revealed clear associations between the three cognitive constructs PM, EF and MC (r_PM-EF = .45, r_PM-MC = .23, r_EF-MC = .20). A three-factor model, as opposed to one- or two-factor models, fitted the data excellently (χ²(17, N = 119) = 18.86, p = .34, RMSEA = .030, CFI = .990, TLI = .978). Discussion: The results indicate that already in young elementary school children, PM, EF and MC are empirically well distinguishable but nevertheless substantially interrelated. PM and EF seem to share a substantial amount of variance, while for MC, more unique processes may be assumed.
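A minimal sketch of the kind of three-factor measurement model reported above is given below, assuming the Python semopy package and lavaan-style model syntax; the indicator names and the data file are placeholders, not the study's actual variables or specification.

    import pandas as pd
    from semopy import Model, calc_stats

    # Hypothetical three-factor CFA: EF, PM and MC as correlated latent variables.
    model_desc = """
    EF =~ stroop + updating + shifting
    PM =~ pm_accuracy + pm_speed
    MC =~ monitoring + control
    """

    data = pd.read_csv("ef_pm_mc_scores.csv")   # placeholder file, one column per indicator
    model = Model(model_desc)
    model.fit(data)

    print(model.inspect())      # factor loadings and latent covariances/correlations
    print(calc_stats(model).T)  # chi-square, CFI, TLI, RMSEA and other fit indices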
Abstract:
Introduction. Prospective memory (PM), defined as the ability to remember to perform intended activities at some point in the future (Kliegel & Jäger, 2007), is crucial for succeeding in everyday life. PM seems to increase over the childhood years (Zimmermann & Meier, 2006), yet little is known about PM competences in children in general, or about the factors that influence its development. Currently, a number of studies have focused on factors that might influence PM performance, with EF being potentially influencing mechanisms (Ford, Driscoll, Shum & Macaulay, 2012; Mahy & Moses, 2011). Metacognitive processes (MC: monitoring and control) are also assumed to be involved in learning or optimizing one’s performance (Krebs & Roebers, 2010, 2012; Roebers, Schmid, & Roderer, 2009). Yet the empirical relation between PM, EF and MC remains rather unclear. We intend to examine these relations and explain individual differences in PM performance. Method. An empirical cross-sectional study on 120 second graders will be presented. Participants completed six EF tasks (a Stroop task, two updating tasks, two shifting tasks, a flanker task), a computerised event-based PM task and a MC spelling task. Children were tested individually in two sessions of 30 minutes each. Each of the three EF components defined by Miyake, Friedman, Emerson, Witzki & Howerter (2002) was represented by two variables. PM performance was represented by PM accuracy. Metacognitive processes (control, monitoring) were represented separately. Results. Preliminary analyses (SEM) indicate a substantial association between EF (updating, inhibition) and PM. Further, MC seems to be significantly related only to EF. We will explore whether metacognitive monitoring is related to PM monitoring (Roebers, 2002; Mantylä, 2007). As to EF and MC, we expect the two domains to be empirically well distinguishable and nevertheless substantially interrelated. Discussion. The results are discussed on a broader, interindividual level.
Abstract:
The technical developments of the preceding years (PET, hybrid imaging) have changed nuclear medicine. Future cooperation with radiologists will be challenging, as will positioning nuclear medicine in a European context. It can also be expected that education in nuclear medicine will undergo a harmonization process across the member states of the European Union. In this paper, we describe how nuclear medicine education is organized in several European countries. We aim to stimulate constructive discussions on the future development of the specialization in nuclear medicine in Germany.
Abstract:
Oxygenated polycyclic aromatic hydrocarbons (oxy-PAHs) and nitrogen heterocyclic polycyclic aromatic compounds (N-PACs) are toxic, highly leachable and often abundant at sites that are also contaminated with PAHs. However, due to a lack of regulations and standardized methods for their analysis, they are seldom included in monitoring and risk-assessment programs. This intercomparison study constitutes an important step in the harmonization of the analytical methods currently in use, and may also be considered a first step towards the certification of reference materials for these compounds. The results showed that the participants were able to determine oxy-PAHs with accuracy similar to that for PAHs, with average determined mass fractions agreeing well with the known levels in a spiked soil and acceptable inter- and intra-laboratory precision for all soils analyzed. For the N-PACs, the results were less satisfactory and need to be improved by using analytical methods more specifically optimized for these compounds.
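For readers unfamiliar with how intercomparison results of this kind are summarized, the sketch below estimates intra-laboratory (repeatability) and inter-laboratory precision for a single analyte and soil from replicate results via a simple one-way variance decomposition. It is a generic illustration assuming a balanced design, not the statistical protocol of this particular study, and the numbers are invented.

    import numpy as np

    def lab_precision(results):
        # Intra- and inter-laboratory standard deviations for one analyte in one soil.
        # results maps laboratory id -> list of replicate mass fractions (mg/kg).
        # Uses a one-way variance decomposition and assumes a balanced design.
        groups = [np.asarray(v, dtype=float) for v in results.values()]
        n = groups[0].size                                  # replicates per laboratory
        s_r2 = np.mean([g.var(ddof=1) for g in groups])     # within-lab (repeatability) variance
        s_L2 = max(np.var([g.mean() for g in groups], ddof=1) - s_r2 / n, 0.0)  # between-lab variance
        return np.sqrt(s_r2), np.sqrt(s_r2 + s_L2)          # intra-lab SD, inter-lab SD

    # invented replicate results (mg/kg) for one oxy-PAH in a spiked soil
    results = {"lab_A": [2.1, 2.3, 2.2], "lab_B": [2.6, 2.5, 2.7], "lab_C": [1.9, 2.0, 2.1]}
    s_intra, s_inter = lab_precision(results)
    print(f"intra-lab SD = {s_intra:.2f} mg/kg, inter-lab SD = {s_inter:.2f} mg/kg")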
Abstract:
Since November 1994, the GROund-based Millimeter-wave Ozone Spectrometer (GROMOS) has measured stratospheric and lower mesospheric ozone in Bern, Switzerland (46.95° N, 7.44° E). GROMOS is part of the Network for the Detection of Atmospheric Composition Change (NDACC). In July 2009, a Fast Fourier Transform spectrometer (FFTS) was added as a backend to GROMOS. The new FFTS and the original filter bench (FB) measured in parallel for over two years. In October 2011, the FB was turned off, and the FFTS is now used to continue the ozone time series. For a consolidated ozone time series in the frame of NDACC, the quality of the stratospheric ozone profiles obtained with the FFTS has to be assessed. The FFTS results from July 2009 to December 2011 are compared to ozone profiles retrieved by the FB. FFTS and FB of the GROMOS microwave radiometer agree within 5% above 20 hPa. A later harmonization of both time series will be realized by taking the FFTS as the benchmark for the FB. Ozone profiles from the FFTS are also compared to coinciding lidar measurements from the Observatoire de Haute-Provence (OHP), France. For the time period studied, a maximum mean difference (lidar − GROMOS FFTS) of +3.8% at 3.1 hPa and a minimum mean difference of +1.4% at 8 hPa are found. Further, intercomparisons with ozone profiles from other independent instruments are performed: satellite measurements include MIPAS onboard ENVISAT, SABER onboard TIMED, MLS onboard EOS Aura and ACE-FTS onboard SCISAT-1. Additionally, ozonesondes launched from Payerne, Switzerland, are used in the lower stratosphere. Mean relative differences between GROMOS FFTS and these independent instruments are less than 10% between 50 and 0.1 hPa.
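As a side note on the comparison methodology, mean relative differences of the kind quoted above are typically computed on coincident, vertically aligned profiles; the sketch below shows the basic calculation. The profile values are synthetic placeholders, not GROMOS or lidar data.

    import numpy as np

    def mean_relative_difference(reference, candidate):
        # Mean relative difference profile in percent, (candidate - reference) / reference,
        # averaged over all coincident measurements at each pressure level.
        rel = 100.0 * (candidate - reference) / reference
        return np.nanmean(rel, axis=0)

    # synthetic coincident ozone profiles: rows = coincidences, columns = pressure levels
    rng = np.random.default_rng(0)
    gromos = rng.uniform(2.0, 8.0, size=(50, 30))                       # ppmv, placeholder values
    lidar = gromos * (1.0 + rng.normal(0.02, 0.03, size=gromos.shape))  # ~+2% offset by construction

    print(mean_relative_difference(gromos, lidar))   # per-level mean difference (lidar - GROMOS), %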
Abstract:
Carbon emissions from anthropogenic land use (LU) and land use change (LUC) are quantified with a Dynamic Global Vegetation Model for the past and for the 21st century following the Representative Concentration Pathways (RCPs). Wood harvesting and the parallel abandonment and expansion of agricultural land in areas of shifting cultivation are explicitly simulated (gross LUC), based on the Land Use Harmonization (LUH) dataset and on a proposed alternative method that relies on minimal input data and generically accounts for gross LUC. Cumulative global LUC emissions are 72 GtC by 1850, 243 GtC by 2004, and 27–151 GtC over the next 95 yr, depending on the RCP scenario. The alternative method reproduces results based on LUH data with full transition information to within <0.1 GtC/yr over the last decades and bears potential for applications in combination with other LU scenarios. In the last decade, shifting cultivation and wood harvest within remaining forests, including slash, each contributed 19% to the mean annual emissions of 1.2 GtC/yr. These factors, in combination with amplification effects under elevated CO2, contribute substantially to future emissions from LUC in all RCPs.
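A small worked example may help clarify the distinction between net and gross LUC on which the abstract turns: in shifting cultivation, clearing and abandonment happen simultaneously, so the net area change can be small while gross turnover (which drives emissions) is large. The figures below are illustrative only.

    # Illustrative (invented) land-use transitions for one region and one time step, in km^2.
    cropland_expansion = 120.0    # forest cleared for cropland
    cropland_abandonment = 100.0  # cropland abandoned to regrowing forest

    net_change = cropland_expansion - cropland_abandonment      # 20 km^2 of net cropland gain
    gross_turnover = cropland_expansion + cropland_abandonment  # 220 km^2 of land actually changing use

    print(net_change, gross_turnover)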
Abstract:
Objective: There is convincing evidence that phonological, orthographic and semantic processes influence children’s ability to learn to read and spell words. So far, only a few studies have investigated the influence of implicit learning on literacy skills. Children are sensitive to the statistics of their learning environment. Through frequent reading they acquire implicit knowledge about the frequency of letter patterns in written words, and they use this knowledge during reading and spelling. Additionally, semantic connections facilitate the storing of words in memory. Thus, the aim of this intervention study was to implement a word-picture training based on statistical and semantic learning. Furthermore, we aimed to examine the training effects on reading and spelling in comparison to an auditory-visual matching training and a working memory training program. Participants and Methods: One hundred and thirty-two children aged between 8 and 11 years took part in training in three weekly sessions of 12 minutes over 8 weeks, and completed assessments of reading, spelling, working memory and intelligence before and after training. Results: Results revealed that, in general, the word-picture training and the auditory-visual matching training led to substantial gains in reading and spelling performance in comparison to the working-memory training. Although both children with and without learning difficulties profited in their reading and spelling from the word-picture training, the program led to differential effects for the two groups: after the word-picture training, children with learning difficulties profited more in spelling than children without learning difficulties, whereas children without learning difficulties benefited more in word comprehension. Conclusions: These findings highlight the need for frequent reading training with semantic connections in order to support the acquisition of literacy skills.