954 results for Consistency checking


Relevance: 20.00%

Abstract:

Visual perception relies on a two-dimensional projection of the viewed scene on the retinas of both eyes. Visual depth therefore has to be reconstructed from a number of different cues, which are subsequently integrated to obtain robust depth percepts. Existing models of sensory integration are mainly based on the reliabilities of individual cues and disregard potential cue interactions. In the current study, an extended Bayesian model is proposed that takes into account both cue reliability and cue consistency. Four experiments were carried out to test this model's predictions. Observers had to judge visual displays of hemi-cylinders with an elliptical cross section, constructed to allow for an orthogonal variation of several competing depth cues. In Experiments 1 and 2, observers estimated the cylinder's depth as defined by shading, texture, and motion gradients, and the degree of consistency among these cues was systematically varied. The extended Bayesian model provided a better fit to the empirical data than the traditional model, which disregards covariations among cues. To circumvent the potentially problematic assessment of single-cue reliabilities, Experiment 3 used a multiple-observation task, which allowed perceptual weights to be estimated from multiple-cue stimuli. Using the same multiple-observation task, Experiment 4 examined the integration of stereoscopic disparity, shading, and texture gradients. Less reliable cues were downweighted in the combined percept, and a specific influence of cue consistency was revealed: shading and disparity appeared to be processed interactively, while other cue combinations were well described by additive integration rules. These results suggest that cue combination in visual depth perception is highly flexible and depends on single-cue properties as well as on interrelations among cues. The extension of the traditional cue combination model is defended in terms of the need for robust perception in ecologically valid environments, and the current findings are discussed in the light of emerging computational theories and neuroscientific approaches.
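To make the contrast between the two models concrete, here is a minimal numerical sketch, not the thesis's actual model: the traditional rule weights each cue by its normalised inverse variance, while a generalised-least-squares variant derives weights from a full cue covariance matrix, so covariation (consistency) among cues changes the combined estimate. All cue values and (co)variances below are invented for illustration.

```python
import numpy as np

def combine_independent(estimates, variances):
    """Traditional model: cues treated as independent; each weight is the
    cue's reliability (inverse variance) normalised over all cues."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    return float(weights @ np.asarray(estimates)), weights

def combine_correlated(estimates, covariance):
    """Extended variant: weights derived from the full cue covariance
    matrix (generalised least squares), so covariation among cues
    changes how strongly each cue counts."""
    c_inv = np.linalg.inv(np.asarray(covariance, dtype=float))
    ones = np.ones(len(estimates))
    weights = c_inv @ ones / (ones @ c_inv @ ones)
    return float(weights @ np.asarray(estimates)), weights

# Three hypothetical depth cues (shading, texture, motion), arbitrary units.
d = [1.0, 1.2, 0.9]
print(combine_independent(d, [0.04, 0.09, 0.16]))

# Same variances, but shading and texture now covary: their shared
# information is discounted relative to the independent motion cue.
C = [[0.04, 0.03, 0.00],
     [0.03, 0.09, 0.00],
     [0.00, 0.00, 0.16]]
print(combine_correlated(d, C))
```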

Relevance: 20.00%

Abstract:

The central topic of this thesis is the study of algorithms for type checking, both from the programming-language and the proof-theoretic points of view. A type checking algorithm takes a program or a proof, represented as a syntactical object, and checks its validity with respect to a specification or a statement. It is a central component of compilers and proof assistants. We postulate that since type checkers sit at the interface between proof theory and program theory, their study can allow these two fields to enrich each other. We support this claim with two main instances: first, starting from the problem of proof reuse, we develop an incremental type checker; second, starting from a type checking program, we exhibit a novel correspondence between natural deduction and the sequent calculus.
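As a toy illustration of what a type checking algorithm does (not the thesis's system; the term representation and names below are assumptions of this example), the following sketch checks terms of a simply typed lambda calculus against the usual syntax-directed typing rules:

```python
from dataclasses import dataclass

# Types: a base type and arrow (function) types.
@dataclass(frozen=True)
class TInt:
    pass

@dataclass(frozen=True)
class TArrow:
    arg: object
    res: object

# Terms: integer literals, variables, annotated lambdas, applications.
@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    name: str
    arg_type: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def type_check(term, env=None):
    """Compute the type of `term` under `env`, or raise TypeError
    if no typing rule applies: validity is checked against a
    specification, here the typing rules themselves."""
    env = env or {}
    if isinstance(term, Lit):
        return TInt()
    if isinstance(term, Var):
        if term.name not in env:
            raise TypeError(f"unbound variable {term.name}")
        return env[term.name]
    if isinstance(term, Lam):
        body_type = type_check(term.body, {**env, term.name: term.arg_type})
        return TArrow(term.arg_type, body_type)
    if isinstance(term, App):
        fn_type = type_check(term.fn, env)
        if not isinstance(fn_type, TArrow) or fn_type.arg != type_check(term.arg, env):
            raise TypeError("ill-typed application")
        return fn_type.res
    raise TypeError(f"unknown term {term!r}")

# (λx:int. x) 42 is valid and has type int.
print(type_check(App(Lam("x", TInt(), Var("x")), Lit(42))))
```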

Relevance: 20.00%

Abstract:

In 2011, the BBC series SHERLOCK was one of Britain's most widely exported television productions and was translated into many languages worldwide. One of the challenges in translating the series is posed by its on-screen text inserts. The inserts verbalise the protagonist's thoughts, depict written and digital communication, and stand out visually, at times serving as the sole carriers of verbal communication, which makes them an important aesthetic and narrative device in the series. Interestingly, all stylistic properties of the original inserts are preserved in the translation. This thesis examines, on the one hand, how on-screen text in film can be described theoretically and, on the other, how it can be translated in practice in the way it was in the German version of Sherlock. For the theoretical description, the inserts in Sherlock are first contrasted with subtitling norms along relevant fundamental semiotic dimensions. The relationship between on-screen text and the film image is then explored by testing how well various approaches to text-image relations from linguistics, comics research, translation studies, and typography can account for the inserts in Sherlock. The practical part examines the translation of the inserts. The translation process for the German version is reconstructed on the basis of an expert interview with the series' dubbing author, who was also responsible for the wording of the inserts. Finally, specific translation problems of the inserts in the second season of SHERLOCK are discussed. It emerges that subtitling norms are not suitable for describing the inserts, since they are severely restricted in dimensions such as position, graphic design, animation, sound effects, and timing. This can be explained by the historically shaped understanding of subtitles as an accessory added (out of necessity) to the finished film image and its flow, kept as unobtrusive as possible, whereas the inserts in SHERLOCK were sometimes allotted a central place in the image and scene composition as early as the shooting stage. With regard to text-image relations, the closest parallels are found in approaches from comics research, where written text is likewise embedded in the image rather than the other way round; these approaches, too, however, are inadequate for describing movement and sound. Exploring the explanatory reach of further promising concepts, such as interface and usability, remains a goal for future studies. The expert interview suggests that the translation of inserts is a new, as yet unstandardised procedure in which idiosyncratic practical solutions are used for cross-language communication between the various parties involved in the process. For high-quality productions, the involvement of graphic designers is indispensable for replacement-style insert translation as well, at least for the creation of new inserts as translations of filmed text (displays). Here, the theoretically possible synergies between language and image experts have not yet been fully exploited. There is also room for improvement in the provision of careful documentation of the source-language version, which would be relevant as reference material for translation, in particular for purposes of international quality assurance. Overall, the translated inserts in the German version are of very high quality. Translation problems arise with the genre-typical element of codes, which are challenging because of their compactness and their multiple references to the film. Besides other familiar translation problems such as intertextual references and realia, the question repeatedly arises of how much of the insert and display text shown in the original needs to be translated. For reasons of visual consistency, new inserts became necessary to translate displays. The question arises in particular for filler texts, which serve to represent text and to extend the boundaries of the fictional world, but involve considerable translation effort with minimal relevance to the plot.

Relevance: 20.00%

Abstract:

Lint-like program checkers are popular tools that ensure code quality by verifying compliance with best practices for a particular programming language. The proliferation of internal domain-specific languages and models, however, poses new challenges for such tools. Traditional program checkers produce many false positives and fail to accurately check constraints, best practices, common errors, possible optimizations, and portability issues particular to domain-specific languages. We advocate the use of dedicated rules to check domain-specific practices. We demonstrate the implementation of domain-specific rules, the automatic fixing of violations, and their application to two case studies: (1) Seaside, which defines several internal DSLs through a creative use of the host language's syntax; and (2) Magritte, which adds meta-descriptions to existing code by means of special methods. Our empirical validation demonstrates that domain-specific program checking significantly improves code quality when compared with general-purpose program checking.
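Seaside and Magritte are Smalltalk frameworks, so the paper's rules are not in Python; the sketch below is a purely illustrative, hypothetical rule showing the general idea: a dedicated check encodes a domain-specific practice (here, an invented "don't pass concatenated strings to render" convention for an internal HTML DSL) that a general-purpose checker would not know about.

```python
import ast

class DomainRule:
    """A domain-specific check: a predicate over AST nodes plus an
    explanatory message (a real tool would also carry an automatic fix)."""
    def __init__(self, name, predicate, message):
        self.name, self.predicate, self.message = name, predicate, message

def concat_in_render(node):
    # Flag calls like x.render(a + b): markup should be built
    # through the DSL, not by string concatenation (invented rule).
    return (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "render"
            and any(isinstance(a, ast.BinOp) and isinstance(a.op, ast.Add)
                    for a in node.args))

RULES = [DomainRule("concat-in-render", concat_in_render,
                    "build markup through the DSL, not string concatenation")]

def check(source):
    for node in ast.walk(ast.parse(source)):
        for rule in RULES:
            if rule.predicate(node):
                print(f"line {node.lineno}: [{rule.name}] {rule.message}")

check("canvas.render('<b>' + name + '</b>')")
```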

Relevance: 20.00%

Abstract:

Written text is an important component in the process of knowledge acquisition and communication. Poorly written text fails to deliver clear ideas to the reader, no matter how revolutionary and ground-breaking those ideas are. Good writing style is essential for conveying ideas smoothly. While we have sophisticated tools to check for stylistic problems in program code, we do not apply the same techniques to written text. In this paper we present TextLint, a rule-based tool that checks for common style errors in natural language. TextLint provides a structural model of written text and an extensible rule-based checking mechanism.
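The abstract does not describe TextLint's internals, so the following is only a minimal sketch of the general idea with made-up rules: a structural model of the text (here, just sentences) plus an extensible set of pattern rules applied to each element.

```python
import re

# A rule is a name, a pattern over a sentence, and a message.
# Both rules here are invented examples of common style errors.
RULES = [
    ("doubled-word", re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE),
     "doubled word"),
    ("vague-intensifier", re.compile(r"\b(very|quite|fairly|clearly)\b",
                                     re.IGNORECASE),
     "vague intensifier"),
]

def lint(text):
    # Minimal structural model: split the text into sentences first,
    # then run every rule over every sentence.
    for i, sentence in enumerate(re.split(r"(?<=[.!?])\s+", text), 1):
        for name, pattern, message in RULES:
            for match in pattern.finditer(sentence):
                print(f"sentence {i}: [{name}] {message}: {match.group(0)!r}")

lint("This is is a very good idea. Clearly it works.")
```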

Relevance: 20.00%

Abstract:

Hypertension is a powerful treatable risk factor for stroke. Reports of randomized controlled trials (RCTs) of antihypertensive drugs rightly concentrate on clinical outcomes, but control of blood pressure (BP) during follow-up is also important, particularly given that inconsistent control is associated with a high risk of stroke and that antihypertensive drug classes differ in this regard.

Relevance: 20.00%

Abstract:

Conservation strategies for long-lived vertebrates require accurate estimates of parameters such as population size, the number of non-breeding individuals (the "cryptic" fraction of the population), and age structure. Visual survey techniques are frequently used to make these estimates, but their accuracy is questionable, mainly because of numerous potential biases. Here we compare data on population trends and age structure in a bearded vulture (Gypaetus barbatus) population, obtained from visual surveys at supplementary feeding stations, with data derived from population matrix-modelling approximations. Our results suggest that visual surveys overestimate the number of immature (<2 years old) birds, whereas subadults (3–5 y.o.) and adults (>6 y.o.) are underestimated in comparison with the predictions of a population model using a stable-age distribution. In addition, visual surveys did not provide conclusive information on true variations in the size of the focal population. Our results suggest that although long-term studies (i.e. population matrix modelling based on capture-recapture procedures) are more time-consuming, they provide more reliable and robust estimates of the population parameters needed to design and apply conservation strategies. These findings are likely transferable to the management and conservation of other long-lived vertebrate populations that share similar life-history traits and ecological requirements.
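A common way to obtain the stable-age-distribution predictions mentioned above is to take the dominant eigenvector of a stage-structured (Leslie-type) projection matrix. The coarse three-stage sketch below uses invented survival and fecundity values, not the study's estimates:

```python
import numpy as np

# Hypothetical 3-stage matrix (immature, subadult, adult) for a long-lived
# raptor; all rates are illustrative placeholders.
L = np.array([[0.00, 0.00, 0.35],   # fecundity: only adults breed
              [0.75, 0.00, 0.00],   # immature -> subadult survival
              [0.00, 0.85, 0.90]])  # subadult -> adult, adult survival

eigvals, eigvecs = np.linalg.eig(L)
dominant = np.argmax(eigvals.real)
lam = eigvals[dominant].real          # asymptotic population growth rate
stable = eigvecs[:, dominant].real
stable /= stable.sum()                # stable stage distribution (sums to 1)

print(f"growth rate λ = {lam:.3f}")
print("stable stage proportions:", np.round(stable, 3))
```

Survey counts of stage proportions can then be compared against `stable` to spot the kind of over/underrepresentation reported above.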

Relevance: 20.00%

Abstract:

Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards investigating the properties of a given language. The work involved in syntactically parsing a whole corpus in order to obtain a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not, in general, in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information; 3. the development of a formal grammar able to robustly parse Czech sentences from the test suite; 4. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 5. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Number 3, building a formal grammar, was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors, it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating one huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later, especially from the point of view of testing and debugging the grammar. The sample of the syntactic dictionary containing lexico-syntactic information (task 4) now has slightly more than 1000 lexical items representing all word classes.
During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during its development. The consistency of new and modified rules of the formal grammar with the existing rules is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. The method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language, it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
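A minimal sketch of the kind of test-bed consistency check described above (the actual system parses Czech with a formal grammar; the harness and the toy stand-in parser below are invented for illustration): every change to the grammar is re-run against sentences with known grammaticality, so any rule that breaks previously covered phenomena surfaces immediately.

```python
def run_testbed(parse, testbed):
    """`parse` is any parser returning a tree (truthy) or None;
    `testbed` maps sentences to whether the grammar should accept them.
    Returns the list of discrepancies between expectation and behaviour."""
    failures = []
    for sentence, should_accept in testbed.items():
        accepted = parse(sentence) is not None
        if accepted != should_accept:
            failures.append((sentence, should_accept, accepted))
    return failures

def toy_parse(sentence):
    """Stand-in for a real robust parser: 'accepts' a sentence if it is
    capitalised and ends with a period. Purely illustrative."""
    return sentence.split() if sentence[0].isupper() and sentence.endswith(".") else None

testbed = {
    "Prague is a city.": True,
    "city a is Prague": False,
    "Furiously sleep ideas green colourless.": False,  # toy parser wrongly accepts
}
for sentence, expected, got in run_testbed(toy_parse, testbed):
    print(f"REGRESSION: {sentence!r} expected accept={expected}, got {got}")
```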

Relevance: 20.00%

Abstract:

Most indices for the assessment of wear of various aetiologies include the distinction between 'enamel still present' and 'dentine exposed' for grading. Since the visual diagnosis of exposed dentine has not yet been validated, the present study is a first attempt to investigate its accuracy and consistency. Sixty-one examiners (23 scientists, 18 university dentists and 20 dental students) were asked to diagnose 49 tooth areas with different grades of wear and to decide whether dentine was exposed (positive test) or not (negative test). Afterwards, the teeth were histologically evaluated. In 44 areas, dentine was exposed (including all cases with minor wear), and in 5 areas enamel was present. Overall sensitivity was 0.65, specificity 0.88, and the proportion of correct diagnoses was 0.67. The diagnosis 'dentine is exposed' was about 5 times as likely, and the diagnosis 'dentine is not exposed' about half as likely, to come from an area with exposed dentine as from an enamel-covered area. The agreement of the visual diagnosis with the histological findings was only fair (kappa = 0.27); no significant impact of professional experience was found. For inter- and intra-examiner agreement, kappa was 0.28 and 0.55, respectively. It was concluded that the diagnosis of exposed dentine is difficult.
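The likelihood statements above follow directly from the reported sensitivity and specificity. The sketch below recomputes them from an illustrative 2x2 table; the counts are not the study's raw data, only chosen to match the reported rates:

```python
# Diagnostic accuracy from a 2x2 table (illustrative counts per 100 areas
# of each true state, chosen to reproduce sens 0.65 and spec 0.88).
tp, fn = 65, 35   # exposed dentine: correctly vs. incorrectly diagnosed
tn, fp = 88, 12   # enamel-covered: correctly vs. incorrectly diagnosed

sensitivity = tp / (tp + fn)              # 0.65
specificity = tn / (tn + fp)              # 0.88
lr_pos = sensitivity / (1 - specificity)  # ≈ 5.4: 'exposed' is ~5x as likely
                                          # from a truly exposed area
lr_neg = (1 - sensitivity) / specificity  # ≈ 0.40: 'not exposed' is roughly
                                          # half as likely from an exposed area

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
print(f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```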