872 results for Correspondences, Doctrine of.
Abstract:
There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation through compliance checking and process certification to process enactment. This, in turn, results in models that differ in content because they serve different purposes. In particular, processes are modelled at different abstraction levels and from different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. Based thereon, we identify specific research questions that guide the design of a framework for model alignment.
Abstract:
Nature exists. Humans exist. The behaviour of one impacts upon the other. The behaviour of humans is governed by the artificial contrivance described as the law. While the law can in this way control the behaviour of humans and the impact that human behaviour has on nature, the behaviour of nature is governed – if at all – in accordance with nature’s own sets of values, which are quintessentially a matter for nature. The relationship between nature and humans may be the object of rules of law, but traditional legal doctrine dictates that humans, but not nature, are the subjects of the rules of law. The jurisprudence of the earth – it would appear – seeks to equalise in the eyes of the law nature as part of the global environment and humans as part of the global environment. How might this be done?
Abstract:
This paper presents an online, unsupervised training algorithm enabling vision-based place recognition across a wide range of changing environmental conditions such as those caused by weather, seasons, and day-night cycles. The technique applies principal component analysis to distinguish between aspects of a location’s appearance that are condition-dependent and those that are condition-invariant. Removing the dimensions associated with environmental conditions produces condition-invariant images that can be used by appearance-based place recognition methods. This approach has a unique benefit – it requires training images from only one type of environmental condition, unlike existing data-driven methods that require training images with labelled frame correspondences from two or more environmental conditions. The method is applied to two benchmark variable condition datasets. Performance is equivalent or superior to the current state of the art despite the lesser training requirements, and is demonstrated to generalise to previously unseen locations.
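The dimension-removal step can be sketched with ordinary PCA via the SVD; the matrix layout, the number of discarded components k, and the function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def condition_invariant(images, k=2):
    """Project images onto the orthogonal complement of the k leading
    principal components, which (under the abstract's premise) capture
    condition-dependent appearance such as lighting or season."""
    X = images - images.mean(axis=0)           # centre the data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    P = Vt[:k]                                 # condition-dependent directions
    return X - (X @ P.T) @ P                   # condition-invariant residual

rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 64))           # 10 flattened "images"
inv = condition_invariant(imgs, k=2)
```

The residual images carry no energy along the removed directions, so a downstream appearance-based matcher sees only the (assumed) condition-invariant structure.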
Abstract:
We introduce a framework for population analysis of white matter tracts based on diffusion-weighted images of the brain. The framework enables extraction of fibers from high angular resolution diffusion images (HARDI); clustering of the fibers based partly on prior knowledge from an atlas; representation of the fiber bundles compactly using a path following points of highest density (maximum density path; MDP); and registration of these paths together using geodesic curve matching to find local correspondences across a population. We demonstrate our method on 4-Tesla HARDI scans from 565 young adults to compute localized statistics across 50 white matter tracts based on fractional anisotropy (FA). Experimental results show increased sensitivity in the determination of genetic influences on principal fiber tracts compared to the tract-based spatial statistics (TBSS) method. Our results show that the MDP representation reveals important parts of the white matter structure and considerably reduces the dimensionality over comparable fiber matching approaches.
Abstract:
We propose in this paper a new method for mapping hippocampal (HC) surfaces to establish correspondences between points on HC surfaces and enable localized HC shape analysis. A novel geometric feature, the intrinsic shape context, is defined to capture the global characteristics of HC shapes. Based on this intrinsic feature, an automatic algorithm is developed to detect a set of landmark curves that are stable across the population. The direct map between a source and a target HC surface is then solved as the minimizer of a harmonic energy function defined on the source surface with landmark constraints. Numerically, we compute the map by solving partial differential equations on implicit surfaces. The direct mapping method has the following properties: (1) it is fully automatic; (2) it is invariant to the pose of HC shapes. In our experiments, we apply the direct mapping method to study temporal changes of HC asymmetry in Alzheimer's disease (AD) using HC surfaces from 12 AD patients and 14 normal controls. Our results show that the AD group has a different trend in temporal changes of HC asymmetry than the group of normal controls. We also demonstrate the flexibility of the direct mapping method by applying it to construct spherical maps of HC surfaces. Spherical harmonics (SPHARM) analysis is then applied and confirms our results on temporal changes of HC asymmetry in AD.
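In standard notation (our symbols, not necessarily the paper's), a landmark-constrained harmonic map f from the source surface S to the target surface is a minimizer of the harmonic energy

```latex
E(f) = \int_{S} \lVert \nabla_{S} f \rVert^{2} \, dA,
\qquad \text{subject to } f(c_i) = \gamma_i, \quad i = 1, \dots, n,
```

where the c_i are the detected landmark curves on S and the γ_i are their counterparts on the target surface; the gradient is taken intrinsically on S.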
Abstract:
Fair Use Week has celebrated the evolution and development of the defence of fair use under copyright law in the United States. As Krista Cox noted, ‘As a flexible doctrine, fair use can adapt to evolving technologies and new situations that may arise, and its long history demonstrates its importance in promoting access to information, future innovation, and creativity.’ While the defence of fair use has flourished in the United States, the adoption of the defence of fair use in other jurisdictions has often been stymied. Professor Peter Jaszi has reflected: ‘We can only wonder (with some bemusement) why some of our most important foreign competitors, like the European Union, haven’t figured out that fair use is, to a great extent, the “secret sauce” of U.S. cultural competitiveness.’ Jurisdictions such as Australia have been at a dismal disadvantage, because they lack the freedoms and flexibilities of the defence of fair use.
Abstract:
Autism and Asperger syndrome (AS) are neurodevelopmental disorders characterised by deficient social and communication skills, as well as restricted, repetitive patterns of behaviour. The language development in individuals with autism is significantly delayed and deficient, whereas in individuals with AS, the structural aspects of language develop quite normally. Both groups, however, have semantic-pragmatic language deficits. The present thesis investigated auditory processing in individuals with autism and AS. In particular, the discrimination of and orienting to speech and non-speech sounds was studied, as well as the abstraction of invariant sound features from speech-sound input. Altogether five studies were conducted with auditory event-related brain potentials (ERP); two studies also included a behavioural sound-identification task. In three studies, the subjects were children with autism, in one study children with AS, and in one study adults with AS. In children with autism, even the early stages of sound encoding were deficient. In addition, these children had altered sound-discrimination processes characterised by enhanced spectral but deficient temporal discrimination. The enhanced pitch discrimination may partly explain the auditory hypersensitivity common in autism, and it may compromise the filtering of relevant auditory information from irrelevant information. Indeed, it was found that when sound discrimination required abstracting invariant features from varying input, children with autism maintained their superiority in pitch processing, but lost it in vowel processing. Finally, involuntary orienting to sound changes was deficient in children with autism in particular with respect to speech sounds. This finding is in agreement with previous studies on autism suggesting deficits in orienting to socially relevant stimuli. In contrast to children with autism, the early stages of sound encoding were fairly unimpaired in children with AS. 
However, sound discrimination and orienting were altered in these children in much the same way as in those with autism, suggesting correspondences in the auditory phenotype of these two disorders, which belong to the same continuum. Unlike children with AS, adults with AS showed enhanced processing of duration changes, suggesting developmental changes in auditory processing in this disorder.
Abstract:
The aim of this paper is to present the evolution of the Francovich doctrine within the European legal order. The first part deals with the gradual development of the ECJ's case law on State liability in damages for breach of EC law. Starting from the seminal Francovich and Brasserie du Pêcheur, the clarification of the criteria set by the Court is attempted with reference to subsequent case law, while issues concerning the extent and form of the compensation owed are also addressed. The second part concerns one of the more recent developments in the field, namely State liability for breaches of Community law attributed to the national judiciary. The Court's ruling in Köbler is examined in connection with two other recent judgments, namely Commission v. Italy of 2003 and Kühne & Heitz, as an attempt by the ECJ to reframe its relationship with national supreme courts and to appropriate for itself the position of supreme court in the European legal order. The implications for State liability claims of the ruling in Commission v. France of 1997 constitute the theme of the third part, where it is submitted that Member States can also be held liable for disregard of Community law by private individuals within their respective territories. In this respect, Schmidberger is viewed as a manifestation of this opinion, with fundamental rights acquiring a new dimension, being invoked by the States against individuals as a shield to liability claims. Finally, the fourth part examines the relationship between the Francovich doctrine and the principle of legal certainty and concludes that the solutions employed by the ECJ have been both predictable and acceptable to the national legal orders.
Keywords: State liability, damages, Francovich, Köbler, Schmidberger
Abstract:
The purpose of this study is to analyse the development and understanding of the idea of consensus in bilateral dialogues among Anglicans, Lutherans and Roman Catholics. The source material consists of representative dialogue documents from the international, regional and national dialogues from the 1960s until 2006. In general, the dialogue documents argue for agreement/consensus based on commonality or compatibility. Each of the three dialogue processes has specific characteristics and formulates its argument in a unique way. The Lutheran-Roman Catholic dialogue has a particular interest in hermeneutical questions. In the early phases, the documents endeavoured to describe the interpretative principles that would allow the churches to proclaim the Gospel together and to identify the foundation on which agreement in the church is based. This investigation ended up proposing a notion of 'basic consensus', which later developed into a form of consensus that seeks to embrace, not to dismiss, differences (so-called 'differentiated consensus'). The Lutheran-Roman Catholic agreement is based on a perspectival understanding of doctrine. The Anglican-Roman Catholic dialogue emphasises the correctness of interpretations. The documents consciously look towards a common future, not the separated past. The dialogue's primary interpretative concept is koinonia. The texts develop a hermeneutics of authoritative teaching that has been described as the 'rule of communion'. The Anglican-Lutheran dialogue is characterised by an instrumental understanding of doctrine. Doctrinal agreement is facilitated by the ideas of coherence, continuity and substantial emphasis in doctrine. The Anglican-Lutheran dialogue proposes a form of 'sufficient consensus' that considers a wide set of doctrinal statements and liturgical practices to determine whether an agreement has been reached which, although not complete, is sufficient for concrete steps towards unity.
Chapter V discusses the current challenges of consensus as an ecumenically viable concept. In this part, I argue that the acceptability of consensus as an ecumenical goal is based not only on the understanding of the church but, more importantly, on the understanding of the nature and function of doctrine. The understanding of doctrine has undergone significant changes during the time of the ecumenical dialogues. The major shift has been from a modern paradigm towards a postmodern paradigm. I conclude with proposals towards a form of consensus that would survive philosophical criticism, be theologically valid and be ecumenically acceptable.
Abstract:
This study focuses on the theory of individual rights that the German theologian Conrad Summenhart (1455-1502) explicated in his massive work Opus septipartitum de contractibus pro foro conscientiae et theologico. The central question to be studied is: how does Summenhart understand the concept of an individual right and its immediate implications? The basic premise of this study is that in Opus septipartitum Summenhart composed a comprehensive theory of individual rights as a contribution to the ongoing medieval discourse on rights. With this rationale, the first part of the study concentrates on earlier discussions of rights as the background for Summenhart's theory. Special attention is paid to the language in which right was defined in terms of 'power'. In the fourteenth century, writers like Hervaeus Natalis and William Ockham maintained that right signifies a power by which the right-holder can use material things licitly. It will also be shown how attempts to describe what is meant by the term 'right' became more precise and refined. Gerson followed the implications that the term 'power' had in natural philosophy and attributed rights to animals and other creatures. To secure right as a normative concept, Gerson utilized the ancient ius suum cuique principle of justice and introduced a definition in which right was seen as derived from justice. The latter part of this study endeavours to reconstruct Summenhart's theory of individual rights in three sections. The first section clarifies Summenhart's discussion of the right of the individual, or the concept of an individual right. Summenhart specified Gerson's description of right as power, making further use of the language of natural philosophy. In this respect, Summenhart's theory managed to bring an end to a particular continuity of thought centered upon a view in which right was understood to signify a power to licit action.
Perhaps the most significant feature of Summenhart's discussion was the way he explicated the implication of liberty present in Gerson's language of rights. Summenhart assimilated libertas to the self-mastery or dominion that, in the economic context of discussion, took the form of (a moderate) self-ownership. Summenhart's discussion also introduced two apparent extensions to Gerson's terminology. First, Summenhart classified right as a relation, and second, he equated right with dominion. It is distinctive of Summenhart's view that he took action as the primary determinant of right: everyone has as much right or dominion in regard to a thing as there are actions it is licit for him to exercise in regard to it. The second section elaborates Summenhart's discussion of the species of dominion, which delivered an answer to the question of what kinds of rights exist, and thereby clarified the implications of the concept of an individual right. The central feature of Summenhart's discussion was his conscious effort to systematize Gerson's language by combining classifications of dominion into a coherent whole. In this respect, his treatment of natural dominion is emblematic. Summenhart constructed the concept of natural dominion by making use of the concepts of foundation (founded on a natural gift) and law (according to the natural law). In defining natural dominion as dominion founded on a natural gift, Summenhart attributed natural dominion to animals and even to heavenly bodies. In discussing man's natural dominion, Summenhart pointed out that natural dominion is not sufficiently identified by its foundation but requires further specification, which Summenhart finds in the idea that natural dominion is appropriate to the subject according to the natural law. This characterization led him to treat God's dominion as natural dominion.
In part, this was due to Summenhart's specific understanding of the natural law, which made reasonableness the primary criterion for natural dominion at the expense of any metaphysical considerations. The third section clarifies Summenhart's discussion of the property rights defined by positive human law. By delivering an account of juridical property rights, Summenhart connected his philosophical and theological theory of rights to the juridical language of his times and demonstrated that his own language of rights was compatible with current juridical terminology. Summenhart prepared his discussion of property rights with an account of the justification for private property, which gave private property a direct and strong natural-law-based justification. Summenhart's discussion of the four property rights (usus, usufructus, proprietas, and possession) aimed at delivering a detailed report of the usage of these concepts in juridical discourse. His discussion was characterized by extensive use of the juridical source texts, becoming more direct and verbatim the more it became entangled with the details of juridical doctrine. At the same time, he promoted his own language of rights, especially by applying the idea of right as a relation. He also made a recognizable effort to systematize the juridical language related to property rights.
Abstract:
In this paper, we present a new feature-based approach for mosaicing camera-captured document images. A novel block-based scheme is employed to ensure that corners can be reliably detected over a wide range of images. A 2-D discrete cosine transform is computed for image blocks defined around each of the detected corners, and a small subset of the coefficients is used as a feature vector. A 2-pass feature matching is performed to establish point correspondences from which the homography relating the input images can be computed. The algorithm is tested on a number of complex document images casually taken with a hand-held camera, yielding convincing results.
Abstract:
The aim of this thesis was to examine the understanding of community in George Lindbeck's The Nature of Doctrine (ND). Intrinsic to this question was an examination of how Lindbeck understands the relation between the text and the world, which meet in a Christian community. Thirdly, this study also aimed at understanding what the persuasiveness of this understanding depends on. The method applied to this task was systematic analysis. The study was conducted by first providing an orientation to the non-theological substance of the ND, which was assumed to be useful with respect to the aim of this study. The study then went on to explore Lindbeck in his own context of postliberal theology in order to see how the ND was received. It also attempted to provide a picture of how the ND relates to Lindbeck as a theologian. The third chapter was a descriptive analysis of the cultural-linguistic perspective, which is understood as being directly proportional to his understanding of community. The fourth chapter was an analysis of how the cultural-linguistic perspective sees the relation between the text and the world. When religion is understood from a cultural-linguistic perspective, it presents itself as a cultural-linguistic entity, which Lindbeck understands as a comprehensive interpretive scheme that structures human experience and one's understanding of oneself and the world in which one lives. It is this entity that shapes the subjectivities of all who are at home in it, which makes participation in the life of a cultural-linguistic entity a condition for understanding it. Religion is above all an external word that moulds and shapes our religious existence and experience. Understanding faith, then, as coming from hearing is something that correlates with the cultural-linguistic depiction of reality. Religion informs us of a religious reality; it does not originate in any way from ourselves.
This externality, linked to the axiomatic nature of religion, is also something that distinguishes Lindbeck sharply from liberalist tendencies, which understand religion as ultimately expressing the prereflective depths of the inner self. Language is the central analogy for understanding the medium in which one moves when inhabiting a cultural-linguistic system, because language is the transmitting medium in which the cultural-linguistic system is embodied. The realism entailed in Lindbeck's understanding of a community is that we are fundamentally on the receiving end when it comes to our identities, whether cultural or religious. We always witness to something. Its persuasiveness rests on the fact that we never exist in an unpersuaded reality. The language of Christ is a self-sustaining and irreducible cultural-linguistic entity, which is ontologically founded upon Christ. It transmits the reality of a new being. The basic relation to the world for a Christian is that of witnessing salvation in Christ: witnessing Christ as the home of hearing the message of salvation, which is the God-willed way. Following this logic, the relation of the world and the text is one of relating to the world from the text, i.e. in Christ, through the word (text), for the world, because it assumes its logic from the way Christ ontologically relates to us.
Abstract:
The (He3, n) reactions on B11, N15, O16, and O18 targets have been studied using a pulsed-beam time-of-flight spectrometer. Special emphasis was placed upon the determination of the excitation energies and properties of states with T = 1 (in Ne18), T = 3/2 (in N13 and F17) and T = 2 (in Ne20). The identification of the T = 3/2 and T = 2 levels is based on the structure of these states as revealed by intensities and shapes of angular distributions. The reactions are interpreted in terms of double stripping theory. Angular distributions have been compared with plane and distorted wave stripping theories. Results for the four reactions are summarized below:
1) O16 (He3, n). The reaction has been studied at incident energies up to 13.5 MeV and two previously unreported levels in Ne18 were observed at Ex = 4.55 ± .015 MeV (Γ = 70 ± 30 keV) and Ex = 5.14 ± .018 MeV (Γ = 100 ± 40 keV).
2) B11 (He3, n). The reaction has been studied at incident energies up to 13.5 MeV. Three T = 3/2 levels in N13 have been identified at Ex = 15.068 ± .008 MeV (Γ < 15 keV), Ex = 18.44 ± .04 MeV, and Ex = 18.98 ± .02 MeV (Γ = 40 ± 20 keV).
3) N15 (He3, n). The reaction has been studied at incident energies up to 11.88 MeV. T = 3/2 levels in F17 have been identified at Ex = 11.195 ± .007 MeV (Γ < 20 keV), Ex = 12.540 ± .010 MeV (Γ < 25 keV), and Ex = 13.095 ± .009 MeV (Γ < 25 keV).
4) O18 (He3, n). The reaction has been studied at incident energies up to 9.0 MeV. The excitation energy of the lowest T = 2 level in Ne20 has been found to be 16.730 ± .006 MeV (Γ < 20 keV).
Angular distributions of the transitions leading to the above higher-isospin states are well described by double stripping theory. Analog correspondences are established by comparing the present results with recent studies of (t, p) and (He3, p) reactions on the same targets.
Abstract:
We present a matching framework to find robust correspondences between image features by considering the spatial information between them. To achieve this, we define spatial constraints on the relative orientation and change in scale between pairs of features. A pairwise similarity score, which measures the similarity of features based on these spatial constraints, is considered. The pairwise similarity scores for all pairs of candidate correspondences are then accumulated in a 2-D similarity space. Robust correspondences can be found by searching for clusters in the similarity space, since actual correspondences are expected to form clusters that satisfy similar spatial constraints in this space. As it is difficult to achieve reliable and consistent estimates of scale and orientation, an additional contribution is that these parameters do not need to be determined at the interest point detection stage, which differs from conventional methods. Polar matching of dual-tree complex wavelet transform features is used, since it fits naturally into the framework with the defined spatial constraints. Our tests show that the proposed framework is capable of producing robust correspondences with higher correspondence ratios and reasonable computational efficiency, compared to other well-known algorithms.
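The clustering idea, accumulating relative-orientation and relative-scale evidence in a 2-D similarity space and keeping the candidates that fall in the densest cell, can be sketched as a simplified Hough-style variant. The binning, the input format, and all names below are simplifying assumptions, not the authors' exact pairwise scoring scheme.

```python
import numpy as np

def cluster_correspondences(cands, bins=36, scale_bins=20):
    """Vote candidate correspondences into a 2-D (relative orientation,
    log scale ratio) space and keep those in the densest cell.
    `cands` holds (theta1, s1, theta2, s2) tuples: orientation and scale
    of a feature in image 1 and of its candidate match in image 2."""
    votes = np.zeros((bins, scale_bins))
    cells = []
    for t1, s1, t2, s2 in cands:
        dtheta = (t2 - t1) % (2 * np.pi)           # relative orientation
        dscale = np.log2(s2 / s1)                  # log scale ratio
        bi = int(dtheta / (2 * np.pi) * bins) % bins
        # map log ratios in [-2, 2] onto the scale axis
        bj = int(np.clip((dscale + 2) / 4 * scale_bins, 0, scale_bins - 1))
        votes[bi, bj] += 1
        cells.append((bi, bj))
    best = np.unravel_index(np.argmax(votes), votes.shape)
    return [c for c, cell in zip(cands, cells) if cell == best]

pairs = [(t, 1.0, t + 0.5, 2.0) for t in (0.0, 0.3, 0.6)]
kept = cluster_correspondences(pairs + [(0.0, 1.0, 3.0, 0.5)])
```

True correspondences share a consistent rotation and scale change between the two images, so they land in the same cell, while mismatches scatter across the space.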
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of required computational time impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution that finds F by combining two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction-site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
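The second ingredient, the normalized eight-point algorithm, admits a compact numpy sketch; the SURF matching stage is omitted, and the variable names and the synthetic test geometry are ours, not the paper's.

```python
import numpy as np

def normalize(pts):
    """Translate the centroid to the origin and scale so the mean
    distance from the origin is sqrt(2) (Hartley normalization)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def fundamental_matrix(p1, p2):
    """Normalized eight-point estimate of F with x2^T F x1 ~ 0."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of the design matrix
    U, S, Vt = np.linalg.svd(F)       # enforce the rank-2 constraint
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                 # undo the normalization
    return F / np.linalg.norm(F)

# synthetic two-view setup: random 3D points seen before and after a translation
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (12, 3)) + np.array([0.0, 0.0, 5.0])
p1 = X[:, :2] / X[:, 2:]
Xc = X + np.array([0.5, 0.1, 0.0])    # points in the second camera's frame
p2 = Xc[:, :2] / Xc[:, 2:]
F = fundamental_matrix(p1, p2)
```

With noiseless synthetic correspondences the epipolar residuals x2^T F x1 vanish to machine precision; on real SURF matches an outlier-rejection step would still be advisable.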