980 results for Syntactic Projection


Relevance: 10.00%

Abstract:

What are we to make of a young artist who presents herself sometimes as Medusa or an Oriental beauty, sometimes as a Buddha, a weightlifter, or Gretchen? What are we to make of the doubled "checkerboard" self-portrait that echoes the portrait of a woman "in stripes," itself also doubled? How are we to decode photomontages, each more enigmatic than the last, conceived in collaboration with this same woman "in stripes" and interleaved in a text entitled "Aveux non avenus"? What does "to love" mean when the beloved is one's alter ego? Can this love story between the self and the projection of the self avoid the abyss? This article proposes a reflection on the notion of "loving" in Claude Cahun and Suzanne Malherbe, alias Marcel Moore, by interrogating, on the one hand, the "narcissistic" and self-reflexive dimension revealed by most of the self-portraits, the autobiography, and the photomontages, and, on the other hand, lesbian desire, stigmatized at the time as a "false mask." It then turns to the symbiotic couple formed by the author-photographer Cahun and the graphic artist-painter Moore, an artistic symbiosis that allowed them to create works in their own image.

Relevance: 10.00%

Abstract:

This work is aimed at building an adaptable frame-based system for processing Dravidian languages. There are about 17 languages in this family, spoken by the people of South India. Karaka relations are among the most important features of Indian languages: they are the semantico-syntactic relations between verbs and the other related constituents in a sentence. The karaka relations and surface case endings are analyzed for meaning extraction. This approach is comparable with the broad class of case-based grammars. The efficiency of the approach is put to the test in two applications: machine translation, and a natural language interface (NLI) for information retrieval from databases. The system mainly consists of a morphological analyzer, a local word grouper, a parser for the source language and a sentence generator for the target language. The main contribution of this work is an elegant and compact account of the relation between vibhakthi (case endings) and karaka roles in Dravidian languages. The same basic mapping also explains both simple and complex sentences in these languages, which suggests that the solution is not just ad hoc but has a deeper underlying unity. The methodology could be extended to other free word order languages. Since the frames designed for meaning representation are general, they are adaptable to other languages in this group and to other applications.
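The vibhakthi-to-karaka mapping at the heart of this account can be pictured as a suffix lookup. The suffixes and roles below are a hypothetical illustration only; real Dravidian case paradigms are far richer, and the thesis's actual tables are not reproduced here.

```python
# Hypothetical vibhakthi (case-ending) to karaka-role table; illustrative
# suffixes, not the thesis's actual Malayalam/Tamil paradigms.
VIBHAKTI_TO_KARAKA = {
    "":   "karta",       # bare nominative -> agent
    "e":  "karma",       # accusative-like ending -> object
    "il": "adhikarana",  # locative-like ending -> locus
    "al": "karana",      # instrumental-like ending -> instrument
}

def karaka_role(word):
    """Return (stem, karaka role) by longest-suffix match against the table."""
    for suffix in sorted(VIBHAKTI_TO_KARAKA, key=len, reverse=True):
        if suffix and word.endswith(suffix):
            return word[:-len(suffix)], VIBHAKTI_TO_KARAKA[suffix]
    return word, VIBHAKTI_TO_KARAKA[""]  # no overt ending: default role
```

In the real system this lookup would sit downstream of the morphological analyzer and local word grouper, which supply the segmented endings.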

Relevance: 10.00%

Abstract:

A new procedure for the classification of lower-case English language characters is presented in this work. The character image is binarised, and the binary image is further divided into sixteen smaller areas, called cells. Each cell is assigned a name depending on the contour present in the cell and the occupancy of the image contour within the cell. A data reduction procedure called filtering is adopted to eliminate undesirable redundant information and reduce complexity during further processing steps. The filtered data is fed into a primitive extractor, where the primitives are extracted. Syntactic methods are employed for the classification of the character. A decision tree is used for the interaction of the various components in the scheme, such as primitive extraction and character recognition. A character is recognized by the primitive-by-primitive construction of its description. Open-ended inventories are used for including variants of the characters and for adding new members to the general class. Computer implementation of the proposal is discussed at the end using handwritten character samples. Results are analyzed, suggestions for future studies are made, and the advantages of the proposal are discussed in detail.
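The cell decomposition can be sketched as follows, assuming the binarised image arrives as a 2D list of 0/1 pixels. Per-cell occupancy is used here as a simple stand-in feature; the scheme itself names cells by contour shape as well.

```python
# Split a binarised character image into a 4x4 grid of sixteen cells and
# summarise each cell by its occupancy (fraction of foreground pixels).
def cell_occupancy(image, grid=4):
    rows, cols = len(image), len(image[0])
    ch, cw = rows // grid, cols // grid
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * ch, (gy + 1) * ch)
                     for x in range(gx * cw, (gx + 1) * cw)]
            cells.append(sum(block) / len(block))
    return cells  # 16 occupancy values in row-major cell order
```

These per-cell summaries are the kind of reduced data the filtering step would pass on to the primitive extractor.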

Relevance: 10.00%

Abstract:

The thesis, entitled Inventory Management in Public Sector Electrical Industry in Kerala, investigates inventory management in the public sector electrical industry in Kerala and suggests methods to improve its efficiency. Various aspects of inventory management and its scope and need in industry are detailed. The objectives of the study are to obtain an overall view of the system of inventory management and to assess the positions and levels of inventory. It analyzes the inventory management policies and practices and the organizational set-up for materials in the electrical undertakings. The study examines the liquidity of the electrical undertakings as well as the techniques of inventory management in the electrical industry in Kerala. The hypotheses state that the existing organizational systems and practices are inadequate to ensure efficient management of inventories in the electrical industry, that the introduction of scientific inventory techniques has a favourable effect on the working of inventory departments, and that the financial performance of the public sector electrical undertakings is not at all satisfactory on account of high raw material costs, heavy borrowings and huge interest burdens. The scope of this study is limited to the assessment of savings in inventories of electrical products due to inventory management. The methodology is to project the cost reduction of the inventory department on the basis of the data collected and to validate this projection with the aid of analysis and a survey. The limitations of the study are that the data obtained relate to the period 1989-90 and earlier, so the current position is not available, and that uniform norms cannot be applied to evaluate different inventory management organisations.
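As one concrete example of the kind of "scientific inventory techniques" the hypotheses refer to, the classical Economic Order Quantity can be computed directly. The formula is standard textbook material; the figures in the test are hypothetical, not drawn from the thesis data.

```python
import math

# Classical Economic Order Quantity: the order size that minimises the sum
# of annual ordering and holding costs. Illustrative only; not claimed to
# be the thesis's own model.
def economic_order_quantity(annual_demand, order_cost, holding_cost):
    """EOQ = sqrt(2 * D * S / H), with D units/year, S cost/order,
    H holding cost per unit per year."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)
```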

Relevance: 10.00%

Abstract:

This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules of a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out of the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module and a target language sentence generator. The generation of the list of part-of-speech tags, chunk tags and the hierarchical dependencies among the chunks required for the translation process has also been done. In the development process, the major goals are (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling the transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed; the grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, the design of efficient data structures and run-time frequency-based rearrangement of the grammar, which substantially reduces the parsing and generation time, are required. The space requirement also has to be minimised.
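The module chain can be sketched as a pipeline. Every stage below is a placeholder stub (the real morphological analyzer, parser, transfer module and generator are substantial systems); the sketch only makes the verb-final-to-SVO transfer step concrete, and the sample words are left untranslated.

```python
# Schematic transfer-based MT pipeline: morphological analysis -> parsing ->
# syntactic structure transfer -> target sentence generation. All stages are
# toy stand-ins for the thesis's modules.
def analyze_morphology(sentence):
    return sentence.split()  # stand-in: the real module yields stems + tags

def parse(tokens):
    # Malayalam is verb-final, so treat the last token as the verb.
    return {"verb": tokens[-1], "args": tokens[:-1]}

def transfer(tree):
    # Reorder the SOV source structure toward SVO English order.
    subj, *rest = tree["args"]
    return [subj, tree["verb"]] + rest

def generate(words):
    return " ".join(words)

def translate(sentence):
    return generate(transfer(parse(analyze_morphology(sentence))))
```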

Relevance: 10.00%

Abstract:

Author identification is the problem of identifying, from a given set of authors, the author of an anonymous text or of a text whose authorship is in doubt. The works of different authors are strongly distinguished by quantifiable features of the text. This paper deals with attempts at identifying the most likely author of a text in Malayalam from a list of authors. Malayalam is a Dravidian language with an agglutinative nature, and few successful tools have been developed to extract syntactic and semantic features of texts in this language. We have done a detailed study of the various stylometric features that can be used to form an author's profile, and have found that the frequencies of word collocations can clearly distinguish an author in a highly inflectional language such as Malayalam. In our work we extract the word-level and character-level features present in the text to characterize the style of an author. Our first step was to create a profile for each of the candidate authors whose texts were available to us, first from word n-gram frequencies and then from variable-length character n-gram frequencies. The profiles of the set of authors under consideration thus formed were then compared with the features extracted from the anonymous text to suggest the most likely author.
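The profile-matching step can be sketched with character n-gram frequency vectors compared by cosine similarity. The cosine measure and the toy English texts in the test are illustrative assumptions; the paper's experiments use Malayalam corpora and its exact similarity measure may differ.

```python
from collections import Counter

# Character-n-gram author profiling: each profile is an n-gram frequency
# vector; an anonymous text is attributed to the author whose profile is
# most similar under cosine similarity.
def char_ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = lambda v: sum(c * c for c in v.values()) ** 0.5
    return dot / (norm(p) * norm(q))

def most_likely_author(anonymous, profiles, n=3):
    target = char_ngrams(anonymous, n)
    return max(profiles, key=lambda a: cosine(target, profiles[a]))
```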

Relevance: 10.00%

Abstract:

Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data. The term data mining refers to the process of performing exploratory analysis on the data and building models of it. To infer patterns from data, data mining involves approaches such as association rule mining, classification techniques and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between the data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform one. This research study explores various techniques for converting mixed data sets to a numerical equivalent, so as to make them suitable for statistical and similar algorithms. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated on a crime data set. The thesis also proposes an extension of a well-known algorithm for handling mixed data types to deal with data sets having only categorical data; the proposed conversion has been validated on a data set concerning breast cancer. A further issue with the clustering process is the visualization of the output: geometric techniques such as scatter plots or projection plots are available, but none of them displays the result as a projection of the whole database; they instead support only attribute-pairwise analysis.
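One simple instance of such a conversion is frequency encoding of categorical columns, which replaces each category by its relative frequency so that distance-based clustering can be applied. This particular recipe is an illustration of the general idea, not the algorithm the thesis proposes.

```python
from collections import Counter

# Frequency encoding: one simple way to turn categorical attributes into
# numeric ones so that a mixed dataset becomes uniformly numeric.
def frequency_encode(column):
    """Replace each category by its relative frequency in the column."""
    counts = Counter(column)
    total = len(column)
    return [counts[v] / total for v in column]

def encode_mixed(rows, categorical_indices):
    """Encode the given categorical columns; cast the rest to float."""
    columns = list(zip(*rows))
    encoded = [frequency_encode(col) if i in categorical_indices
               else [float(v) for v in col]
               for i, col in enumerate(columns)]
    return [list(r) for r in zip(*encoded)]
```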

Relevance: 10.00%

Abstract:

In natural languages with a high degree of word-order freedom, syntactic phenomena like dependencies (subordinations) or valencies do not depend on the word order (i.e. on the individual positions of the individual words). This means that some permutations of sentences of these languages are in some (important) sense syntactically equivalent. We study this phenomenon in a formal way. Various types of j-monotonicity for restarting automata can serve as parameters for the degree of word-order freedom and for the complexity of word order in sentences (languages). We combine two types of parameters on computations of restarting automata: 1. the degree of j-monotonicity, and 2. the number of rewrites per cycle. We study these notions formally in order to obtain an adequate tool for modelling and comparing formal descriptions of (natural) languages with different degrees of word-order freedom and word-order complexity.
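The degree of j-monotonicity of a single computation is easy to compute once its sequence of r-distances (distances from the rewrite position to the right end of the tape) is known: a computation is j-monotone when that sequence splits into j non-increasing subsequences, and by a Dilworth/patience-sorting duality the minimal such j equals the length of the longest strictly increasing subsequence. A sketch under those definitions:

```python
# Minimal j such that the r-distance sequence of a computation splits into
# j non-increasing subsequences = length of the longest strictly increasing
# subsequence (Dilworth-style duality). O(n^2) DP for clarity.
def degree_of_monotonicity(distances):
    if not distances:
        return 0
    best = [1] * len(distances)  # best[i]: longest strictly increasing
    for i, d in enumerate(distances):  # subsequence ending at position i
        for j in range(i):
            if distances[j] < d:
                best[i] = max(best[i], best[j] + 1)
    return max(best)
```

A monotone (1-monotone) computation thus has non-increasing r-distances throughout.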

Relevance: 10.00%

Abstract:

The collection of X chromosome insertion (PX) lethal lines, which was isolated from a screen for essential genes on the X chromosome, was characterized by cloning the insertion sites, mapping the sites within genomic DNA and determining the associated reporter gene expression patterns. The established STSs flanking the P element insertion sites were submitted to the EMBL nucleotide databases, and their in situ data, together with the enhancer trap expression patterns, have been deposited in the FlyView database. The characterized lines are now available to the scientific community for a detailed analysis of the newly established lethal gene functions. One of the isolated genes on the X chromosome was the Drosophila gene Wnt5 (DWnt5). From two independent screens, one lethal and three homozygous viable alleles were recovered, allowing the identification of two distinct functions for DWnt5 in the fly. Observations on the developing nervous system of mutant embryos suggest that DWnt5 activity affects the axon projection pattern: elevated levels of DWNT5 activity in the midline cells of the central nervous system cause improper establishment and maintenance of the axonal pathways. Our analysis of the expression and mutant phenotype indicates that DWnt5 functions in a process needed for the proper organization of the nervous system. A second and novel function of DWnt5 is the control of body size through regulation of cell number rather than cell size. Moreover, experimentally increased DWnt5 levels in a post-mitotic region of the eye imaginal disc cause abnormal cell cycle progression, resulting in additional ommatidia in the adult eye compared to wild type. The increased cell number and the effects on the cell cycle after exposure to high DWNT5 levels are the result of a failure to downregulate cyclin B and hence of the unsuccessful establishment of a G1 arrest.

Relevance: 10.00%

Abstract:

Restarting automata can be seen as analytical variants of classical automata as well as of regulated rewriting systems. We study a measure for the degree of nondeterminism of (context-free) languages in terms of deterministic restarting automata that are (strongly) lexicalized. This measure is based on the number of auxiliary symbols (categories) used for recognizing a language as the projection of its characteristic language onto its input alphabet. This type of recognition is typical for analysis by reduction, a method used in linguistics for the creation and verification of formal descriptions of natural languages. Our main results establish a hierarchy of classes of context-free languages and two hierarchies of classes of non-context-free languages that are based on the expansion factor of a language.
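The projection onto the input alphabet is simply the homomorphism that erases the auxiliary symbols from a word of the characteristic language. A minimal sketch (the bracketed category names below are invented for illustration):

```python
# Project a word of the characteristic language, written over input symbols
# plus auxiliary symbols (categories), onto the input alphabet by erasing
# every auxiliary symbol.
def project(word, input_alphabet):
    return [symbol for symbol in word if symbol in input_alphabet]
```

A language is then recognized with k auxiliary symbols when it is the image under this projection of a characteristic language using k categories, which is the quantity the nondeterminism measure counts.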

Relevance: 10.00%

Abstract:

Recently Itatani et al. [Nature 432, 876 (2004)] introduced the new concept of molecular orbital tomography, where high harmonic generation (HHG) is used to image electronic wave functions. We describe an alternative reconstruction form, using momentum instead of dipole matrix elements for the electron recombination step in HHG. We show that using this velocity-form reconstruction, one obtains better results than with the original length-form reconstruction. We provide numerical evidence for our claim that one has to resort to extremely short pulses to perform the reconstruction for an orbital with arbitrary symmetry. The numerical evidence is based on the exact solution of the time-dependent Schrödinger equation for 2D model systems to simulate the experiment. Furthermore, we show that in the case of cylindrically symmetric orbitals, such as the N2 orbital that was reconstructed in the original work, one can obtain the full 3D wave function and not only a 2D projection of it.
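The length- and velocity-form reconstructions differ in which recombination matrix element is inverted; the two are linked by the standard textbook relation between dipole and momentum matrix elements. A sketch in atomic units, where sign and phase conventions vary between papers, so this is not necessarily the paper's exact working equation:

```latex
% Textbook relation between the dipole (length-form) and momentum
% (velocity-form) recombination matrix elements, atomic units, with
% \omega the emitted photon energy for a returning electron of energy
% E_k and ionization potential I_p:
\langle \psi_0 \,|\, \hat{p} \,|\, \mathbf{k} \rangle
  \;=\; i\,\omega\, \langle \psi_0 \,|\, \hat{x} \,|\, \mathbf{k} \rangle ,
\qquad \omega = E_k + I_p .
```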

Relevance: 10.00%

Abstract:

A program is presented for the construction of relativistic symmetry-adapted molecular basis functions. It is applicable to 36 finite double point groups. The algorithm, based on the projection operator method, automatically generates linearly independent basis sets. Time reversal invariance is included in the program, leading to additional selection rules in the non-relativistic limit.
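The projection operator method the program builds on can be stated in its textbook form; for the finite double point groups the sum runs over the double-group elements with the corresponding irreducible-representation characters. A sketch of the general formula, not of the program's internal conventions:

```latex
% Character projection onto irreducible representation \Gamma of a group G
% of order |G|, with dimension d_\Gamma, characters \chi^{(\Gamma)}(R),
% and symmetry operations \hat{O}_R:
\hat{P}^{(\Gamma)} \;=\; \frac{d_\Gamma}{|G|}
    \sum_{R \in G} \chi^{(\Gamma)}(R)^{*}\, \hat{O}_R .
% Applying \hat{P}^{(\Gamma)} to trial atomic functions yields (possibly
% linearly dependent) symmetry-adapted combinations, which are then
% reduced to a linearly independent set.
```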

Relevance: 10.00%

Abstract:

Relativistic density functional theory is widely applied in molecular calculations with heavy atoms, where relativistic and correlation effects are on the same footing. The variational stability of the Dirac Hamiltonian has been a very important field of research since the beginning of relativistic molecular calculations, alongside efforts towards accuracy, efficiency and density functional formulation. Approximate one- or two-component methods and the search for suitable basis sets are the two major means of obtaining good projection power against the negative continuum. In the present work, the minimax two-component spinor linear combination of atomic orbitals (LCAO) is applied to both light and super-heavy one-electron systems, providing good approximations over the whole energy spectrum, close to the benchmark minimax finite element method (FEM) values, and free of the spurious and contaminated states present in the traditional four-component spinor LCAO. The variational stability ensures that the minimax LCAO is bounded from below. New balanced basis sets, kinetic and potential defect balanced (TVDB), following the minimax idea, are applied with the Dirac Hamiltonian. Their performance on the same super-heavy one-electron quasi-molecules also shows very good projection capability against variational collapse, with the minimax LCAO taken as the best projection for comparison. The TVDB method has twice as many basis coefficients as the four-component spinor LCAO, but it is linear and overcomes the minimax method's disadvantage of great time consumption. Calculations with both the TVDB method and the traditional LCAO method on dimers of the group 11 elements of the periodic table investigate their difference. New basis sets, bigger than in previous research, are constructed, achieving high accuracy within the functionals involved. The difference in total energy is much smaller than the basis incompleteness error, showing that the traditional four-spinor LCAO retains enough projection power from the numerical atomic orbitals and is suitable for research in relativistic quantum chemistry. In scattering investigations carried out for the same comparison purpose, the failure of the traditional LCAO method to provide a stable spectrum with increasing basis set size contrasts with the TVDB method, which contains no spurious states even without pre-orthogonalization of the basis sets. Keeping the same conditions, including the accuracy of the matrix elements, shows that the variational instability prevails over the linear dependence of the basis sets. The success of the TVDB method demonstrates its capability not only in relativistic quantum chemistry but also in scattering and under the influence of strong external electric and magnetic fields. The good accuracy in total energy with large basis sets and the good projection property encourage wider research on different molecules, with better functionals, and on small effects.
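The minimax idea that the LCAO discretizes can be stated compactly: the electronic ground state of the Dirac Hamiltonian is a minimum over the large component and a maximum over the small component of the Rayleigh quotient. A sketch of the principle in its usual (Talman-style) form, not of this work's exact functional:

```latex
% Minimax principle for the Dirac Hamiltonian \hat{H}_D with four-spinor
% \psi split into large (\varphi^L) and small (\varphi^S) two-spinor
% components:
E \;=\; \min_{\varphi^{L}} \; \max_{\varphi^{S}}
    \frac{\langle \psi \,|\, \hat{H}_D \,|\, \psi \rangle}
         {\langle \psi \,|\, \psi \rangle} ,
\qquad
\psi = \begin{pmatrix} \varphi^{L} \\ \varphi^{S} \end{pmatrix} .
% The inner maximization supplies the projection against the negative
% continuum; the outer minimization keeps the energy bounded from below.
```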

Relevance: 10.00%

Abstract:

The method of least squares is due to Carl Friedrich Gauss; the Gram-Schmidt orthogonalization method is of much more recent date. A method for solving least squares problems is developed which automatically gives rise to the Gram-Schmidt orthogonalizers. Given these orthogonalizers, an induction proof is available for solving least squares problems.
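The connection can be made concrete with a small least squares solver built on classical Gram-Schmidt: orthogonalize the columns of A into Q and R, then back-substitute in R x = Qᵀ b. This is a sketch for exposition (classical rather than the numerically preferable modified Gram-Schmidt), not the paper's own derivation.

```python
# Solve the overdetermined system A x ~ b (m >= n, full column rank) via
# classical Gram-Schmidt QR factorization and back-substitution.
def gram_schmidt_lstsq(A, b):
    m, n = len(A), len(A[0])
    cols = [[A[r][c] for r in range(m)] for c in range(n)]
    Q = []                                # orthonormalized columns of A
    R = [[0.0] * n for _ in range(n)]     # upper-triangular coefficients
    for j in range(n):
        v = cols[j][:]
        for i in range(j):                # subtract projections onto Q[0..j-1]
            R[i][j] = sum(Q[i][r] * cols[j][r] for r in range(m))
            v = [v[r] - R[i][j] * Q[i][r] for r in range(m)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    qtb = [sum(Q[i][r] * b[r] for r in range(m)) for i in range(n)]
    x = [0.0] * n                         # back-substitution in R x = Q^T b
    for i in range(n - 1, -1, -1):
        x[i] = (qtb[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x
```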

Relevance: 10.00%

Abstract:

The present work was written during my time as a research assistant in the Computer Architecture (Technische Informatik) group at the University of Kassel. This work presents the design and implementation of a cluster-based distributed scene graph. Rather than developing a new scene graph from scratch, an existing scene graph, OpenSceneGraph, was used as the basis for the development of the distributed scene graph, and cluster support was integrated into it. In extending OpenSceneGraph, particular care was taken to leave the existing scene graph as unchanged as possible; in addition, the use and integration of external cluster-based software packages was avoided where possible. For the distribution of OpenSceneGraph, a dedicated socket-based communication layer was developed and integrated into OpenSceneGraph. This communication layer was used to make sort-first- and sort-last-based visualization available in OpenSceneGraph. Extending OpenSceneGraph with cluster support made it possible to drive arbitrary projection systems such as a CAVE. To drive a CAVE, various input devices as well as tracking were integrated into OpenSceneGraph via VRPN. Because the devices are connected through VRPN, these input devices can also be used in the other cluster operating modes, such as a segmented display. The distribution of the data across the cluster was kept separate from the core of OpenSceneGraph, so that any OpenSceneGraph-based application can be run on a cluster at any time without elaborate modifications. The application developer is therefore not hindered and does not have to distinguish between cluster-based and standalone applications.
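The sort-first mode rests on partitioning screen space before rendering, so that each cluster node rasterises only its own region. The vertical-strip tiling below is a deliberately simple sketch of that idea, not OpenSceneGraph's actual cluster API:

```python
# Sort-first screen-space partitioning: split a viewport into one vertical
# strip per cluster node; each node renders only its (x, y, width, height)
# tile. The last strip absorbs any remainder pixels.
def sort_first_tiles(width, height, nodes):
    strip = width // nodes
    return [(i * strip, 0,
             width - i * strip if i == nodes - 1 else strip,
             height)
            for i in range(nodes)]
```

Sort-last, by contrast, would let every node render its share of the scene over the full viewport and composite the results by depth afterwards.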