996 results for Relative complexity
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size one only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. We study the gap-complexity of these automata. The membership problem for a language that is accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
Abstract:
Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. Here we study a new type of restarting automaton, the so-called t-sRL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size 1 only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. We focus on the descriptional complexity of these automata, establishing two complexity measures that are both based on the description of t-sRL-automata in terms of so-called meta-instructions. We present some hierarchy results as well as a non-recursive trade-off between deterministic 2-sRL-automata and finite-state acceptors.
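The two abstracts above describe machines with a read/write window of size one that perform up to t delete steps per cycle and accept once a minimal word remains. The sketch below is only a hedged illustration of that behaviour: the meta-instruction format (a list of symbols that one cycle removes, scanned left to right) and the helper names are assumptions made for this toy example, not the formal definitions from the papers.

```python
# Hedged illustration of a restarting automaton with window size 1 that may
# delete up to t symbols per cycle and accepts when the remaining word is
# "minimal" (here: empty).  The meta-instruction format is an assumption
# made for this toy example, not the papers' definition.

def one_cycle(word, symbols_to_delete):
    """Try to delete the given symbols, left to right, one occurrence each.
    Returns the shortened word, or None if the cycle is not applicable."""
    result = list(word)
    position = 0
    for symbol in symbols_to_delete:
        # the head moves right over the tape, window of size 1
        while position < len(result) and result[position] != symbol:
            position += 1
        if position == len(result):
            return None          # required symbol not found: cycle fails
        del result[position]     # delete step (rewrite with the empty word)
    return "".join(result)

def accepts(word, meta_instructions, minimal_words=("",)):
    """Depth-first search over cycles; accept once a minimal word remains."""
    if word in minimal_words:
        return True
    for symbols in meta_instructions:
        shorter = one_cycle(word, symbols)
        if shorter is not None and accepts(shorter, meta_instructions, minimal_words):
            return True
    return False

# A 3-sRL-style toy machine: every cycle deletes one a, then one b, then one c
# further to the right.  It accepts, e.g., all words a^n b^n c^n (and other
# words with the same left-to-right matching structure).
machine = [("a", "b", "c")]
print(accepts("aabbcc", machine))   # True  -- reducible to the empty word
print(accepts("aabbc", machine))    # False -- a final cycle cannot find a c
```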
Abstract:
Let G be a finite group and K a number field or a p-adic field with ring of integers O_K. In the first part of the manuscript we present an algorithm that computes the relative algebraic K-group K_0(O_K[G], K) as an abstract abelian group. We also solve the discrete logarithm problem, both in K_0(O_K[G], K) and in the locally free class group cl(O_K[G]). All algorithms have been implemented in MAGMA for the case K = ℚ. In the second part of the manuscript we prove formulae for the torsion subgroup of K_0(ℤ[G], ℚ) for large classes of dihedral and quaternion groups.
Abstract:
This work focuses on the analysis of the influence of the environment on the relative biological effectiveness (RBE) of carbon ions at the molecular level. Because of the high relevance of the RBE for medical applications, such as tumour therapy, and for radiation protection in space, DNA damage has been investigated in order to understand the biological efficiency of heavy ion radiation. The contribution of this study to radiobiology research consists in the analysis of plasmid DNA damage induced by carbon ion radiation in biochemical buffer environments, as well as in the calculation of the RBE of carbon ions at the DNA level by means of scanning force microscopy (SFM). In order to study the DNA damage, a new approach based on SFM has been developed alongside the common electrophoresis method. The SFM method allows direct visualisation and measurement of individual DNA fragments with an accuracy of several nanometres. In addition, the results obtained by SFM and by agarose gel electrophoresis have been compared in the present study. Sparsely ionising radiation, such as X-rays, and densely ionising radiation, such as carbon ions, have been used to irradiate plasmid DNA in tris(hydroxymethyl)aminomethane (Tris buffer) and 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES buffer) environments. These buffer environments exhibit different scavenging capacities for the hydroxyl radical (HO•), which is produced by the ionisation of water and plays the major role in the indirect DNA damage processes. Fragment distributions have been measured by SFM over a large length range and, as expected, a significantly higher degree of DNA damage was observed with increasing dose. A higher number of double-strand breaks (DSBs) was also observed after irradiation with carbon ions compared to X-ray irradiation. The results obtained from the SFM measurements show that both types of radiation induce multiple fragmentation of the plasmid DNA in the dose range from D = 250 Gy to D = 1500 Gy. Using Tris environments at two different concentrations, a decrease of the relative biological effectiveness with increasing Tris concentration was observed, demonstrating the radioprotective behaviour of the Tris buffer solution. In contrast, a lower scavenging capacity for all other free radicals and ions produced by the ionisation of water was registered for the HEPES buffer compared to the Tris solution. This is reflected in the higher RBE values deduced from SFM and gel electrophoresis measurements after irradiation of the plasmid DNA in a 20 mM HEPES environment compared to a 92 mM Tris solution. These results show that the HEPES and Tris environments play a major role in preventing the indirect DNA damage induced by ionising radiation and thereby influence the relative biological effectiveness of heavy ion radiation. In general, the RBE calculated from the SFM measurements is higher than that obtained from the gel electrophoresis data for plasmids irradiated in all environments. Using the large data set obtained from the SFM measurements, it was possible to calculate the survival rate over a wider range, from 88% to 98%, whereas for the gel electrophoresis measurements the survival rates could only be calculated for values between 96% and 99%. While the gel electrophoresis measurements provide information only about the percentage of plasmid DNA that suffered a single DSB, SFM can count the small plasmid fragments produced by multiple DSBs induced in a single plasmid. Consequently, SFM provides more detailed information about the number of induced DSBs than gel electrophoresis, and therefore the RBE can be calculated more accurately. Thus, SFM has proven to be a more precise method for characterising, at the molecular level, the DNA damage induced by ionising radiation.
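The RBE itself is defined as a ratio of doses that produce the same biological effect. The abstract does not report the underlying numbers, so the sketch below uses hypothetical dose-response data purely to show how an RBE value would be extracted from SFM-style measurements: fit an exponential model to the surviving (undamaged) plasmid fraction versus dose, then take the ratio of X-ray to carbon-ion dose at equal effect.

```python
# Hedged sketch: how an RBE value can be computed from dose-response data.
# The doses and surviving fractions below are hypothetical placeholders,
# not measurements from the thesis.
import numpy as np

def effect_slope(doses_gy, surviving_fraction):
    """Fit -ln(S) = k * D (single-hit exponential model) and return k."""
    doses = np.asarray(doses_gy, dtype=float)
    effect = -np.log(np.asarray(surviving_fraction, dtype=float))
    # least-squares slope through the origin
    return float(np.sum(doses * effect) / np.sum(doses ** 2))

# hypothetical fractions of undamaged (supercoiled) plasmid after irradiation
xray_doses,   xray_survival   = [250, 500, 1000, 1500], [0.98, 0.96, 0.93, 0.89]
carbon_doses, carbon_survival = [250, 500, 1000, 1500], [0.97, 0.94, 0.88, 0.82]

k_x = effect_slope(xray_doses, xray_survival)
k_c = effect_slope(carbon_doses, carbon_survival)

# For an exponential response, equal effect means k_x * D_x = k_c * D_c,
# so RBE = D_x / D_c = k_c / k_x.
rbe = k_c / k_x
print(f"RBE (carbon vs. X-rays, hypothetical data): {rbe:.2f}")
```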
Abstract:
This thesis deals with the main aspects of the detection of medically relevant trace molecules by means of intracavity laser absorption spectroscopy (ICLAS), namely the requirements for a highly sensitive, highly selective, low-cost, and compact sensor. A novel two-mode semiconductor laser sensor is demonstrated. Its operation principle is based on the competition between these two modes. The sensor sensitivity is improved when the sample is placed inside the two-mode laser cavity and the two modes compete. The effects of mode competition in ICLAS are discussed theoretically and experimentally. The sensor selectivity is enhanced using an external cavity diode laser (ECDL) configuration, where the tuning range depends only on the external cavity configuration. In order to considerably reduce the sensor cost, the relative intensity noise (RIN) is chosen for monitoring the intensity ratio of the two modes. RIN is found to be an excellent indicator of variations in the two-mode intensity ratio, which strongly supports the sensor methodology. On the other hand, it has been found that wavelength tuning has no effect on the RIN spectrum, which is very beneficial for the proposed detection principle. In order to use the sensor for medical applications, the absorption line of an anesthetic sample, propofol, is measured. Propofol has been dissolved in various solvents. RIN has been chosen to monitor the sensor response. From the measured spectra, the sensor sensitivity enhancement factor is found to be of the order of 10^3 compared to conventional laser spectroscopy.
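The quoted enhancement factor of about 10^3 can be read as an effective lengthening of the absorption path seen by the intracavity sample. The snippet below is only a back-of-the-envelope illustration under a simple Beer-Lambert assumption; the absorption coefficient, path length, and enhancement factor are hypothetical numbers chosen to show what such a factor does to the measured signal, not values from the thesis.

```python
# Hedged back-of-the-envelope comparison: single-pass absorption versus an
# intracavity measurement with an effective sensitivity enhancement factor.
# All numbers are hypothetical; only the Beer-Lambert form I = I0*exp(-alpha*L)
# is assumed.
import math

alpha = 1e-6           # absorption coefficient of the sample, 1/cm (hypothetical)
cell_length_cm = 10.0  # single-pass sample length (hypothetical)
enhancement = 1e3      # order-of-magnitude enhancement factor quoted in the abstract

single_pass_dip = 1.0 - math.exp(-alpha * cell_length_cm)
intracavity_dip = 1.0 - math.exp(-alpha * cell_length_cm * enhancement)

print(f"relative intensity drop, single pass : {single_pass_dip:.2e}")
print(f"relative intensity drop, intracavity : {intracavity_dip:.2e}")
# The roughly 1000-fold larger dip is what makes a weak absorption line
# detectable when the two-mode intensity ratio is monitored via RIN.
```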
Abstract:
The automotive industry is responding with modularisation strategies to increasing product complexity, driven by growing customer demands for individualisation and by a model policy with frequent new vehicle launches. The manufacturers shift the complexity of material provisioning by outsourcing it to the next supplier tier, the first-tier suppliers, which since the beginning of the 1990s have increasingly been integrated into supplier parks in the immediate vicinity of the plants. Typical characteristics of a classical supplier park are: provision of a hall infrastructure with infrastructure services, delivery of the parts in the JIS process (just-in-sequence, i.e. synchronisation in exact production order), local value creation by the supplier (pre-assembly, sequencing), contractual commitment of the first-tier suppliers for the duration of a product life cycle, and the involvement of a logistics service provider. In some cases, public-sector funding projects are initiated to finance them. Until now, a scientific treatment of the topic of supplier parks has been lacking. This thesis examines the supplier parks that have emerged in Europe in order to document the advantages and disadvantages of this logistics concept and to identify development trends. Derived from these findings, optimisation approaches are presented and concrete development paths are described for improving the opportunity-risk position of the main actors: automotive manufacturers, suppliers, and logistics service providers. The thesis is structured in four main parts: a differentiated description of the initial situation and development trends in the automotive industry, the procedure model, the documentation of the analysis results, and the evaluation of supplier park models. As part of the documentation of the analysis results, four supplier park models are presented in detailed case studies. To obtain the analysis results, a survey of the main actors was conducted using structured questionnaires. In addition, experts were interviewed to capture industry trends and to evaluate the park models relative to one another. The method of network analysis was used to segment the supplier park landscape, and the relative evaluation of the benefit position is based on utility value analysis (Nutzwertanalyse). The results of the thesis are: a comprehensive analysis of the supplier park landscape in Europe, a segmentation of the parks into supplier park models, optimisation approaches for improving the win-win situation of the main actors involved, a relative benefit evaluation of the supplier park models, and development paths for classical supplier parks.
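The relative benefit evaluation mentioned above rests on utility value analysis (Nutzwertanalyse), i.e. a weighted scoring of the park models against a set of criteria. The sketch below shows only the generic calculation; the criteria, weights, and scores are invented placeholders, not figures from the thesis.

```python
# Hedged sketch of a utility value analysis (Nutzwertanalyse): weighted scoring
# of alternative supplier park models.  Criteria, weights, and scores are
# invented placeholders for illustration.

criteria_weights = {            # weights sum to 1.0
    "logistics cost": 0.4,
    "supply reliability": 0.3,
    "flexibility": 0.2,
    "investment risk": 0.1,
}

# score of each park model per criterion on a 1..10 scale (hypothetical)
park_model_scores = {
    "classical supplier park": {"logistics cost": 7, "supply reliability": 8,
                                "flexibility": 5, "investment risk": 4},
    "park run by a logistics provider": {"logistics cost": 6, "supply reliability": 8,
                                         "flexibility": 7, "investment risk": 6},
}

def utility_value(scores, weights):
    """Weighted sum of the criterion scores."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

for model, scores in park_model_scores.items():
    print(f"{model}: utility value = {utility_value(scores, criteria_weights):.2f}")
```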
Abstract:
This paper introduces a framework that supports users in implementing enterprise modelling within collaborating companies. These enterprise models are the basis for a holistic interoperability measurement and management methodology, which will be presented in the second part of the paper. The discipline of enterprise modelling aims at capturing all dimensions of an enterprise in a simplified model. Enterprise models are thus an appropriate basis for managing collaborative enterprises, as they reduce the complexity of interoperability problems. Therefore, a first objective of this paper is to present an approach that enables companies to get the most out of enterprise modelling in a collaborative environment, based on the maturity of their organisation with respect to modelling. Within this first step, the user receives recommendations, e.g. for the appropriate modelling language as well as the right level of detail.
Abstract:
The present work deals with the influence of visually perceived motion features on an observer's action control. Specifically, it examines how motion direction and motion speed, as task-irrelevant stimuli, influence the execution of motor responses to colour stimuli, producing faster or delayed reaction times. Previous studies on this topic were limited to linear motion (from right to left and vice versa) and very simple stimulus environments (movements of simple geometric symbols, dot clouds, point-light walkers, etc.) (e.g. Ehrenstein, 1994; Bosbach, 2004; Wittfoth, Buck, Fahle & Herrmann, 2006). In the present dissertation, the validity of these findings was extended to rotational and in-depth motion as well as to complex forms of motion (human movement sequences in sport), worked up theoretically, and tested empirically in a series of six reaction time experiments using the Simon paradigm. Common to all experiments was that participants had to respond at a computer monitor to a colour change within the dynamic visual stimulus by pressing a key (positioned left, right, proximal, or distal), while the speed and direction of the motion were irrelevant to the responses. Regarding the influence of rotational motion of geometric symbols (Exp. 1 and 1a) and of human rotational movements (Exp. 2), the results show that participants respond significantly faster when the directional information of a rotational motion is compatible with the spatial features of the required key response. The degree of complexity of the visual event plays no role here. For the cognitive processing of the motion stimulus, the decisive spatial criterion is not the sense of rotation but the relative direction of motion above and below the axis of rotation. Regarding the influence of motion in depth of a sphere (Exp. 3) and of a walking person (Exp. 4), our findings show that participants respond significantly faster when the stimulus moves towards the observer and a proximal rather than a distal key press is required, and vice versa. Here, too, the degree of complexity of the visual event plays no role. In both experiments, the perception of the motion direction induces an action tendency that leads to fast action execution in the compatible case and delayed execution in the incompatible case. Experiments 5 and 6 examined the influences of perceived human running movements (free running vs. treadmill running) occurring with and without a change of position. It was found that, independently of the change of position, running speed does not modulate the direction-based Simon effect. In summary, the results of the studies fit well into effect-based concepts of action control (e.g. the theory of event coding by Hommel et al., 2001). Further studies are needed to transfer these results to gross motor responses and to displays that are more closely modelled on visually perceivable events in sport.
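The compatibility effects reported above are Simon effects: the difference in mean reaction time between incompatible and compatible trials. The snippet below shows that computation on made-up reaction times; the numbers and condition labels are hypothetical and only illustrate how such an effect is quantified.

```python
# Hedged sketch: quantifying a Simon effect from (hypothetical) reaction times.
# A positive value means slower responses when the task-irrelevant motion
# direction and the required key are spatially incompatible.
from statistics import mean

# reaction times in milliseconds (hypothetical single-participant data)
rt_compatible   = [412, 398, 430, 405, 421, 417]   # motion direction matches the key side
rt_incompatible = [446, 452, 431, 460, 448, 455]   # motion direction opposes the key side

simon_effect_ms = mean(rt_incompatible) - mean(rt_compatible)
print(f"Simon effect: {simon_effect_ms:.1f} ms")
```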
Abstract:
This thesis attempts to quantify the amount of information needed to learn certain tasks. The tasks chosen vary from learning functions in a Sobolev space using radial basis function networks to learning grammars in the principles and parameters framework of modern linguistic theory. These problems are analyzed from the perspective of computational learning theory and certain unifying perspectives emerge.
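As a concrete instance of the first kind of task, the sketch below fits a small radial basis function network to noisy samples of a smooth function. The target function, number of centres, kernel width, and regularisation are arbitrary choices made for illustration, not the settings analysed in the thesis.

```python
# Hedged illustration: fitting a radial basis function (RBF) network to noisy
# samples of a smooth one-dimensional function.  All settings are arbitrary
# choices for this example.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_design(x, centres, width):
    """Design matrix of Gaussian basis functions evaluated at the sample points."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

# noisy training data from an (arbitrary) smooth target
x_train = np.sort(rng.uniform(-3.0, 3.0, size=60))
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(x_train.size)

centres = np.linspace(-3.0, 3.0, 12)     # fixed, evenly spaced RBF centres
width = 0.5
Phi = gaussian_design(x_train, centres, width)

# ridge-regularised least squares for the output weights
ridge = 1e-3
weights = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(centres.size), Phi.T @ y_train)

# evaluate the learned network on a small test grid
x_test = np.linspace(-3.0, 3.0, 7)
y_pred = gaussian_design(x_test, centres, width) @ weights
print(np.round(np.abs(y_pred - np.sin(x_test)), 3))   # approximation errors
```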
Abstract:
Object recognition in the visual cortex is based on a hierarchical architecture, in which specialized brain regions along the ventral pathway extract object features of increasing levels of complexity, accompanied by greater invariance in stimulus size, position, and orientation. Recent theoretical studies postulate that a non-linear pooling function, such as the maximum (MAX) operation, could be fundamental in achieving such invariance. In this paper, we are concerned with neurally plausible mechanisms that may be involved in realizing the MAX operation. Four canonical circuits are proposed, each based on neural mechanisms that have been previously discussed in the context of cortical processing. Through simulations and mathematical analysis, we examine the relative performance and robustness of these mechanisms. We derive experimentally verifiable predictions for each circuit and discuss their respective physiological considerations.
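One way such a circuit can approximate the MAX operation is through softmax-style pooling, in which a gain parameter controls how strongly the largest afferent dominates; as the gain grows, the pooled response approaches the response of the strongest input. The sketch below illustrates only this mathematical behaviour and is not a reproduction of the four circuits analysed in the paper.

```python
# Hedged illustration: softmax pooling as an approximation to the MAX operation.
# With increasing gain q, the pooled response converges to the largest input --
# the kind of behaviour the proposed cortical circuits are meant to achieve.
import numpy as np

def softmax_pool(responses, gain):
    """Weighted mean of the afferent responses, weights = softmax(gain * r)."""
    r = np.asarray(responses, dtype=float)
    weights = np.exp(gain * r - np.max(gain * r))   # subtract max for numerical stability
    weights /= weights.sum()
    return float(np.dot(weights, r))

afferents = [0.2, 0.5, 0.9, 0.4]
for q in (1, 5, 20, 100):
    print(f"gain {q:>3}: pooled response = {softmax_pool(afferents, q):.3f}")
# the output approaches max(afferents) = 0.9 as the gain increases
```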
Abstract:
The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
Abstract:
Due to a dramatic reduction in defense procurement, the benchmark for developing new defense systems today is performance at an affordable cost. In an attempt to capture a more holistic perspective of value, lifecycle value has evolved as a concept within the Lean Aerospace Initiative (LAI). The implication of this is the development of products with a lifecycle and long-term focus instead of a short-sighted cost-cutting focus. The interest in reducing total cost of ownership while still improving performance, availability, and sustainability (other dimensions taken into account within the lifecycle value approach) falls well within this context. Several factors prevent enterprises from having a holistic perspective during product development. Some important aspects are the increased complexity of the products and significant technological uncertainty. The combination of complexity in system design and the limits of individual human comprehension typically prevents a best-value solution from being envisioned. The purpose of this research was to examine relative contributions in product development and to determine factors that significantly promote the ability to consider and achieve lifecycle value. This paper contributes a maturity matrix based on important practices and lessons learned through extensive interview-based case studies of three tactical aircraft programs, including experiences from more than 100 interviews.
Abstract:
The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
We consider the optimization problem of safety stock placement in a supply chain, as formulated in [1]. We prove that this problem is NP-Hard for supply chains modeled as general acyclic networks. Thus, we do not expect to find a polynomial-time algorithm for safety stock placement for a general-network supply chain.
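For context, a commonly used formulation of this problem, which we assume matches [1] in spirit rather than verbatim, is the guaranteed-service model: each stage j quotes an outbound service time S_j, sees an inbound service time SI_j, and holds safety stock covering its net replenishment time, giving a nonconcave square-root objective over the acyclic network. The formulation below is a hedged sketch of that standard model.

```latex
% Sketch of the guaranteed-service safety stock placement model (assumed to
% match [1] in spirit).  S_j: outbound service time of stage j, SI_j: inbound
% service time, T_j: processing time, h_j: holding cost, \sigma_j: demand
% standard deviation, z: safety factor, A: arcs of the acyclic supply network,
% s_j: maximum service time quoted to external customers.
\begin{aligned}
\min_{S,\,SI}\quad & \sum_{j} h_j \, z \, \sigma_j \sqrt{SI_j + T_j - S_j} \\
\text{s.t.}\quad   & SI_j + T_j - S_j \ge 0 \quad \forall j, \\
                   & SI_j \ge S_i \quad \forall (i,j) \in A, \\
                   & S_j \le s_j \quad \text{for demand stages } j, \\
                   & S_j,\; SI_j \in \mathbb{Z}_{\ge 0}.
\end{aligned}
```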