809 results for parallel translation
Abstract:
Control of protein synthesis is a key step in the regulation of gene expression during apoptosis and the heat shock response. Under such conditions, cap-dependent translation is impaired and Internal Ribosome Entry Site (IRES)-dependent translation plays a major role in mammalian cells. Although the role of IRES-dependent translation during apoptosis has been studied mainly in mammals, its role in the translation of Drosophila apoptotic genes has not yet been studied. The observation that Drosophila embryos mutant for the cap-binding protein, the eukaryotic initiation factor eIF4E, exhibit increased apoptosis in correlation with up-regulated transcription of the proapoptotic gene reaper (rpr) constitutes the first evidence for a cap-independent mechanism for the translation of Drosophila proapoptotic genes. The mechanism of translation of rpr and other proapoptotic genes was investigated in this work. We found that the 5′ UTR of rpr mRNA drives translation in an IRES-dependent manner. It promotes the translation of reporter RNAs in vitro, either in the absence of cap, in the presence of cap competitors, or in extracts derived from heat-shocked and eIF4E mutant embryos, and in vivo in cells transfected with reporters bearing a non-functional cap structure, indicating that cap recognition is not required for translation of rpr mRNA. We also show that the rpr mRNA 5′ UTR exhibits a high degree of similarity with that of Drosophila heat shock protein 70 (hsp70) mRNA, an antagonist of apoptosis, and that both are able to conduct IRES-mediated translation. The proapoptotic genes head involution defective (hid) and grim, but not sickle, also display IRES activity. Studies of mRNA association with polysomes indicate that the endogenous rpr, hsp70, hid and grim mRNAs are recruited to polysomes in embryos in which apoptosis or thermal stress was induced.
We conclude that hsp70 on the one hand and rpr, hid and grim on the other, although they are antagonizing factors during apoptosis, use a similar mechanism for protein synthesis. The outcome for the cell would thus depend on which protein is translated under a given stress condition. Factors involved in the differential translation driven by these IRESs could play an important role. For this purpose, we undertook the identification of the ribonucleoprotein (RNP) complexes assembled onto the 5′ UTR of rpr mRNA. We established a tobramycin affinity selection protocol that allows the purification of specific RNPs, which can then be analyzed by mass spectrometry. Several RNA-binding proteins were identified as part of the rpr 5′ UTR RNP complex, some of which have been related to IRES activity. The involvement of one of them, the La antigen, in the translation of rpr mRNA was established by RNA-crosslinking experiments using recombinant protein and the rpr 5′ UTR, and by analyzing the translation efficiency of reporter mRNAs in Drosophila cells after knockdown of endogenous La by RNAi. Several uncharacterized proteins were also identified, suggesting that they might play a role during translation, during the assembly of the translational machinery, or in the priming of the mRNA before ribosome recognition. Our data provide evidence for the involvement of the La antigen in the translation of rpr mRNA and establish a protocol for the purification of tagged RNA-protein complexes from cytoplasmic extracts. To further understand the mechanisms of translation initiation in Drosophila, we analyzed the role of eIF4B in cap-dependent and cap-independent translation. We showed that eIF4B is mostly involved in cap-dependent, but not IRES-dependent, translation, as is the case in mammals.
Abstract:
In this publication, we report on an online survey that was carried out among parallel programmers. More than 250 people worldwide have submitted answers to our questions, and their responses are analyzed here. Although not statistically sound, the data we provide give useful insights about which parallel programming systems and languages are known and in actual use. For instance, the collected data indicate that for our survey group MPI and (to a lesser extent) C are the most widely used parallel programming system and language, respectively.
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming is now becoming a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: to survey the state of the art of parallel programming today, to improve the education of software developers on the topic, and to provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist for avoiding them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision.
Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
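AthenaMP itself is an OpenMP/C++ library whose sources are not reproduced here; purely as a language-neutral illustration of the task-pool pattern it provides for irregular algorithms, the following is a minimal sketch in Python. The function names and the toy tree workload are our own inventions, not AthenaMP's API.

```python
import threading
import queue

def process_tree(root, workers=4):
    """Sum all node values in a tree using a task pool.

    Each task handles one node and spawns new tasks for its children --
    the kind of irregular, dynamically unfolding workload that task
    pools are designed for.
    """
    tasks = queue.Queue()
    total = [0]
    lock = threading.Lock()

    def worker():
        while True:
            node = tasks.get()
            if node is None:          # shutdown sentinel
                tasks.task_done()
                break
            value, children = node
            with lock:
                total[0] += value
            for child in children:
                tasks.put(child)      # dynamically spawn subtasks
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    tasks.put(root)
    tasks.join()                      # wait until all spawned tasks finish
    for _ in threads:
        tasks.put(None)               # shut the workers down
    for t in threads:
        t.join()
    return total[0]

# A small tree: (value, [children])
tree = (1, [(2, []), (3, [(4, [])])])
print(process_tree(tree))  # 10
```

The `queue.Queue.join()`/`task_done()` pairing is what makes dynamic spawning safe here: a worker enqueues a node's children before marking the node done, so the join cannot return early.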
Abstract:
This paper contributes to the study of Freely Rewriting Restarting Automata (FRR-automata) and Parallel Communicating Grammar Systems (PCGS), both of which are useful models in computational linguistics. For PCGSs we study two complexity measures, called 'generation complexity' and 'distribution complexity', and we prove that a PCGS Pi for which both the generation complexity and the distribution complexity are bounded by constants can be transformed into a freely rewriting restarting automaton of a very restricted form. From this characterization it follows that the language L(Pi) generated by Pi is semi-linear, that its characteristic analysis is of polynomial size, and that this analysis can be computed in polynomial time.
Abstract:
Reading Latin authors in the original is difficult for many students during the reading phase of Latin instruction. This dissertation investigates the extent to which the targeted use of learning strategies can improve text comprehension. Strategic work with texts can be traced back to a very early stage in literate cultures. In this work, source texts from Greco-Roman antiquity and the Middle Ages are examined with respect to text-comprehension strategies, systematized, annotated, and put to use in modern Latin instruction. In the process, students work in a self-directed, inquiry-based manner with reproductions of ancient papyri and parchments. Over the course of the teaching project, which I call CLAVIS (Latin for "key"), students learn six ancient strategies of text comprehension in connection with the subject matter of Latin instruction. These strategies are still used today exactly as they were 2000 years ago. Drawing on the findings of modern research on learning strategies, those strategies were selected that are judged to be particularly effective for fostering text comprehension, namely CONIUGATIO: activating prior knowledge; LEGERE: repeated reading, aloud where possible; ACCIPERE: accepting help; VERTERE: systematic translating; INTERROGARE: allowing questions; SUMMA: writing a summary. The goal of CLAVIS is to give students a tool for systematic text comprehension that is easy to remember and can be applied flexibly to texts of any kind and in any language. To assess the effectiveness of the CLAVIS teaching project, a pretest-posttest study was conducted with two parallel 10th-grade classes at the Johann-Schöner-Gymnasium in Karlstadt in the 2009/10 school year.
One of the classes was taught as the experimental group with the CLAVIS intervention; the other class, which formed the control group, received no strategy training. A questionnaire provided information about the students' approach to working with the texts in the pretest and posttest (each a translation of a Latin text of identical difficulty). The analysis of the data clearly showed that text comprehension and translation ability improved in those students who applied the CLAVIS strategies in the posttest. In connection with the redesign of curricula against the background of competence orientation, new opportunities arise for Latin as a subject: not only to use the substantively valuable testimonies of antiquity for the general, non-utilitarian personal development of students, but also to teach, in a targeted way, strategies that offer concrete benefits with regard to the language and text competence that is indispensable in an information society.
Abstract:
The central question of this work is why dicistronic mRNAs, an organization atypical for eukaryotes, exist and how translation of the second open reading frame is initiated. In seven of nine initially selected gene cassettes, only dicistronic and no monocistronic transcripts are in fact produced. In the course of evolution this organization does not always appear to be preserved; there are indications of an operon-like architecture. After transformation with a dicistronic reporter construct and in in vitro translation assays, the two gene cassettes CG31311 and CG33009 exhibit an internal ribosome entry site (IRES) that can initiate translation of the second cistron. Both of these IRESs can be narrowed down to a region of less than 100 nt. The functionality of the two identified IRESs was confirmed in vivo in the male germline of Drosophila, after the presence of cryptic promoters in these regions had been ruled out. The other five gene cassettes, by contrast, show no IRES activity and probably use alternative mechanisms such as leaky scanning or ribosomal shunting to translate the second cistron. Further analyses revealed very complex expression patterns that are not obviously reconcilable with the mRNA organization described. In the gene cassette CG33009, for example, the first protein is synthesized in the germ cells throughout spermatogenesis, whereas the second, IRES-dependently translated protein appears in the cyst cells enclosing the germ cells and additionally in the elongated spermatids. This additional expression could be due to transport processes or de novo synthesis. Cyst-cell-specific expression of a fusion construct, however, did not lead to detection of the fusion protein in the germ cells.
This makes IRES-mediated de novo synthesis in the elongated spermatids the more likely explanation. Loss of this IRES-dependently translated protein in the cyst cells brings spermatogenesis to a halt, demonstrating its essential function. The gene cassette CG31311 also shows a remarkable peculiarity in its expression. While large amounts of transcript are present in testis tissue without leading to detectable amounts of protein, a differentiated expression pattern for both proteins can be documented in the ommatidia, even though the transcript level there lies below the detection limit. This observation suggests a drastic control at the translational level, which in testis tissue could consist, for example, in a delay of translation until after fertilization (paternal mRNA). First experiments show the interaction of the CG33009 IRES with RNA-binding proteins, potential ITAFs (IRES trans-acting factors), whose binding is sequence-specific. Further experiments would be needed to test whether the IRESs identified here interact with the same or with different proteins.
Abstract:
This dissertation introduces and investigates systems of parallel communicating restarting automata (abbreviated PCRA systems). Two well-known concepts from the fields of formal languages and automata theory are combined: the model of restarting automata and the so-called PC systems (systems of parallel communicating components). A PCRA system consists of finitely many restarting automata which, on the one hand, perform local computations in parallel and independently of one another and, on the other hand, are allowed to communicate with each other. Communication takes place via a fixed communication protocol realized by means of special communication states. An essential characteristic of the communication structure in systems of cooperating components is whether communication is centralized or non-centralized. While in a non-centralized communication structure every component may communicate with every other component, in a centralized communication structure all communication takes place exclusively with a designated master component. One of the most important results of this work shows that centralized systems and non-centralized systems have the same computational power (which is in general not the case for PC systems). Moreover, using multicast or broadcast communication in addition to point-to-point communication does not increase the computational power either. Furthermore, the expressive power of PCRA systems is investigated and compared with that of PC systems of finite automata and with that of multi-head automata.
PC systems of finite automata are known to have the same expressive power as one-way multi-head automata and form a lower bound for the expressive power of PCRA systems with one-way components. In fact, PCRA systems are stronger than PC systems of finite automata even when the components taken by themselves have the same expressive power, that is, when they characterize the regular languages. For PCRA systems with two-way components, the language classes of two-way multi-head automata in the deterministic and in the nondeterministic case are shown as lower bounds; these in turn correspond to the well-known complexity classes L (deterministic logarithmic space) and NL (nondeterministic logarithmic space). The class of context-sensitive languages is shown as an upper bound. In addition, extensions of restarting automata are considered (the non-forgetting property and the shrinking property) which increase the computational power of individual components but do not increase the power of systems. The language classes characterized by PCRA systems are closed under various language operations, and some of them are even abstract families of languages (so-called AFLs). Finally, problems specific to PCRA systems are examined with respect to their decidability. It is shown that emptiness, universality, inclusion, equivalence, and finiteness are not even semi-decidable for systems with two restarting automata of the weakest type. For the word problem it is shown to be decidable in quadratic time in the deterministic case and in exponential time in the nondeterministic case.
Abstract:
Machine translation has been a particularly difficult problem in the area of Natural Language Processing for over two decades. Early approaches to translation failed in part because interaction effects among complex phenomena made translation appear unmanageable. Later approaches to the problem have succeeded (although only bilingually), but are based on many language-specific rules of a context-free nature. This report presents an alternative approach to natural language translation that relies on principle-based descriptions of grammar rather than rule-oriented descriptions. The model that has been constructed is based on abstract principles as developed by Chomsky (1981) and several other researchers working within the "Government and Binding" (GB) framework. Thus, the grammar is viewed as a modular system of principles rather than a large set of ad hoc language-specific rules.
Abstract:
This thesis defines Pi, a parallel architecture interface that separates model and machine issues, allowing them to be addressed independently. This provides greater flexibility for both the model and machine builder. Pi addresses a set of common parallel model requirements including low latency communication, fast task switching, low cost synchronization, efficient storage management, the ability to exploit locality, and efficient support for sequential code. Since Pi provides generic parallel operations, it can efficiently support many parallel programming models including hybrids of existing models. Pi also forms a basis of comparison for architectural components.
Abstract:
This report addresses the problem of acquiring objects using articulated robotic hands. Standard grasps are used to make the problem tractable, and a technique is developed for generalizing these standard grasps to increase their flexibility to variations in the problem geometry. A generalized grasp description is applied to a new problem situation using a parallel search through hand configuration space, and the result of this operation is a global overview of the space of good solutions. The techniques presented in this report have been implemented, and the results are verified using the Salisbury three-finger robotic hand.
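The report's actual grasp representation and search are not reproduced here. Purely as an illustration of the idea of evaluating many candidate hand configurations in parallel and keeping every good one (the "global overview" of the space of good solutions), here is a hedged sketch in Python; `grasp_quality`, its parameters, and the quality threshold are invented stand-ins, not the report's method.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def grasp_quality(config):
    """Hypothetical quality score for a 2-parameter hand configuration.

    A toy stand-in: quality peaks at spread=0.5, wrap=0.7 and falls off
    linearly with distance from that point.
    """
    spread, wrap = config
    return 1.0 - abs(spread - 0.5) - abs(wrap - 0.7)

def good_configs(threshold=0.8, steps=11):
    """Score a grid of configurations in parallel; keep every good one."""
    grid = [(s / (steps - 1), w / (steps - 1))
            for s, w in product(range(steps), repeat=2)]
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(grasp_quality, grid))
    # The "global overview": all configurations above the quality threshold,
    # not just a single best answer.
    return [c for c, q in zip(grid, scores) if q >= threshold]

print(len(good_configs()))
```

Returning the full set of above-threshold configurations, rather than one optimum, is the design point: a downstream planner can then pick among good grasps using other criteria (reachability, obstacle clearance).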
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. This thesis proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread either statically or dynamically and is used by the thread scheduler to decide which threads to load in the contexts, and to decide which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects, and show how thread prioritization can both maintain high processor utilization and limit increases in critical path runtime caused by multithreading. The model also shows that in order to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how simple hardware can prioritize the running of threads in the multiple contexts, and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. Thread prioritization can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
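The thesis realizes prioritization in hardware; as a software analogue only, the following minimal sketch illustrates priority-driven context selection. The class, its method names, and the example priorities are our own, not the thesis's interface.

```python
import heapq

class ContextScheduler:
    """Toy analogue of priority-driven context switching.

    Lower priority number = more critical thread.
    """

    def __init__(self):
        self.ready = []          # min-heap of (priority, thread_id)

    def load(self, priority, tid):
        """A thread becomes runnable in one of the hardware contexts."""
        heapq.heappush(self.ready, (priority, tid))

    def complete(self, priority, tid):
        """A long-latency operation (e.g. a remote memory reference)
        finished; the stalled thread is runnable again."""
        heapq.heappush(self.ready, (priority, tid))

    def next_thread(self):
        """On a context switch, run the most critical ready thread."""
        return heapq.heappop(self.ready)[1]

sched = ContextScheduler()
sched.load(2, "worker-a")
sched.load(0, "critical")
sched.load(3, "worker-b")
print(sched.next_thread())   # critical
print(sched.next_thread())   # worker-a
```

The same priority order would, per the thesis, also have to extend to memory requests in bandwidth-limited applications; this sketch covers only the context-selection step.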
Abstract:
This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. In this system, the micro-actuator is capable of high-bandwidth force control due to its low mass and direct-drive connection to the output shaft. The compliant transmission of the macro-actuator reduces the impedance (stiffness) at the output shaft and increases the dynamic range of force. Performance improvement over single-actuator systems was expected in force control, impedance control, force distortion and reduction of transient impact forces. A set of quantitative measures is proposed and the actuator system is evaluated against them: Force Control Bandwidth, Position Bandwidth, Dynamic Range, Impact Force, Impedance ("Backdriveability"), Force Distortion and Force Performance Space. Several theoretical performance limits are derived from the saturation limits of the system. A control law is proposed and control system performance is compared to the theoretical limits. A prototype testbed was built using permanent magnet motors, and an experimental comparison was performed between this actuator concept and two single-actuator systems. The following performance was observed: force bandwidth of 56 Hz, torque dynamic range of 800:1, peak torque of 1040 mNm, minimum torque of 1.3 mNm. Peak impact force was reduced by an order of magnitude. Distortion at small amplitudes was reduced substantially. Backdriven impedance was reduced by 2-3 orders of magnitude. This actuator system shows promise for manipulator design as well as for psychophysical tests of human performance.
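As a quick consistency check (our own arithmetic, using only the figures quoted above), the reported 800:1 torque dynamic range follows directly from the peak and minimum torque:

```python
peak_torque_mNm = 1040.0   # reported peak torque
min_torque_mNm = 1.3       # reported minimum controllable torque

dynamic_range = peak_torque_mNm / min_torque_mNm
print(round(dynamic_range))  # 800
```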
Abstract:
The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration impractical. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
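Taking the figures above at face value (a 66-cycle null-thread migration, typical threads at 4 to 5 times that, and "over a factor of 1000" improvement), a small back-of-the-envelope check; all variable names are our own:

```python
null_thread_cycles = 66            # reported null-thread migration cost

# A typical thread is reported to cost 4-5x a null thread.
typical_lo, typical_hi = (f * null_thread_cycles for f in (4, 5))
print(typical_lo, typical_hi)      # 264 330

# "Over a factor of 1000 improvement" implies earlier implementations
# needed on the order of 66,000+ cycles per migration.
previous_cycles = 1000 * null_thread_cycles
print(previous_cycles)             # 66000
```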
Abstract:
The HMAX model has recently been proposed by Riesenhuber & Poggio as a hierarchical model of position- and size-invariant object recognition in visual cortex. It has also turned out to successfully model a number of other properties of the ventral visual stream (the visual pathway thought to be crucial for object recognition in cortex), and particularly of (view-tuned) neurons in macaque inferotemporal cortex, the brain area at the top of the ventral stream. The original modeling study only used "paperclip" stimuli, as in the corresponding physiology experiment, and did not explore systematically how model units' invariance properties depended on model parameters. In this study, we aimed at a deeper understanding of the inner workings of HMAX and its performance for various parameter settings and "natural" stimulus classes. We examined HMAX responses for different stimulus sizes and positions systematically and found a dependence of model units' responses on stimulus position, for which a quantitative description is offered. Interestingly, we find that scale invariance properties of hierarchical neural models are not independent of stimulus class, as opposed to translation invariance, even though both are affine transformations within the image plane.
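HMAX attains position tolerance by max-pooling over afferent units, which is also where the position dependence studied here arises. A deliberately stripped-down 1-D illustration in Python; the toy responses and the pooling window are our own, not the model's actual filters or parameters:

```python
def max_pool(responses, window):
    """C-level unit: take the max over a spatial window of S-level responses."""
    return [max(responses[i:i + window])
            for i in range(0, len(responses) - window + 1, window)]

# Toy 1-D "S1" responses to the same feature at two nearby positions.
stim_at_2 = [0.0, 0.1, 0.9, 0.2, 0.0, 0.0, 0.0, 0.0]
stim_at_3 = [0.0, 0.0, 0.2, 0.9, 0.0, 0.0, 0.0, 0.0]

# After pooling, both stimuli yield the same "C1" output: the small
# displacement is absorbed as long as the peak stays inside one window.
print(max_pool(stim_at_2, 4))  # [0.9, 0.0]
print(max_pool(stim_at_3, 4))  # [0.9, 0.0]
```

A shift that crosses a window boundary would change the pooled output, which is the kind of residual position dependence a quantitative description has to capture.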
Abstract:
Human object recognition is generally considered to tolerate changes of the stimulus position in the visual field. A number of recent studies, however, have cast doubt on the completeness of translation invariance. In a new series of experiments we investigated whether positional specificity of short-term memory is a general property of visual perception. We tested same/different discrimination of computer graphics models that were displayed at the same or at different locations of the visual field, and found complete translation invariance, regardless of the similarity of the animals and irrespective of direction and size of the displacement (Exp. 1 and 2). Decisions were strongly biased towards "same" responses if stimuli appeared at a constant location, while after translation subjects displayed a tendency towards "different" responses. Even if the spatial order of animal limbs was randomized ("scrambled animals"), no deteriorating effect of shifts in the field of view could be detected (Exp. 3). However, if the influence of single features was reduced (Exp. 4 and 5), small but significant effects of translation could be obtained. Under conditions that do not reveal an influence of translation, rotation in depth strongly interferes with recognition (Exp. 6). Changes of stimulus size did not reduce performance (Exp. 7). Tolerance to these object transformations seems to rely on different brain mechanisms, with translation and scale invariance being achieved in principle, while rotation invariance is not.