828 results for compression parallel
Abstract:
Singularities of robot manipulators have been intensively studied in recent decades by researchers in many fields. Serial singularities produce a local loss of dexterity of the manipulator, so it may be desirable to search for singularity-free trajectories in the joint space. Parallel singularities, on the other hand, are very dangerous for parallel manipulators, as they may cause a local loss of platform control and jeopardize the structural integrity of links or actuators. It is therefore of the utmost importance to avoid parallel singularities while operating a parallel machine. Furthermore, there might be some configurations of a parallel manipulator that are allowed by the constraints but are nevertheless unreachable by any feasible path. The present work proposes a numerical procedure based upon Morse theory, an important branch of differential topology. This procedure counts and identifies both the singularity-free regions that the singularity locus cuts out of the configuration space and the disjoint regions composing the configuration space of a parallel manipulator. Moreover, given any two configurations of a manipulator, a feasible or singularity-free path connecting them can always be found, or it can be proved that none exists. Examples of applications to 3R and 6R serial manipulators, to 3UPS and 3UPU parallel wrists, to 3UPU parallel translational manipulators, and to 3RRR planar manipulators are reported in the work.
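To fix ideas on what such a procedure computes, here is a minimal brute-force sketch (not the Morse-theoretic method of the thesis): sample the Jacobian determinant of a simple manipulator over a gridded configuration space and count the connected singularity-free regions. The planar 2R arm and the grid tolerance are illustrative assumptions.

```python
# Hypothetical sketch: count connected singularity-free regions of a planar
# 2R arm by grid sampling. This brute-force labelling only illustrates the
# quantity that the thesis's Morse-theoretic procedure computes exactly.
import numpy as np
from scipy import ndimage

def det_jacobian(q1, q2, l1=1.0, l2=0.8):
    # For a planar 2R arm, det(J) = l1*l2*sin(q2); singular where sin(q2) = 0.
    return l1 * l2 * np.sin(q2)

q1, q2 = np.meshgrid(np.linspace(-np.pi, np.pi, 400),
                     np.linspace(-np.pi, np.pi, 400), indexing="ij")
free = np.abs(det_jacobian(q1, q2)) > 1e-3      # singularity-free samples
labels, n_regions = ndimage.label(free)          # connected components
print(f"{n_regions} singularity-free regions found on the grid")
```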
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
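A short sketch of why graph coloring matters here: mesh vertices that share an edge must not be relocated concurrently, so each color class can be smoothed in parallel without write conflicts. The greedy coloring below is only one of many strategies (the paper compares three published procedures); the toy adjacency data is an assumption for illustration.

```python
# Hypothetical sketch: greedy coloring of a mesh adjacency graph so that
# vertices within one color class can be untangled/smoothed in parallel.
from collections import defaultdict

def greedy_coloring(adjacency):
    """adjacency: dict vertex -> set of neighbouring vertices."""
    color = {}
    for v in adjacency:                      # deterministic vertex order
        taken = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in taken:                    # smallest color not yet taken
            c += 1
        color[v] = c
    return color

# Toy mesh patch: vertex 0 adjacent to 1 and 2, and so on.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
groups = defaultdict(list)
for v, c in greedy_coloring(adj).items():
    groups[c].append(v)
# Each group can be processed concurrently without data races.
print(dict(groups))
```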
Abstract:
The subject of this work is the investigation of the smectic phases of polysiloxanes with liquid-crystalline side groups (LC polysiloxanes). The first part of the work dealt with the preparation of various liquid-crystalline ferroelectric polysiloxanes. The polymers were varied with respect to the polymer backbone used (homo- and copolysiloxane) as well as by the additional incorporation of crosslinkable side groups. In the second part of the work, the properties of the smectic phases of the prepared substances were examined in more detail. A first subject of investigation was the strain behavior of free-standing liquid-crystalline elastomer films (LCE). When using a polymer in which only part of the polysiloxane backbone is substituted with side groups, the uniaxial stretching of the film parallel to the smectic layers is compensated by a uniform contraction in the film plane and parallel to the layer normal, which can be attributed to an exceptionally low smectic layer compression modulus. In contrast, for the homopolymer systems this modulus is so large that practically no contraction takes place perpendicular to the smectic layers. A second subject of investigation, concerning network formation, was the determination of the dynamic-mechanical properties of the LC polysiloxanes by means of an oscillatory rheometer. Here, the storage and loss moduli were measured as a function of the polymer backbone and of the crosslinking. In the smectic phases (above Tg), the uncrosslinked systems still showed essentially solid-like behavior (physical crosslinking), with a dominant storage modulus for the LC homopolysiloxane. For the LC copolysiloxane, both moduli are of the same order of magnitude. At the phase transition into the isotropic phase, both moduli decreased in absolute terms, but the loss modulus increased in relative terms. In the isotropic phase, the LC polymers thus behave predominantly like viscous melts. In addition, the phase transition temperatures determined by DSC correlated with changes in the dynamic-mechanical properties. After crosslinking, the storage modulus dominated for both the LC homo- and the LC copolysiloxane up into the isotropic phase, and owing to the formation of a fixed network structure the moduli no longer showed any dependence on the phase transitions. As a third subject of investigation, the phase transition between the two smectic phases (SmC* to SmA*) of the liquid-crystalline polysiloxanes was treated in more detail. The most important result is that the diluted LC polysiloxanes show almost no change in layer thickness at this transition. For this purpose, the layer thickness determined by X-ray diffraction was compared in each case with the layer thickness calculated theoretically from the optical tilt angles. It could thereby be shown that the phase transitions behave according to the de Vries model. Thus, de Vries behavior was demonstrated for the first time in polymer systems. In contrast, the homopolysiloxane with the three-ring mesogen showed a pronounced jump in layer thickness at the SmC*-to-SmA* transition. As also confirmed by DSC measurements, this was a first-order phase transition, whereas the LC copolysiloxanes exhibit a second-order phase transition. Finally, the layer thickness was investigated under the influence of crosslinking.
For the LC copolysiloxane with the three-ring mesogen and a fraction of crosslinkable groups of 15%, a stabilization of the smectic phases was achieved. On the one hand, the change in layer thickness at the SmC*-SmA* phase transition was smaller compared to the uncrosslinked system; on the other hand, a smectic layer structure could still be detected by X-ray diffraction 50 °C above the original clearing temperature. Overall, the various investigation methods succeeded in revealing a systematic difference between smectic homo- and copolysiloxanes, which in all probability originates from the microphase separation of mesogens and polysiloxane chains.
Abstract:
The demand for hyperpolarized 3He in medicine and fundamental physics research has risen steadily over the last 10-15 years, both in terms of the available quantity and of the required degree of nuclear spin polarization. At the same time, solutions had to be found for polarization-preserving storage and transport, adapted to each application. As a result, this work presents a self-contained overall concept that can provide both the quantities required for clinical applications and the highest polarization for fundamental physics research. Several independent polarimetry methods gave mutually consistent results and, besides being refined themselves, could be used for a reliable characterization of the new system as well as of the transport cells and boxes. The polarization is produced by metastability-exchange optical pumping at a pressure of 1 mbar. Without gas flow, values of P = 84% are reached. In flow operation, the achievable polarization drops to P ≈ 77%. The 3He can then be compressed to several bar largely without polarization losses and transported to the respective experiments. Through consistent further development of almost all components of the presented polarization unit, a polarization of Pmax = 77% can now be achieved at the outlet of the apparatus at a flow of 0.8 bar·l/h. The polarization scales linearly with the flow, so that at 3 bar·l/h it is still about 60%. The improvements to the lasers, the optics, the compression unit, the intermediate storage, and the gas purification carried out in the course of this work were essential for reaching these polarizations. Besides the use of a new fiber laser system, the high gas purity and the long-lived compression unit are a key to this performance. Since autumn 2001 the system has already produced more than 2000 bar·l of highly polarized 3He, enabling numerous interdisciplinary experiments and investigations. Thanks to improvements of the transport boxes that already existed as prototypes, and to the extensive suppression of wall relaxation in the transport vessels based on new insights into its causes, polarization-preserving transport over long distances no longer poses a problem. In uncoated 1-liter flasks made of aluminosilicate glasses, storage times of T1 > 200 h are now routinely achieved. Within the European research project "Polarized Helium to Image the Lung", 70 bar·l of 3He were transported by plane to Sheffield (UK) in 19 deliveries and 100 bar·l to Copenhagen (DK) in 13 transports. In summary, it could be shown that the problems of producing nuclear spin polarization of 3He, of storing and transporting the polarized gas, and of using it in clinical diagnostics and fundamental physics experiments are largely solved, and that the overall concept has created the prerequisites for general applications in these fields.
Abstract:
The 3-UPU three-degrees-of-freedom fully parallel manipulator, where U and P stand for universal and prismatic pair respectively, is a very well known manipulator that can provide the platform with three degrees of freedom of pure translation, pure rotation, or mixed translation and rotation with respect to the base, according to the relative directions of the revolute pair axes (each universal pair comprises two revolute pairs with intersecting and perpendicular axes). In particular, purely translational parallel 3-UPU manipulators (3-UPU TPMs) have received great attention. Many studies have been reported in the literature on the singularities, workspace, and joint clearance influence on the platform accuracy of this manipulator. However, much work still has to be done to reveal all the features this topology can offer to the designer when different architectures, i.e. different geometries, are considered. Therefore, this dissertation focuses on this type of 3-UPU manipulator. The first part of the dissertation presents six new architectures of 3-UPU TPMs which offer interesting features to the designer. In the second part, a procedure based on a set of indexes is presented, which allows the designer to select the best architecture of the 3-UPU TPMs for a given task. Four indexes, namely the stiffness, clearance, singularity, and size of the manipulator, are proposed in order to apply the procedure.
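The abstract does not spell out how the four indexes are aggregated; the sketch below is therefore only a hypothetical weighted-sum illustration of index-based architecture selection, with made-up candidate names, index values, and weights.

```python
# Hypothetical illustration of index-based selection: score each candidate
# 3-UPU TPM architecture by normalized stiffness, clearance, singularity,
# and size indexes. Weights, values, and the aggregation rule are
# assumptions for illustration, not the thesis's actual procedure.
candidates = {
    "arch_A": {"stiffness": 0.9, "clearance": 0.6, "singularity": 0.8, "size": 0.5},
    "arch_B": {"stiffness": 0.7, "clearance": 0.9, "singularity": 0.6, "size": 0.8},
}
weights = {"stiffness": 0.4, "clearance": 0.2, "singularity": 0.3, "size": 0.1}

def score(indexes):
    # All indexes assumed normalized to [0, 1], higher = better for the task.
    return sum(weights[k] * v for k, v in indexes.items())

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, {n: round(score(ix), 3) for n, ix in candidates.items()})
```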
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored, and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable, and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the amount of acquired data, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents produced by single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
Abstract:
The term "Brain Imaging" identifies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, the analysis of brain activity is gaining popularity in other recent fields, e.g. Brain-Computer Interfaces (BCI) and the study of cognitive processes. In this context, the usage of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost, and limited portability. For these reasons, alternative low-cost techniques are the object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG, potentials are generated directly by neuronal activity, while in EIT they arise from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from the measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which penalizes applications with tight temporal constraints (such as BCI). This work focuses on the parallel implementation of a workflow for EEG and EIT data processing. The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
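To make the computational bottleneck concrete, here is a minimal sketch of the kind of regularized inverse step such a workflow must run: estimating sources from scalp measurements via Tikhonov regularization. The lead-field matrix, sizes, and lambda are illustrative assumptions; the thesis's contribution is running this class of dense linear algebra on GPUs, which the CPU sketch below does not show.

```python
# Minimal sketch of a Tikhonov-regularized inverse step, assuming a linear
# lead-field model A mapping sources x to scalp potentials b.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_sources = 64, 5000
A = rng.standard_normal((n_electrodes, n_sources))   # illustrative lead field
b = rng.standard_normal(n_electrodes)                # scalp measurements
lam = 0.1                                            # regularization weight

# Minimize ||A x - b||^2 + lam ||x||^2, solved in the small 64x64 space:
# x = A^T (A A^T + lam I)^{-1} b
x = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_electrodes), b)
print(x.shape)   # one amplitude per source
```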
Abstract:
Parallel mechanisms show desirable characteristics such as a large payload-to-robot-weight ratio, considerable stiffness, low inertia, and high dynamic performance. In particular, parallel manipulators with fewer than six degrees of freedom have recently attracted researchers' attention, as their employment may prove valuable in those applications in which higher mobility is not called for. The attention of this dissertation is focused on translational parallel manipulators (TPMs), that is, on parallel manipulators whose output link (platform) is provided with a pure translational motion with respect to the frame. The first part deals with the general problem of the topological synthesis and classification of TPMs, that is, it identifies the architectures that TPM legs must possess for the platform to be able to freely translate in space without altering its orientation. The second part studies both the constraint and the direct singularities of TPMs. In particular, special families of fully-isotropic mechanisms are identified. Such manipulators exhibit outstanding properties, as they are free from singularities and show a constant orthogonal Jacobian matrix throughout their workspace. As a consequence, both the direct and the inverse position problems are linear and the kinematic analysis proves straightforward.
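A small sketch of the property claimed in the last sentence: with a constant orthogonal Jacobian J, the platform coordinates and the actuated joint variables are related linearly, so both position problems reduce to a matrix product. The particular J below is an arbitrary rotation matrix chosen for illustration; any constant orthogonal J behaves the same way.

```python
# Sketch: linear direct/inverse kinematics of a fully-isotropic TPM with a
# constant orthogonal Jacobian J (illustrative matrix, not a specific design).
import numpy as np

c, s = np.cos(0.3), np.sin(0.3)
J = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])          # constant orthogonal Jacobian

q = np.array([0.2, -0.5, 1.0])           # actuated joint displacements
p = J @ q                                # direct position problem (linear)
q_back = J.T @ p                         # inverse problem: J^{-1} = J^T
assert np.allclose(q, q_back)
# |det(J)| = 1 everywhere, so no direct singularity occurs in the workspace.
```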
Abstract:
The evaluation of the structural performance of existing concrete buildings, built according to standards and with materials quite different from those available today, requires procedures and methods able to compensate for the lack of data about mechanical material properties and reinforcement detailing. To this end, detailed inspections and tests on materials are required, and consequently tests on drilled cores; on the other hand, it is accepted that non-destructive testing (NDT) cannot be used as the only means to obtain structural information, but it can be used in conjunction with destructive testing (DT) through a representative correlation between DT and NDT. The aim of this study is to verify the accuracy of some correlation formulas available in the literature between the measured parameters, i.e. rebound index, ultrasonic pulse velocity, and compressive strength (SonReb method). To this end, a large number of DT and NDT tests have been performed on many school buildings located in Cesena (Italy). The above relationships have been assessed on site by correlating the NDT results with the strength of cores drilled in adjacent locations. Concrete compressive strength assessed by means of NDT methods and evaluated with correlation formulas has the advantage of being much simpler to implement and use in future applications than other methods, even if its accuracy is strictly limited to the analysis of concretes having the same characteristics as those used for the calibration. This limitation warranted a search for a different evaluation method for the non-destructive parameters obtained on site. To this aim, a methodology of neural identification of compressive strength is presented. Artificial Neural Networks (ANNs) suitable for the specific analysis were chosen, taking into account the developments presented in the literature in this field. The networks were trained and tested in order to obtain a more reliable strength identification methodology.
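As a concrete illustration of the SonReb calibration idea, the sketch below fits a power-law correlation fc = a·R^b·V^c between rebound index R, ultrasonic pulse velocity V, and core strength fc. This functional form is a common choice in the literature, but the coefficients and the toy data are illustrative assumptions, not this study's calibration or results.

```python
# Hypothetical SonReb-style calibration: fit fc = a * R^b * V^c by
# linearizing with logarithms and solving a least-squares problem.
import numpy as np

# (R [-], V [m/s], fc [MPa]) from made-up core/NDT pairs, for illustration.
data = np.array([(32, 3800, 22.0), (36, 4000, 27.5),
                 (40, 4200, 33.0), (44, 4400, 39.5)])
R, V, fc = data[:, 0], data[:, 1], data[:, 2]

# Linearize: ln fc = ln a + b ln R + c ln V, then solve least squares.
X = np.column_stack([np.ones_like(R), np.log(R), np.log(V)])
coef, *_ = np.linalg.lstsq(X, np.log(fc), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"fc ≈ {a:.3e} * R^{b:.2f} * V^{c:.2f}")
```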
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large, and their analysis may be very complicated: computing metrics on these structures can be very expensive. Among all such analyses, we focus on the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with a huge number of nodes and edges. After an introduction to graph theory and high-performance computing, we explain our design strategies and our implementation. Then, we show a performance evaluation carried out on a distributed-memory architecture, namely the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing center, Italy, and we comment on our results.
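For readers unfamiliar with the task, here is a tiny sequential sketch of one common community-detection approach, label propagation, just to show what this kind of algorithm computes. The thesis presents a parallel design for huge graphs; the use of networkx and of the small test graph below are illustrative assumptions, not the thesis's algorithm.

```python
# Sequential community detection via label propagation on a small test
# graph, to illustrate the output of community-extraction algorithms.
import networkx as nx

G = nx.karate_club_graph()                    # classic small social network
communities = nx.algorithms.community.label_propagation_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```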
Abstract:
Massive parallel robots (MPRs) driven by discrete actuators are force-regulated robots that undergo continuous motions despite being commanded through a finite number of states only. Designing a real-time control for such systems requires fast and efficient methods for solving their inverse static analysis (ISA), which is a challenging problem and the subject of this thesis. In particular, five artificial intelligence methods are proposed to investigate the on-line computation and the generalization error of the ISA problem for a class of MPRs featuring three-state force actuators and one degree of revolute motion.
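To illustrate the learning-based flavor of such an approach: a network can be trained to map a desired platform output back to discrete actuator states, snapping its continuous predictions to the nearest of the three force states. The toy forward model, network, and snapping rule below are assumptions for illustration only, not one of the five methods evaluated in the thesis.

```python
# Hypothetical sketch of a learned inverse static analysis for three-state
# force actuators: regress actuator states from the output, then quantize.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
STATES = np.array([-1.0, 0.0, 1.0])               # three-state actuators

# Toy forward model: output is a random linear mix of 12 actuator states.
W = rng.standard_normal((3, 12))
y = STATES[rng.integers(0, 3, size=(5000, 12))]    # random commanded states
X = y @ W.T                                        # resulting outputs

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X, y)                                      # learn the inverse map

pred = net.predict(X[:5])
snapped = STATES[np.abs(pred[..., None] - STATES).argmin(axis=-1)]
print(snapped)                                     # discrete state commands
```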
Abstract:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides these architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC: in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.