972 results for Compliant parallel mechanisms
Abstract:
Knowledge of how ligaments and articular surfaces guide passive motion at the human ankle joint complex is fundamental for the design of relevant surgical treatments. The dissertation presents a possible improvement of this knowledge by a new kinematic model of the tibiotalar articulation. In this dissertation, two one-DOF spatial equivalent mechanisms are presented for the simulation of the passive motion of the human ankle joint: the 5-5 fully parallel mechanism and the fully parallel spherical wrist mechanism. These mechanisms are based on the main anatomical structures of the ankle joint, namely the talus/calcaneus and the tibia/fibula bones at their interface, and the TiCaL and CaFiL ligaments. In order to show the accuracy of the models and the efficiency of the proposed procedure, these mechanisms are synthesized from experimental data and the results are compared with those obtained both during experimental sessions and with data published in the literature. Experimental results proved the efficiency of the proposed new mechanisms in simulating ankle passive motion and, at the same time, the ability of the mechanisms to replicate the ankle's main anatomical structures quite well. The new mechanisms represent a powerful tool for both pre-operative planning and new prosthesis design.
Abstract:
Dielectric Elastomers (DE) are incompressible dielectrics which can undergo deviatoric (isochoric) finite deformations in response to applied large electric fields. Thanks to this strong electro-mechanical coupling, DE intrinsically offer great potential for conceiving novel solid-state mechatronic devices, in particular linear actuators, which are more integrated, lightweight, economical, silent, resilient and disposable than equivalent devices based on traditional technologies. Such systems may have a huge impact in applications where traditional technology cannot cope with limits on weight or bulk, or with problems involving interaction with humans or unknown environments. Fields such as medicine, home automation, entertainment, aerospace and transportation may profit. For actuation purposes, DE are typically shaped into thin films coated with compliant electrodes on both sides and stacked one on the other to form a multilayered DE. DE-based Linear Actuators (DELA) are made entirely of polymeric materials, and their overall performance is influenced by several interacting factors: firstly, the electromechanical properties of the film; secondly, the mechanical properties and geometry of the polymeric frame designed to support the film; and finally, the driving circuits and activation strategies. In the last decade, much effort has been focused on the development of analytical and numerical models that can explain and predict the hyperelastic behavior of different types of DE materials. Nevertheless, at present, the use of DELA is limited. The main reasons are: 1) the lack of quantitative and qualitative models of the actuator as a whole system; 2) the lack of a simple and reliable design methodology. In this thesis, a new point of view in the study of DELA is presented which takes into account the interaction between the DE film and the film-supporting frame.
Hyperelastic models of the DE film are reported which are capable of modeling both the DE and the compliant electrodes. The supporting frames are analyzed and designed as compliant mechanisms using pseudo-rigid-body models and subsequent finite element analysis. A new design methodology is reported which optimizes actuator performance by allowing the actuator's inherent stiffness to be chosen specifically. As a particular case, the methodology focuses on the design of constant-force actuators. This class of actuators is an example of how force control can be greatly simplified. Three new DE actuator concepts are proposed which demonstrate the effectiveness of the proposed method.
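The pseudo-rigid-body modelling mentioned in this abstract can be illustrated with a minimal sketch: a flexible beam segment of a compliant frame is replaced by two rigid links joined by a torsional spring at a characteristic pivot. The constants used below (gamma ≈ 0.85, K_Θ ≈ 2.65, typical for a cantilever with an end force) and the function name are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of the pseudo-rigid-body model (PRBM) idea: a compliant
# beam (Young's modulus E, second moment of area I, length l) is modeled
# as rigid links plus a torsional spring of stiffness
#     K = gamma * K_theta * E * I / l
# The PRBM constants below are typical textbook values, assumed here
# purely for illustration.
def prbm_spring_stiffness(E, I, l, gamma=0.85, k_theta=2.65):
    # Equivalent torsional spring stiffness of the characteristic pivot.
    return gamma * k_theta * E * I / l
```

As the formula shows, halving the flexure length doubles the equivalent torsional stiffness, which is one way a frame designer can tune the actuator's inherent stiffness.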
Abstract:
Despite the many issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitation, combined with inefficient synchronization mechanisms, can severely limit the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC, where memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
Nox4 is a member of the NADPH oxidase family, which represents a major source of reactive oxygen species (ROS) in the vascular wall. Nox4-mediated ROS production mainly depends on the expression levels of the enzyme. The aim of my study was to investigate the mechanisms of Nox4 transcription regulation by histone deacetylases (HDAC). Treatment of human umbilical vein endothelial cells (HUVEC) and HUVEC-derived EA.hy926 cells with the pan-HDAC inhibitor scriptaid led to a marked decrease in Nox4 mRNA expression. A similar down-regulation of Nox4 mRNA expression was observed by siRNA-mediated knockdown of HDAC3. HDAC inhibition in endothelial cells was associated with enhanced histone acetylation and increased chromatin accessibility in the human Nox4 promoter region, with no significant changes in DNA methylation. In addition, the present study provided evidence that c-Jun played an important role in controlling Nox4 transcription. Knockdown of c-Jun with siRNA led to a down-regulation of Nox4 mRNA expression. In response to scriptaid treatment, the binding of c-Jun to the Nox4 promoter region was reduced despite the open chromatin structure. In parallel, the binding of RNA polymerase IIa to the Nox4 promoter was significantly inhibited as well, which may explain the reduction in Nox4 transcription. In conclusion, HDAC inhibition decreases Nox4 transcription in human endothelial cells by preventing the binding of transcription factor(s) and polymerase(s) to the Nox4 promoter, most likely because of a hyperacetylation-mediated steric inhibition. In addition, HDAC inhibition-induced Nox4 downregulation may also involve microRNA-mediated mRNA destabilization, because the effect of scriptaid could be partially blocked by DICER1 knockdown or by transcription inhibition.
Abstract:
The 5th generation of mobile networking introduces the concept of "network slicing": the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the main concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the concept of the virtual network; it relies on Software Defined Networking (SDN) as the main technology for realizing the virtual network as a resource, and it also defines the concept of a virtual network infrastructure with all the components needed to meet network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV likewise uses virtual computing resources to enable the deployment of virtual network functions instead of custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of enabling QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels it is possible to offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
Abstract:
The skeletal muscle phenotype is subject to considerable malleability depending on use. Low-intensity endurance-type exercise leads to qualitative changes of muscle tissue characterized mainly by an increase in structures supporting oxygen delivery and consumption. High-load strength-type exercise leads to growth of muscle fibers dominated by an increase in contractile proteins. In low-intensity exercise, stress-induced signaling leads to transcriptional upregulation of a multitude of genes, with Ca2+ signaling and the energy status of the muscle cells sensed through AMPK being major input determinants. Several parallel signaling pathways converge on the transcriptional co-activator PGC-1α, perceived as being the coordinator of much of the transcriptional and posttranscriptional processes. High-load training is dominated by a translational upregulation controlled by mTOR, mainly influenced by an insulin/growth factor-dependent signaling cascade as well as mechanical and nutritional cues. Exercise-induced muscle growth is further supported by DNA recruitment through activation and incorporation of satellite cells. Crucial nodes of strength and endurance exercise signaling networks are shared, making these training modes interdependent. Robustness of exercise-related signaling is the consequence of multiple parallel signaling pathways with feed-back and feed-forward control at single and multiple signaling levels. We currently have a good descriptive understanding of the molecular mechanisms controlling muscle phenotypic plasticity. We lack understanding of the precise interactions among partners of signaling networks and, accordingly, models to predict the signaling outcome of entire networks. A major current challenge is to verify and apply available knowledge gained in model systems to predict human phenotypic plasticity.
Abstract:
Dronedarone is a new antiarrhythmic drug with an amiodarone-like benzofuran structure. Shortly after its introduction, dronedarone became implicated in causing severe liver injury. Amiodarone is a well-known mitochondrial toxicant. The aim of our study was to investigate mechanisms of hepatotoxicity of dronedarone in vitro and to compare them with amiodarone. We used isolated rat liver mitochondria, primary human hepatocytes, and the human hepatoma cell line HepG2, which were exposed acutely or up to 24h. After exposure of primary hepatocytes or HepG2 cells for 24h, dronedarone and amiodarone caused cytotoxicity and apoptosis starting at 20 and 50 µM, respectively. The cellular ATP content started to decrease at 20 µM for both drugs, suggesting mitochondrial toxicity. Inhibition of the respiratory chain required concentrations of ~10 µM and was caused by an impairment of complexes I and II for both drugs. In parallel, mitochondrial accumulation of reactive oxygen species (ROS) was observed. In isolated rat liver mitochondria, acute treatment with dronedarone decreased the mitochondrial membrane potential, inhibited complex I, and uncoupled the respiratory chain. Furthermore, in acutely treated rat liver mitochondria and in HepG2 cells exposed for 24h, dronedarone started to inhibit mitochondrial β-oxidation at 10 µM and amiodarone at 20 µM. Similar to amiodarone, dronedarone is an uncoupler and an inhibitor of the mitochondrial respiratory chain and of β-oxidation both acutely and after exposure for 24h. Inhibition of mitochondrial function leads to accumulation of ROS and fatty acids, eventually leading to apoptosis and/or necrosis of hepatocytes. Mitochondrial toxicity may be an explanation for hepatotoxicity of dronedarone in vivo.
Abstract:
P-GENESIS is an extension to the GENESIS neural simulator that allows users to take advantage of parallel machines to speed up the simulation of their network models or concurrently simulate multiple models. P-GENESIS adds several commands to the GENESIS script language that let a script running on one processor execute remote procedure calls on other processors, and that let a script synchronize its execution with the scripts running on other processors. We present here some brief comments on the mechanisms underlying parallel script execution. We also offer advice on parallelizing parameter searches, partitioning network models, and selecting suitable parallel hardware on which to run P-GENESIS.
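The two primitives this abstract attributes to P-GENESIS — a script on one processor executing remote procedure calls on another, and explicit synchronization between scripts — can be sketched generically. The snippet below is a minimal Python illustration of that pattern, with threads and queues standing in for P-GENESIS processes; the names used are illustrative, not actual P-GENESIS script commands.

```python
# Illustrative sketch of script-level RPC plus synchronization between
# "processors". Python threads and queues stand in for the P-GENESIS
# runtime; env, remote_call and run_demo are hypothetical names.
import queue
import threading

def worker(requests, replies):
    # Each remote "processor" serves incoming calls until it is shut down.
    env = {"step": lambda dt: f"advanced {dt} ms"}  # its local procedures
    while True:
        msg = requests.get()
        if msg is None:
            break
        func, args = msg
        replies.put(env[func](*args))

def remote_call(requests, replies, func, *args):
    # Synchronous RPC: send the command, block until the result returns.
    requests.put((func, args))
    return replies.get()

def run_demo():
    requests, replies = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(requests, replies))
    t.start()
    result = remote_call(requests, replies, "step", 0.1)
    requests.put(None)  # shutdown doubles as the final synchronization
    t.join()
    return result
```

Blocking on the reply queue is what makes the call synchronous; an asynchronous variant would simply defer the `replies.get()` until the result is needed, which is the usual way such systems overlap communication with computation.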
Abstract:
Parallel phenotypic divergence in replicated adaptive radiations could either result from parallel genetic divergence in response to similar divergent selection regimes or from equivalent phenotypically plastic responses to the repeated occurrence of contrasting environments. In post-glacial fish, replicated divergence in phenotypes along the benthic-limnetic habitat axis is commonly observed. Here, we use two benthic-limnetic species pairs of whitefish from two Swiss lakes, raised in a common garden design, with reciprocal food treatments in one species pair, to experimentally measure whether feeding efficiency on benthic prey has a genetic basis or whether it results from phenotypic plasticity (or both). To do so, we offered experimental fish mosquito larvae, partially buried in sand, and measured multiple feeding efficiency variables. Our results reveal both genetic divergence and phenotypically plastic divergence in feeding efficiency, with the phenotypically benthic species raised on benthic food being the most efficient forager on benthic prey. This indicates that both divergent natural selection on genetically heritable traits and adaptive phenotypic plasticity are likely important mechanisms driving phenotypic divergence in adaptive radiation.
Abstract:
Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
Abstract:
We present a parallel graph narrowing machine, which is used to implement a functional logic language on a shared memory multiprocessor. It is an extension of an abstract machine for a purely functional language. The result is a programmed graph reduction machine which integrates the mechanisms of unification, backtracking, and independent and-parallelism. In the machine, the subexpressions of an expression can run in parallel. In the case of backtracking, the structure of an expression is used to avoid the reevaluation of subexpressions as far as possible. Deterministic computations are detected. Their results are maintained and need not be reevaluated after backtracking.
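The last idea in this abstract — keeping the results of deterministic computations so that backtracking does not redo them — can be shown with a tiny sketch. The following Python snippet is an illustration of that caching principle only, not of the graph narrowing machine itself; all names are hypothetical.

```python
# Illustrative sketch: a deterministic subexpression is evaluated once
# and its result is kept across backtracking over choice points, so
# failed branches do not trigger reevaluation.
calls = {"f": 0}   # counts how often the deterministic work is done
memo = {}

def deterministic(key, thunk):
    # Evaluate a deterministic subexpression at most once; subsequent
    # retries after backtracking reuse the stored result.
    if key not in memo:
        memo[key] = thunk()
    return memo[key]

def f(x):
    calls["f"] += 1
    return x * x

def solve():
    # Nondeterministic search with backtracking: try candidates until one
    # satisfies the constraint. The deterministic part f(7) is shared by
    # every branch instead of being recomputed on each retry.
    for candidate in (1, 2, 49):              # choice points
        base = deterministic("f7", lambda: f(7))
        if candidate == base:                 # constraint; failure backtracks
            return candidate
    return None
```

Even though the search backtracks through three candidates, `f` runs only once, which is the saving the machine's detection of deterministic computations is meant to deliver.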
Abstract:
The fracture behavior parallel to the fibers of an E-glass/epoxy unidirectional laminate was studied by means of three-point tests on notched beams. Selected tests were carried out within a scanning electron microscope to ascertain the damage and fracture micromechanisms upon loading. The mechanical behavior of the notched beam was simulated within the framework of the embedded cell model, in which the actual composite microstructure was resolved in front of the notch tip. In addition, matrix and interface properties were independently measured in situ using a nanoindentor. The numerical simulations very accurately predicted the macroscopic response of the composite as well as the damage development and crack growth in front of the notch tip, demonstrating the ability of the embedded cell approach to simulate the fracture behavior of heterogeneous materials. Finally, this methodology was exploited to ascertain the influence of matrix and interface properties on the intraply toughness.
Abstract:
The hearing organ of the inner ear was the last of the paired sense organs of amniotes to undergo formative evolution. As a mechanical sensory organ, the inner-ear hearing organ's function depends highly on its physical structure. Comparative studies suggest that the hearing organ of the earliest amniote vertebrates was small and simple, but possessed hair cells with a cochlear amplifier mechanism, electrical frequency tuning, and incipient micromechanical tuning. The separation of the different groups of amniotes from the stem reptiles occurred relatively early, with the ancestors of the mammals branching off first, approximately 320 million years ago. The evolution of the hearing organ in the three major lines of the descendants of the stem reptiles (i.e., mammals, birds-crocodiles, and lizards-snakes) thus occurred independently over long periods of time. Dramatic and parallel improvements in the middle ear initiated papillar elongation in all lineages, accompanied by increased numbers of sensory cells with enhanced micromechanical tuning and group-specific hair-cell specializations that resulted in unique morphological configurations. This review aims not only to compare structure and function across classification boundaries (the comparative approach), but also to assess how and to what extent fundamental mechanisms were influenced by selection pressures in times past (the phylogenetic viewpoint).
Abstract:
Mechanisms of speciation are not well understood, despite decades of study. Recent work has focused on how natural and sexual selection cause sexual isolation. Here, we investigate the roles of divergent natural and sexual selection in the evolution of sexual isolation between sympatric species of threespine sticklebacks. We test the importance of morphological and behavioral traits in conferring sexual isolation and examine to what extent these traits have diverged in parallel between multiple, independently evolved species pairs. We use the patterns of evolution in ecological and mating traits to infer the likely nature of selection on sexual isolation. Strong parallel evolution implicates ecologically based divergent natural and/or sexual selection, whereas arbitrary directionality implicates nonecological sexual selection or drift. In multiple pairs we find that sexual isolation arises in the same way: assortative mating on body size and asymmetric isolation due to male nuptial color. Body size and color have diverged in a strongly parallel manner, similar to ecological traits. The data implicate ecologically based divergent natural and sexual selection as engines of speciation in this group.
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood estimation is used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity, in terms of memory and speed, scales quadratically and cubically respectively. Most modern commodity hardware has at least 2 processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics.
By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
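The Vecchia-style likelihood approximation this abstract builds on can be sketched numerically: the joint Gaussian log-likelihood (cubic cost in n) is replaced by a sum of conditional densities p(y_i | nearest m predecessors), each involving only a small (m+1)×(m+1) covariance block and evaluable independently of the others — the natural parallelism mentioned above. The snippet below is a minimal one-dimensional illustration with an exponential covariance; the covariance model, parameter values, and function names are assumptions for the sketch, not the paper's implementation.

```python
# Hedged sketch of a Vecchia-type likelihood approximation for a
# zero-mean Gaussian process with an (illustrative) exponential
# covariance. Each term of vecchia_loglik is an independent task that
# could be handed to a separate core.
import numpy as np

def expcov(x1, x2, sigma2=1.0, rng=0.3):
    # Exponential covariance matrix between 1-D location vectors.
    return sigma2 * np.exp(-np.abs(x1[:, None] - x2[None, :]) / rng)

def exact_loglik(x, y):
    # Exact Gaussian log-likelihood: O(n^3) time, O(n^2) memory.
    K = expcov(x, x) + 1e-10 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L, y)
    return -0.5 * (a @ a) - np.log(np.diag(L)).sum() \
           - 0.5 * len(x) * np.log(2 * np.pi)

def vecchia_loglik(x, y, m=5):
    # Sum of conditionals p(y_i | y_{i-m..i-1}); each term touches only
    # an (m+1)x(m+1) block, so cost is linear in n for fixed m.
    ll = 0.0
    for i in range(len(x)):
        nb = np.arange(max(0, i - m), i)        # conditioning set
        idx = np.append(nb, i)
        K = expcov(x[idx], x[idx]) + 1e-10 * np.eye(len(idx))
        if len(nb):
            w = np.linalg.solve(K[:-1, :-1], K[:-1, -1])
            mu = w @ y[nb]
            var = K[-1, -1] - w @ K[:-1, -1]
        else:
            mu, var = 0.0, K[-1, -1]
        ll += -0.5 * np.log(2 * np.pi * var) - 0.5 * (y[i] - mu) ** 2 / var
    return ll
```

For ordered 1-D data with an exponential covariance the process is Markov, so the approximation with m ≥ 1 matches the exact likelihood almost perfectly; in higher dimensions the two differ and m controls the accuracy/cost tradeoff.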