923 results for Compliant parallel mechanism
Abstract:
Lava dome eruptions are sometimes characterised by large periodic fluctuations in extrusion rate over periods of hours, which may be accompanied by Vulcanian explosions and pyroclastic flows. To simulate this short-period cyclicity, we consider a simple system of nonlinear equations describing the 1D flow of lava through a deep elastic dyke feeding a shallower cylindrical conduit. Stick-slip conditions, controlled by a critical shear stress, are assumed at the wall boundary of the cylindrical conduit. By analogy with the behaviour of industrial polymers in a plastic extruder, the elastic dyke acts like a barrel and the shallower cylindrical portion of the conduit like a die, with the magma playing the role of the polymer. When the model is applied to the Soufrière Hills Volcano, Montserrat, with key parameters evaluated from previous studies, cyclic extrusions with periods of 3 to 30 h are readily simulated, matching observations. The model also reproduces the shortened cycle period observed when a major unloading event occurs due to lava dome collapse.
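To make the cyclic mechanism concrete, here is a minimal toy sketch of a stick-slip relaxation oscillator. It is not the paper's system of equations: the elastic dyke is idealised as a pressure reservoir fed at a constant rate, and the conduit flows only while the pressure exceeds a critical threshold, with hysteresis. All names and parameter values (Q_in, tau_stick, tau_slip, mobility) are illustrative assumptions.

```python
# Toy stick-slip extrusion cycle: an elastic reservoir fed at constant rate
# drains through a conduit that only flows while "unstuck". Illustrative only.

Q_in = 1.0          # constant magma supply into the elastic dyke (arbitrary units)
k = 0.05            # dyke elastic stiffness: dP/dt = k * (Q_in - Q_out)
tau_stick = 1.0     # pressure (shear-stress proxy) needed to start slipping
tau_slip = 0.4      # pressure below which the conduit re-sticks
mobility = 3.0      # outflow per unit pressure while slipping

dt, t_end = 0.01, 200.0
P, slipping = 0.0, False
rates = []

for step in range(int(t_end / dt)):
    Q_out = mobility * P if slipping else 0.0
    P += dt * k * (Q_in - Q_out)          # elastic storage in the dyke
    if not slipping and P >= tau_stick:   # critical stress exceeded: slip starts
        slipping = True
    elif slipping and P <= tau_slip:      # stress relaxed: conduit re-sticks
        slipping = False
    rates.append(Q_out)

# Q_out alternates between quiescence and extrusion pulses, giving periodic
# cycles; over many cycles the mean rate approaches the supply rate.
print(f"mean extrusion rate: {sum(rates) / len(rates):.2f} (supply Q_in = {Q_in})")
```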
Abstract:
Many virulence organelles of Gram-negative bacterial pathogens are assembled via the chaperone/usher pathway. The chaperone transports organelle subunits across the periplasm to the outer-membrane usher, where they are released and incorporated into growing fibers. Here, we elucidate the mechanism of the usher-targeting step in assembly of the Yersinia pestis F1 capsule at the atomic level. The usher interacts almost exclusively with the chaperone in the chaperone:subunit complex. In the free chaperone, a pair of conserved proline residues at the beginning of the subunit-binding loop forms a "proline lock" that occludes the usher-binding surface and blocks usher binding. Binding of the subunit to the chaperone rotates the proline lock away from the usher-binding surface, allowing the chaperone:subunit complex to bind to the usher. We show that the proline lock exists in other chaperone/usher systems and represents a general allosteric mechanism for selective targeting of chaperone:subunit complexes to the usher and for release and recycling of the free chaperone.
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well to large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well to large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework for parallelising algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
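For readers unfamiliar with the Prism family, the following minimal sketch shows the separate-and-conquer loop it is built on, in the style of Cendrowska's original Prism for a single target class; the scalability improvements and parallel framework discussed in the paper are not reproduced, and the dataset layout and identifiers are illustrative.

```python
def induce_rules_for_class(data, target, attributes):
    """Learn rules of the form [(attr, value), ...] -> target."""
    rules = []
    remaining = list(data)
    while any(r['class'] == target for r in remaining):
        rule, covered = [], remaining
        # Specialise the rule until it covers only the target class.
        while any(r['class'] != target for r in covered):
            best, best_prob = None, -1.0
            for a in attributes:
                if any(a == ra for ra, _ in rule):
                    continue          # use each attribute at most once per rule
                for v in {r[a] for r in covered}:
                    subset = [r for r in covered if r[a] == v]
                    prob = sum(r['class'] == target for r in subset) / len(subset)
                    if prob > best_prob:
                        best, best_prob = (a, v), prob
            if best is None:          # attributes exhausted (inconsistent data)
                break
            rule.append(best)
            covered = [r for r in covered if r[best[0]] == best[1]]
        rules.append(rule)
        # 'Separate': remove the covered instances, then 'conquer' the rest.
        remaining = [r for r in remaining if not all(r[a] == v for a, v in rule)]
    return rules

# Toy usage on a four-instance dataset (illustrative only):
data = [
    {'outlook': 'sunny', 'windy': 'no',  'class': 'play'},
    {'outlook': 'sunny', 'windy': 'yes', 'class': 'play'},
    {'outlook': 'rain',  'windy': 'no',  'class': 'play'},
    {'outlook': 'rain',  'windy': 'yes', 'class': 'stay'},
]
print(induce_rules_for_class(data, 'play', ['outlook', 'windy']))
```

The nested loops make the scaling problem visible: every specialisation step rescans the currently covered subset for every remaining attribute-value pair.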
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
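The blackboard style of coordination can be sketched as follows; threads on a single machine stand in for learner machines on a LAN, and the board is a simple shared queue. This illustrates the pattern only, not the DRI implementation.

```python
# Blackboard coordination sketch: workers post partial results to a shared
# board; a moderator assembles the global rule set. Illustrative only.
import threading
import queue

blackboard = queue.Queue()          # shared board

def worker(partition_id, rows):
    # Stand-in for a Prism learner inducing rules on its local data partition.
    local_rules = [f"rule-from-partition-{partition_id}-row-{r}" for r in rows]
    blackboard.put((partition_id, local_rules))   # post results to the board

partitions = {0: [1, 2], 1: [3, 4], 2: [5]}
threads = [threading.Thread(target=worker, args=(pid, rows))
           for pid, rows in partitions.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The moderator reads all postings and assembles the global result.
global_rules = []
while not blackboard.empty():
    pid, rules = blackboard.get()
    global_rules.extend(rules)
print(global_rules)
```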
Abstract:
In a world where data is captured on a large scale, the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top down induction of decision trees; the other is the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach; however, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
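The natural data-parallel point in a separate-and-conquer learner is the scoring of candidate attribute-value terms, and a pre-pruning facility can be expressed as a coverage cutoff applied during that scoring. The sketch below illustrates this shape only; it is not the paper's framework, and MIN_COVERAGE, score_term and the toy data are assumptions.

```python
# Parallel scoring of candidate terms with a simple pre-pruning cutoff.
from concurrent.futures import ThreadPoolExecutor

MIN_COVERAGE = 2   # pre-pruning: ignore terms covering fewer instances

def score_term(args):
    data, target, attr, value = args
    covered = [r for r in data if r[attr] == value]
    if len(covered) < MIN_COVERAGE:          # pre-prune weak terms early
        return (attr, value), -1.0
    prob = sum(r['class'] == target for r in covered) / len(covered)
    return (attr, value), prob

data = [
    {'a': 'x', 'b': 'p', 'class': 'yes'},
    {'a': 'x', 'b': 'q', 'class': 'yes'},
    {'a': 'y', 'b': 'p', 'class': 'no'},
    {'a': 'y', 'b': 'q', 'class': 'yes'},
]
candidates = [(data, 'yes', attr, v)
              for attr in ('a', 'b')
              for v in {r[attr] for r in data}]

with ThreadPoolExecutor() as pool:           # one scoring task per candidate
    scored = list(pool.map(score_term, candidates))

best_term, best_prob = max(scored, key=lambda s: s[1])
print(best_term, best_prob)   # e.g. ('a', 'x') with probability 1.0
```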
Abstract:
The rapid increase in the size and number of databases demands data mining approaches that are scalable to large amounts of data. This has led to the exploration of parallel computing technologies in order to perform data mining tasks concurrently using several processors. Parallelization seems to be a natural and cost-effective way to scale up data mining technologies. One of the most important of these data mining technologies is the classification of newly recorded data. This paper surveys advances in parallelization in the field of classification rule induction.
Abstract:
Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, Random Prism suffers, like any ensemble learner, from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may impose a computational challenge for ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
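The parallelisation has a natural MapReduce shape, sketched below with plain Python standing in for Hadoop: the map step trains one base classifier per bootstrap sample, and the reduce step combines predictions by majority vote. The base learner here is a trivial majority-class stub rather than a Prism base classifier, and all names are assumptions.

```python
# MapReduce-shaped bagging sketch: bootstrap, train, majority-vote.
import random
from collections import Counter

data = [({'f': i % 3}, 'yes' if i % 2 else 'no') for i in range(20)]

def map_train(seed):
    """Map step: train one base classifier on a bootstrap sample."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in range(len(data))]
    # The 'model' is just the sample's majority class (stub base learner).
    return Counter(label for _, label in sample).most_common(1)[0][0]

def reduce_vote(predictions):
    """Reduce step: combine base predictions by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

models = [map_train(seed) for seed in range(11)]   # 11 base classifiers
predictions = models          # each stub model predicts its stored class
print(reduce_vote(predictions))                    # ensemble prediction
```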
Abstract:
An estimated 3% of the global population are infected with hepatitis C virus (HCV), and the majority of these individuals will develop chronic liver disease. As with other chronic viruses, establishment of persistent infection requires that HCV-infected cells must be refractory to a range of pro-apoptotic stimuli. In response to oxidative stress, amplification of an outward K(+) current mediated by the Kv2.1 channel precedes the onset of apoptosis. We show here that in human hepatoma cells either infected with HCV or harboring an HCV subgenomic replicon, oxidative stress failed to initiate apoptosis via Kv2.1. The HCV NS5A protein mediated this effect by inhibiting oxidative stress-induced p38 MAPK phosphorylation of Kv2.1. The inhibition of a host cell K(+) channel by a viral protein is a hitherto undescribed viral anti-apoptotic mechanism and represents a potential target for antiviral therapy.
Abstract:
Atlantic Multidecadal Variability (AMV) is investigated in a millennial control simulation with the Kiel Climate Model (KCM), a coupled atmosphere–ocean–sea ice model. An oscillatory mode with a period of approximately 60 years and characteristics similar to observations is identified with the aid of three-dimensional temperature and salinity joint empirical orthogonal function analysis. The mode explains 30 % of variability on centennial and shorter timescales in the upper 2,000 m of the North Atlantic. It is associated with changes in the Atlantic Meridional Overturning Circulation (AMOC) of ±1–2 Sv and Atlantic Sea Surface Temperature (SST) of ±0.2 °C. AMV in KCM results from an out-of-phase interaction between horizontal and vertical ocean circulation, coupled through Irminger Sea convection. Wintertime convection in this region is mainly controlled by salinity anomalies transported by the Subpolar Gyre (SPG). Increased (decreased) dense water formation in this region leads to a stronger (weaker) AMOC after 15 years, and this in turn leads to a weaker (stronger) SPG after another 15 years. The key role of salinity variations in the subpolar North Atlantic for AMV is confirmed in a 1,000 year long simulation with salinity restored to model climatology: no low frequency variations in convection are simulated, and the 60 year mode of variability is absent.
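As a reminder of the technique, empirical orthogonal function (EOF) analysis reduces to a singular value decomposition of the anomaly field, as in the following self-contained sketch on synthetic data with an imposed 60-step oscillation; the paper's joint three-dimensional temperature/salinity analysis is not reproduced, and the dimensions and noise level are assumptions.

```python
# EOF analysis via SVD of the anomaly matrix (time x space), synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_space = 1000, 50                    # e.g. years x grid points
t = np.arange(n_time)
mode = np.sin(2 * np.pi * t / 60)             # imposed ~60-'year' oscillation
pattern = rng.standard_normal(n_space)        # its fixed spatial pattern
field = np.outer(mode, pattern) + 0.5 * rng.standard_normal((n_time, n_space))

anomaly = field - field.mean(axis=0)          # remove the time mean
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

explained = s**2 / np.sum(s**2)               # explained-variance fractions
print(f"EOF1 explains {100 * explained[0]:.1f}% of the variance")
# Vt[0] is the leading spatial pattern; U[:, 0] * s[0] its principal component.
```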
Abstract:
Java is becoming an increasingly popular language for developing distributed and parallel scientific and engineering applications. Jini is a Java-based infrastructure developed by Sun that can allegedly provide all the services necessary to support distributed applications. It is the aim of this paper to explore and investigate the services and properties that Jini actually provides and match these against the needs of high performance distributed and parallel applications written in Java. The motivation for this work is the need to develop a distributed infrastructure to support an MPI-like interface to Java known as MPJ. In the first part of the paper we discuss the needs of MPJ, the parallel environment that we wish to support. In particular we look at aspects such as reliability and ease of use. We then move on to sketch out the Jini architecture and review the components and services that Jini provides. In the third part of the paper we critically explore a Jini infrastructure that could be used to support MPJ. Here we are particularly concerned with Jini's ability to reliably support a cocoon of MPJ processes executing in a heterogeneous environment. In the final part of the paper we summarise our findings and report on future work being undertaken on Jini and MPJ.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
Abstract:
A new wire mechanism called the Redundant Drive Wire Mechanism (RDWM) is proposed. The purpose of this paper is to build up the theory of an RDWM capable of both fast motion and fine motion. First, the basic concepts of the proposed mechanism are presented. Second, the vector closure condition for the proposed mechanism is developed. Next, we present the basic equations and propose the basic structure of the RDWM, comprising an Internal DOF module, Double Actuation Modules and Precision Modules, together with the properties of the mechanism. Finally, we conduct simulations to show the validity of the RDWM.
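A common way to state a vector closure condition for wire mechanisms, since wires can only pull, is that the structure matrix must have full row rank and admit a strictly positive tension vector in its null space. The sketch below checks this standard condition numerically; it is an assumption that this matches the paper's exact formulation, and the helper has_vector_closure and the toy geometries are illustrative.

```python
# Numerical vector-closure (tensionability) check for a wire mechanism.
import numpy as np
from scipy.optimize import linprog

def has_vector_closure(W):
    """W: (n_dof x n_wires) structure matrix of unit wire directions."""
    n_dof, n_wires = W.shape
    if np.linalg.matrix_rank(W) < n_dof:
        return False                      # wires cannot span the task space
    # Feasibility LP: find t >= 1 (hence strictly positive) with W @ t = 0.
    res = linprog(c=np.zeros(n_wires), A_eq=W, b_eq=np.zeros(n_dof),
                  bounds=[(1, None)] * n_wires, method="highs")
    return res.success

# Toy planar point mass (2 DOF): three wires at 120-degree intervals achieve
# closure; the first two of those wires alone do not.
angles = np.deg2rad([90, 210, 330])
W3 = np.vstack([np.cos(angles), np.sin(angles)])   # unit wire directions
W2 = W3[:, :2]
print(has_vector_closure(W3))   # True
print(has_vector_closure(W2))   # False
```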
Abstract:
Climate models consistently predict a strengthened Brewer–Dobson circulation in response to greenhouse gas (GHG)-induced climate change. Although the predicted circulation changes are clearly the result of changes in stratospheric wave drag, the mechanism behind the wave-drag changes remains unclear. Here, simulations from a chemistry–climate model are analyzed to show that the changes in resolved wave drag are largely explainable in terms of a simple and robust dynamical mechanism, namely changes in the location of critical layers within the subtropical lower stratosphere, which are known from observations to control the spatial distribution of Rossby wave breaking. In particular, the strengthening of the upper flanks of the subtropical jets that is robustly expected from GHG-induced tropospheric warming pushes the critical layers (and the associated regions of wave drag) upward, allowing more wave activity to penetrate into the subtropical lower stratosphere. Because the subtropics represent the critical region for wave driving of the Brewer–Dobson circulation, the circulation is thereby strengthened. Transient planetary-scale waves and synoptic-scale waves generated by baroclinic instability are both found to play a crucial role in this process. Changes in stationary planetary wave drag are not so important because they largely occur away from subtropical latitudes.
Abstract:
The huge warming of the Arctic that started in the early 1920s and lasted for almost two decades is one of the most spectacular climate events of the twentieth century. During the peak period 1930–40, the annually averaged temperature anomaly for the area 60°–90°N amounted to some 1.7°C. Whether this event is an example of an internal climate mode or is externally forced, such as by enhanced solar effects, is presently under debate. This study suggests that natural variability is a likely cause, with reduced sea ice cover being crucial for the warming. A robust sea ice–air temperature relationship was demonstrated by a set of four simulations with the atmospheric ECHAM model forced with observed SST and sea ice concentrations. An analysis of the spatial characteristics of the observed early twentieth-century surface air temperature anomaly revealed that it was associated with similar sea ice variations. Further investigation of the variability of Arctic surface temperature and sea ice cover was performed by analyzing data from a coupled ocean–atmosphere model. By analyzing climate anomalies in the model that are similar to those that occurred in the early twentieth century, it was found that the simulated temperature increase in the Arctic was related to enhanced wind-driven oceanic inflow into the Barents Sea with an associated sea ice retreat. The magnitude of the inflow is linked to the strength of westerlies into the Barents Sea. This study proposes a mechanism sustaining the enhanced westerly winds by a cyclonic atmospheric circulation in the Barents Sea region created by a strong surface heat flux over the ice-free areas. Observational data suggest a similar series of events during the early twentieth-century Arctic warming, including increasing westerly winds between Spitsbergen and Norway, reduced sea ice, and enhanced cyclonic circulation over the Barents Sea. At the same time, the North Atlantic Oscillation was weakening.