Abstract:
Workspace analysis and optimization are important in manipulator design. Because the complete workspace of a 6-DOF manipulator is embedded in a six-dimensional space, it is difficult to quantify and qualify. Most studies in the literature consider only 3-D sub-workspaces of the complete 6-D workspace. In this paper, a finite-partition approach to the Special Euclidean group SE(3) is proposed based on the topological properties of SE(3), which is the product of the Special Orthogonal group SO(3) and R^3. It is known that SO(3) is homeomorphic to a solid ball D^3 with antipodal points identified, while the geometry of R^3 can be regarded as a cuboid. For the first time, the complete 6-D workspace SE(3) is parametrically and proportionally partitioned into a number of elements with uniform convergence based on its geometry. As a result, a basis volume element of SE(3) is formed by the product of a basis volume element of R^3 and a basis volume element of SO(3), the latter being the product of a basis volume element of D^3 and its associated integration measure. In this way, the integration of the complete 6-D workspace volume reduces to a simple summation of the basis volume elements of SE(3). Two new global performance indices, the workspace volume ratio Wr and the global condition index GCI, are defined over the complete 6-D workspace. A newly proposed 3-RPPS parallel manipulator is optimized using this finite-partition approach. As a result, the optimal dimensions for maximal workspace are obtained, and the optimal performance points in the workspace are identified.
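As a rough illustration of the finite-partition idea, the sketch below discretizes SE(3) as the product of a cuboid in R^3 and the solid ball D^3 parametrizing SO(3), weighting each D^3 cell by the Haar-measure density 2(1 - cos θ)/θ² so that the rotational volume integrates to 8π². The `reachable` predicate, the grid resolutions and the cuboid bounds are hypothetical placeholders for the manipulator-specific kinematics; this is a minimal sketch of the summation, not the authors' parametrization.

```python
import numpy as np

def so3_haar_density(theta):
    # Haar-measure density of SO(3) in axis-angle (solid-ball D^3) coordinates;
    # it integrates to 8*pi^2 over the ball of radius pi.
    t = np.where(theta == 0.0, 1e-12, theta)
    return 2.0 * (1.0 - np.cos(t)) / t**2

def workspace_volume_ratio(reachable, cuboid, n_t=6, n_r=6):
    """Approximate Wr = (reachable 6-D volume) / (total 6-D volume) by summing
    basis volume elements of SE(3) = R^3 x SO(3).
    `reachable(p, omega) -> bool` is a hypothetical stand-in for the
    manipulator's inverse-kinematics feasibility test."""
    xs, ys, zs = [np.linspace(lo, hi, n_t, endpoint=False) + (hi - lo) / (2 * n_t)
                  for lo, hi in cuboid]
    d_trans = np.prod([(hi - lo) / n_t for lo, hi in cuboid])  # basis element of R^3
    # Mid-point grid over the solid ball D^3 (axis-angle vector omega, |omega| <= pi)
    axis_pts = np.linspace(-np.pi, np.pi, n_r, endpoint=False) + np.pi / n_r
    d_ball = (2 * np.pi / n_r) ** 3
    vol_reach = vol_total = 0.0
    for wx in axis_pts:
        for wy in axis_pts:
            for wz in axis_pts:
                theta = np.sqrt(wx**2 + wy**2 + wz**2)
                if theta > np.pi:
                    continue
                # basis element of SO(3) = D^3 element times its integration measure
                d_se3 = d_trans * so3_haar_density(theta) * d_ball
                for x in xs:
                    for y in ys:
                        for z in zs:
                            vol_total += d_se3
                            if reachable((x, y, z), (wx, wy, wz)):
                                vol_reach += d_se3
    return vol_reach / vol_total
```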
Abstract:
Mycobacterium avium subsp. paratuberculosis causes paratuberculosis (Johne's disease) in ruminants in most countries. Historical data suggest substantial differences in the culturability of M. avium subsp. paratuberculosis isolates from small ruminants and cattle; however, a systematic comparison of culture media and of isolates from different countries and hosts has not been undertaken. Here, 35 field isolates from the United States, Spain, Northern Ireland, and Australia were propagated in Bactec 12B medium and on Middlebrook 7H10 agar, genomically characterized, and subcultured to Lowenstein-Jensen (LJ), Herrold's egg yolk (HEY), modified Middlebrook 7H10, Middlebrook 7H11, and Watson-Reid (WR) agars, all with and without mycobactin J and some with sodium pyruvate. Fourteen genotypes of M. avium subsp. paratuberculosis were represented, as determined by BstEII IS900 and IS1311 restriction fragment length polymorphism analysis. There was no correlation between genotype and overall culturability, although most S strains tended to grow poorly on HEY agar. Pyruvate was inhibitory to some isolates. All strains grew on modified Middlebrook 7H10 agar, but more slowly and less prolifically on LJ agar. Mycobactin J was required for growth on all media except 7H11 agar, although growth on 7H11 agar was improved by the addition of mycobactin J. WR agar supported the growth of few isolates. The differences in growth of M. avium subsp. paratuberculosis that have historically been reported in diverse settings have been strongly influenced by the type of culture medium used. When an optimal culture medium, such as modified Middlebrook 7H10 agar, is used, very little difference between the growth phenotypes of diverse strains of M. avium subsp. paratuberculosis is observed. This optimal medium is recommended to remove bias in the isolation and cultivation of M. avium subsp. paratuberculosis.
Abstract:
The use of accelerators, with compute architectures distinct from that of the CPU, has become a new research frontier in high-performance computing over the past five years. This paper is a case study of how the instruction-level parallelism offered by three accelerator technologies (FPGA, GPU and ClearSpeed) can be exploited in atomic physics. The algorithm studied is the evaluation of two-electron integrals by direct numerical quadrature, a task that arises in the study of intermediate-energy electron scattering by hydrogen atoms. The results of our 'productivity' study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
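The paper's integrals arise in e-H scattering; as a much simpler stand-in that still shows the direct-quadrature structure, the sketch below evaluates the classic two-electron repulsion integral for two hydrogenic 1s densities, where angular integration reduces 1/r12 to 1/r_> and the exact value is 5/8 hartree. The grid size and cutoff are arbitrary choices; the outer-product evaluation mirrors the fine-grained data parallelism an accelerator would exploit.

```python
import numpy as np

# Radial density of a hydrogenic 1s orbital (Z = 1), normalized to 1.
def rho_1s(r):
    return 4.0 * r**2 * np.exp(-2.0 * r)

def two_electron_integral(n=2000, r_max=20.0):
    # Direct 2-D quadrature of I = ∫∫ rho(r1) rho(r2) / max(r1, r2) dr1 dr2;
    # for s-type densities the angular parts reduce 1/r12 to 1/r_>.
    r = np.linspace(1e-6, r_max, n)
    w = np.full(n, r[1] - r[0])          # simple rectangle-rule weights
    f = rho_1s(r) * w
    # The outer product evaluates all n*n quadrature points at once -- the
    # data-parallel structure an FPGA/GPU/ClearSpeed implementation exploits.
    r_greater = np.maximum.outer(r, r)
    return float(f @ (1.0 / r_greater) @ f)

print(two_electron_integral())   # ≈ 0.625 = 5/8 hartree for 1s-1s (Z = 1)
```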
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known problem assigns lectures to a fixed number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm, inspired by electromagnetism, that simulates the attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that accepts worse solutions provided they do not exceed a given upper boundary, or 'level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the 'level' within the search process. The proposed method has been applied to a range of benchmark university course timetabling problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method produces improved solutions over those currently published. We believe this is due to the combination of the two approaches and the resultant algorithm's ability to drive all solutions toward convergence throughout the search process.
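A minimal sketch of the great deluge skeleton the hybrid builds on is shown below; in the paper the level's decay rate comes from the EM attraction-repulsion force, whereas here `decay_rate` is a plain parameter standing in for that dynamic quantity, and `neighbour` and `cost` are problem-specific stubs.

```python
def great_deluge(initial, neighbour, cost, decay_rate, max_iters=10000):
    """Great deluge search: a worse candidate is accepted only while its cost
    stays under a falling water 'level'. In the EM-GD hybrid, `decay_rate`
    would be derived from the attraction-repulsion force each iteration."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    level = current_cost
    for _ in range(max_iters):
        cand = neighbour(current)
        cand_cost = cost(cand)
        # accept improvements, or worse moves still below the water level
        if cand_cost <= current_cost or cand_cost <= level:
            current, current_cost = cand, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        level -= decay_rate          # lower the water level
    return best, best_cost
```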
Abstract:
Voice over IP (VoIP) has experienced tremendous growth over the last few years and is now widely used by the general population and for business purposes. The security of such VoIP systems is often assumed, creating a false sense of privacy. This paper investigates in detail the leakage of information from Skype, a widely used and protected VoIP application. Experiments have shown that isolated phonemes can be classified and that given sentences can be identified. Using the dynamic time warping (DTW) algorithm, frequently used in speech processing, an accuracy of 60% can be reached. The results can be further improved by choosing specific training data, reaching an accuracy of 83% under specific conditions. Because the initial results are speaker dependent, an approach involving the Kalman filter is proposed to extract the kernel of all training signals.
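For reference, a minimal dynamic time warping implementation with nearest-neighbour classification is sketched below; the feature sequences (e.g., per-frame packet-length traces from the encrypted stream) and the template set are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two 1-D feature sequences (e.g. per-frame
    packet-length features extracted from an encrypted VoIP stream)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            D[i, j] = d + min(D[i - 1, j],        # insertion
                              D[i, j - 1],        # deletion
                              D[i - 1, j - 1])    # match
    return D[n, m]

def classify(sample, templates):
    """Nearest-neighbour classification over labelled (label, sequence) pairs."""
    return min(templates, key=lambda t: dtw_distance(sample, t[1]))[0]
```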
Abstract:
This paper describes the application of an improved nonlinear principal component analysis (PCA) to the detection of faults in polymer extrusion processes. Since the processes are complex in nature and nonlinear relationships exist between the recorded variables, an improved nonlinear PCA, which incorporates radial basis function (RBF) networks and principal curves, is proposed. The algorithm comprises two stages. The first stage uses a serial principal curve to obtain the nonlinear scores and approximated data. The second stage constructs two RBF networks, using a fast recursive algorithm, to solve the topology problem of traditional nonlinear PCA. The benefits of this improvement are demonstrated in a practical application to a polymer extrusion process.
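The sketch below illustrates the two-stage shape of such a scheme under stated assumptions: the principal-curve stage is taken as given (its scores `t` and approximations `X_hat` are inputs), and the two RBF networks are fitted by plain least squares rather than the paper's fast recursive algorithm.

```python
import numpy as np

def fit_rbf(X, Y, centres, width):
    """Least-squares fit of a Gaussian RBF network mapping X -> Y
    (X is (N, d), Y is (N, q), centres is (k, d))."""
    Phi = np.exp(-np.square(np.linalg.norm(X[:, None, :] - centres[None, :, :],
                                           axis=2)) / (2 * width**2))
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return lambda Xn: np.exp(-np.square(np.linalg.norm(
        Xn[:, None, :] - centres[None, :, :], axis=2)) / (2 * width**2)) @ W

def build_nlpca(X, t, X_hat, centres_x, centres_t, width=1.0):
    """Stage 1 (assumed done): the serial principal curve supplies nonlinear
    scores t and approximated data X_hat. Stage 2: two RBF networks replace
    the curve at run time, F: X -> t and G: t -> X_hat."""
    F = fit_rbf(X, t, centres_x, width)
    G = fit_rbf(t, X_hat, centres_t, width)
    def spe(Xn):                      # squared prediction error for fault detection
        return np.sum((Xn - G(F(Xn)))**2, axis=1)
    return F, G, spe
```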
Abstract:
Background: Human respiratory syncytial virus (RSV) causes severe respiratory disease in infants. Airway epithelial cells are the principal targets of RSV infection; however, the mechanisms by which it causes disease are poorly understood. Most RSV pathogenesis data are derived using laboratory-adapted prototypic strains. We hypothesized that such strains may be poorly representative of recent clinical isolates in terms of virus/host interactions in primary human bronchial epithelial cells (PBECs). Methods: To address this hypothesis, we isolated three RSV strains from infants hospitalized with bronchiolitis and compared them with the prototypic RSV A2 in terms of cytopathology, virus growth kinetics and chemokine secretion in infected PBEC monolayers. Results: RSV A2 rapidly obliterated the PBECs, whereas the clinical isolates caused much less cytopathology. Concomitantly, RSV A2 also grew faster and to higher titers in PBECs. Furthermore, dramatically increased secretion of IP-10 and RANTES was evident following A2 infection compared with the clinical isolates. Conclusions: The prototypic RSV strain A2 is poorly representative of recent clinical isolates in terms of cytopathogenicity, viral growth kinetics and pro-inflammatory responses induced following infection of PBEC monolayers. Thus, the choice of RSV strain may have important implications for future RSV pathogenesis studies.
Abstract:
A queue manager (QM) is a core traffic management (TM) function used to provide per-flow queuing in access and metro networks; however, current designs have limited scalability. An on-demand QM (OD-QM), part of a new modular field-programmable gate array (FPGA)-based TM, is presented that dynamically maps active flows to the available physical resources; its scalability derives from exploiting the observation that only a few hundred flows are active at a time in a high-speed network. Simulations with real traffic show that it is a scalable, cost-effective approach that enhances per-flow queuing performance, thereby allowing per-flow QM without the need for extra external memory at speeds up to 10 Gbps. It utilizes 2.3%–16.3% of a Xilinx XC5VSX50t FPGA and operates at 111 MHz.
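A software sketch of the on-demand mapping idea, with a dictionary standing in for the hardware flow table; the class name and pool-exhaustion policy are illustrative assumptions, not the OD-QM circuit design.

```python
from collections import deque

class OnDemandQueueManager:
    """Maps active flows onto a small pool of physical queues on demand and
    recycles a queue as soon as its flow drains (sketch of the OD-QM idea)."""
    def __init__(self, num_physical_queues):
        self.free = deque(range(num_physical_queues))
        self.flow_to_q = {}                       # active-flow table
        self.queues = [deque() for _ in range(num_physical_queues)]

    def enqueue(self, flow_id, packet):
        q = self.flow_to_q.get(flow_id)
        if q is None:
            if not self.free:
                raise OverflowError("no physical queue available")
            q = self.free.popleft()               # allocate on demand
            self.flow_to_q[flow_id] = q
        self.queues[q].append(packet)

    def dequeue(self, flow_id):
        q = self.flow_to_q[flow_id]
        packet = self.queues[q].popleft()
        if not self.queues[q]:                    # flow went idle: recycle
            del self.flow_to_q[flow_id]
            self.free.append(q)
        return packet
```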
Abstract:
A full hardware implementation of a Weighted Fair Queuing (WFQ) packet scheduler is proposed. The circuit architecture presented has been implemented using Altera Stratix II FPGA technology, utilizing RLDII and QDRII memory components. The circuit can provide fine-granularity Quality of Service (QoS) support at a line throughput rate of 12.8 Gb/s in its current implementation. The authors suggest that, owing to the flexible and scalable modular circuit design approach used, the architecture could be targeted at a full ASIC implementation to deliver 50 Gb/s throughput. The circuit comprises three main components: a WFQ algorithm computation circuit, a tag/time-stamp sort and retrieval circuit, and a high-throughput shared buffer. The circuit targets emerging wireline and wireless network nodes that focus on Service Level Agreements (SLAs) and Quality of Experience.
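For concreteness, a software sketch of the scheduler's core: per-packet virtual finish tags F = max(V, F_prev) + L/w and min-tag retrieval, with a heap standing in for the hardware sort circuit. The virtual-time update is deliberately simplified relative to a production WFQ design.

```python
import heapq

class WFQScheduler:
    """Software sketch of the three hardware blocks: tag computation
    (virtual finish times), tag sorting (here a heap), and a shared buffer."""
    def __init__(self, weights):
        self.weights = weights        # flow_id -> weight
        self.finish = {f: 0.0 for f in weights}
        self.vtime = 0.0              # system virtual time (simplified)
        self.heap = []                # (finish_tag, seq, flow_id, packet)
        self.seq = 0

    def enqueue(self, flow_id, packet, length):
        start = max(self.vtime, self.finish[flow_id])
        tag = start + length / self.weights[flow_id]   # virtual finish time
        self.finish[flow_id] = tag
        heapq.heappush(self.heap, (tag, self.seq, flow_id, packet))
        self.seq += 1

    def dequeue(self):
        tag, _, flow_id, packet = heapq.heappop(self.heap)
        self.vtime = tag              # crude virtual-time update
        return flow_id, packet
```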
Abstract:
Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most current proposals along this line do not take into account the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, as the size of a minimal inconsistent subset increases, the degree of its inconsistency decreases. Another gap in current studies concerns the role of free formulas of a belief base in measuring the degree of inconsistency, which has not yet been well characterized. Adding free formulas to a belief base enlarges the set of consistent subsets of that base. Consistent subsets of a belief base also have an impact on syntax-sensitive normalized measures of the degree of inconsistency: each consistent subset can be considered a distinctive plausible perspective reflected by that belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the normalized framework.
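As a toy illustration of a size-sensitive measure (not the paper's exact framework, which also folds in consistent subsets), the sketch below enumerates minimal inconsistent subsets and lets each contribute 1/|M|, so larger conflicts count for less, as the Lottery Paradox suggests. `is_consistent` is a stand-in for a consistency oracle such as a SAT solver.

```python
from itertools import combinations

def minimal_inconsistent_subsets(base, is_consistent):
    """Enumerate minimal inconsistent subsets by increasing size (exponential;
    fine only for small belief bases). `is_consistent` is an oracle stub."""
    mis = []
    for k in range(1, len(base) + 1):
        for subset in combinations(base, k):
            s = frozenset(subset)
            if not is_consistent(s) and not any(m <= s for m in mis):
                mis.append(s)
    return mis

def size_sensitive_inconsistency(base, is_consistent):
    # Each minimal inconsistent subset contributes 1/|M|: larger conflicts,
    # as in the Lottery Paradox, count for less. (Illustrative measure only.)
    mis = minimal_inconsistent_subsets(base, is_consistent)
    return sum(1.0 / len(m) for m in mis)
```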
Abstract:
Recent years have witnessed rapidly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, the data flow targeted by incremental learning becomes available continuously over time, so it is desirable to abandon the traditional assumption that representative training data are available in advance for developing decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of raw stream data into information and knowledge representations, and to accumulate experience over time to support the future decision-making process. In this paper, we propose a general adaptive incremental learning framework, named ADAIN, that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. The detailed system-level architecture and design strategies are presented. Simulation results on several real-world data sets validate the effectiveness of the method.
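The details of ADAIN are in the paper; purely as an illustration of the chunk-wise learn/accumulate/re-weight pattern such frameworks share, here is a generic incremental-ensemble sketch (binary 0/1 labels assumed; `base_model_factory` is any scikit-learn-style estimator constructor).

```python
import numpy as np

class IncrementalEnsemble:
    """Generic chunk-wise incremental learner (illustrative only -- not the
    ADAIN design): each data chunk trains a new base model, and prediction
    aggregates all models weighted by their accuracy on the newest chunk."""
    def __init__(self, base_model_factory):
        self.factory = base_model_factory
        self.models, self.weights = [], []

    def learn_chunk(self, X, y):
        # Re-weight accumulated experience against the newest data ...
        for i, m in enumerate(self.models):
            acc = float((m.predict(X) == y).mean())
            self.weights[i] = max(acc, 1e-6)
        # ... then fold the new chunk in as another hypothesis.
        self.models.append(self.factory().fit(X, y))
        self.weights.append(1.0)

    def predict(self, X):
        # Weighted vote over all accumulated hypotheses (binary labels).
        votes = np.array([w * (m.predict(X) == 1)
                          for m, w in zip(self.models, self.weights)])
        return (votes.sum(axis=0) >= 0.5 * sum(self.weights)).astype(int)
```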
Abstract:
A service is a remote computational facility made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared-state services, and services whose state is customised for individual users. A service-based orchestration is a multi-threaded computation that invokes remote services in order to deliver results back to a user (publication). In this paper, a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable, the termination of even finite orchestrations cannot be guaranteed. For this reason, a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
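A toy orchestration over a stateless service, sketched below: calls fan out across threads and each result is published as it arrives. The `geocode` service and `publish` callback are hypothetical; with real remote services a call may hang or fail, which is why only partial correctness can be claimed.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# A stateless service: the result depends only on the argument, so the
# orchestration's meaning is a (partial) function of its inputs.
def geocode(city):          # stand-in for a remote stateless service
    return {"Belfast": (54.6, -5.9), "Dublin": (53.3, -6.3)}[city]

def orchestration(cities, publish):
    """Fan service calls out across threads; publish each result to the user
    as it arrives (no ordering or termination guarantee for real services)."""
    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(geocode, c): c for c in cities}
        for fut in as_completed(futures):
            publish(futures[fut], fut.result())

orchestration(["Belfast", "Dublin"], lambda city, coords: print(city, coords))
```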
Abstract:
Quantum-dot Cellular Automata (QCA) technology is a promising potential alternative to CMOS technology. To explore the characteristics of QCA and suitable design methodologies, digital circuit design approaches have been investigated. Owing to the inherent wire delay in QCA, pipelined architectures appear to be a particularly suitable design technique. At the same time, the pipelined nature of QCA technology makes it poorly suited to designs requiring complicated control. Systolic arrays take advantage of pipelining, parallelism and simple local control. Therefore, an investigation of these architectures in QCA technology is provided in this paper. Two case studies (a matrix multiplier and a Galois Field multiplier) are designed and analyzed based on both multilayer and coplanar crossings. The performance of these two types of interconnection is compared, and it is found that even though coplanar crossings are currently more practical, they tend to occupy a larger design area and incur slightly more delay. A general semiconductor QCA systolic array design methodology is also proposed. It is found that applying a systolic array structure in QCA design yields significant benefits, particularly for large systolic arrays, even more so than in CMOS-based technology.
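As a cycle-level illustration of the matrix-multiplier case study's dataflow (the paper's designs are hardware, and this sketch makes no claim about their exact architecture), the following simulates an output-stationary systolic array in which skewed operands move only between neighbours, the simple local control that suits QCA pipelines.

```python
import numpy as np

def systolic_matmul(A, B):
    """Cycle-level simulation of an n x n output-stationary systolic array:
    A streams in from the left, B from the top, each skewed by one cycle per
    row/column; every PE performs one multiply-accumulate per cycle using
    only neighbour data."""
    n = A.shape[0]
    C = np.zeros((n, n))
    a_reg = np.zeros((n, n))      # operands moving right
    b_reg = np.zeros((n, n))      # operands moving down
    for cycle in range(3 * n - 2):
        a_reg = np.roll(a_reg, 1, axis=1)   # shift one PE to the right
        b_reg = np.roll(b_reg, 1, axis=0)   # shift one PE down
        for i in range(n):                  # inject skewed row of A at the left edge
            k = cycle - i
            a_reg[i, 0] = A[i, k] if 0 <= k < n else 0.0
        for j in range(n):                  # inject skewed column of B at the top edge
            k = cycle - j
            b_reg[0, j] = B[k, j] if 0 <= k < n else 0.0
        C += a_reg * b_reg                  # every PE's multiply-accumulate
    return C

A = np.arange(9.0).reshape(3, 3); B = np.eye(3)
assert np.allclose(systolic_matmul(A, B), A @ B)
```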
Abstract:
For many years, psychological research on facial expression of emotion has relied heavily on a recognition paradigm based on posed static photographs. There is growing evidence that there may be fundamental differences between the expressions depicted in such stimuli and the emotional expressions present in everyday life. Affective computing, with its pragmatic emphasis on realism, needs examples of natural emotion. This paper describes a unique database containing recordings of mild to moderate emotionally coloured responses to a series of laboratory-based emotion induction tasks. The recordings are accompanied by information on self-reported emotion and intensity, continuous trace-style ratings of valence and intensity, the sex of the participant, the sex of the experimenter, and the active or passive nature of the induction task. The database also gives researchers the opportunity to compare expressions from people of more than one culture.