28 results for Compactification and String Models


Relevance: 100.00%

Abstract:

An enhanced physical model of the bowed string presented previously [1] is explored. It takes into account: the width of the bow, the angular motion of the string, bow-hair elasticity and string bending stiffness. The results of an analytical investigation of a model system - an infinite string sticking to a bow of finite width and driven on one side of the bow - are compared with experimental results published by Cremer [2] and reinterpreted here. Comparison shows that both the width of the bow and the bow-hair elasticity have a large impact on the reflection and transmission behaviour. In general, bending stiffness plays a minor role. Furthermore, a method of numerical simulation of the stiff string bowed with a bow of finite width is presented along with some preliminary results.
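The numerical simulation mentioned at the end is, at its core, a finite-difference integration of the string's equation of motion. The sketch below is a much-simplified, illustrative version only: an ideal string with none of the paper's enhancements (no bow width, bow-hair elasticity, or bending stiffness), and with all parameters invented.

```python
import numpy as np

# Minimal finite-difference simulation of an ideal string: the core on
# which bowed-string models with bow width, bow-hair elasticity, and
# bending stiffness build.  All parameters are invented for illustration
# and none of the paper's enhancements are included.
N = 100                 # spatial grid intervals
c = 1.0                 # wave speed (normalised)
dx = 1.0 / N
dt = 0.5 * dx / c       # CFL-stable time step
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, 1.0, N + 1)
# triangular initial displacement (a pluck as a stand-in for bow excitation)
y = np.where(x < 0.3, x / 0.3, (1.0 - x) / 0.7) * 0.01
y_prev = y.copy()       # zero initial velocity

for _ in range(200):    # leapfrog update of the 1D wave equation
    y_next = np.zeros_like(y)
    y_next[1:-1] = (2.0 * y[1:-1] - y_prev[1:-1]
                    + r2 * (y[2:] - 2.0 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next

print(float(np.abs(y).max()))
```

A bow of finite width would replace the pluck with a stick-slip friction force distributed over several grid points, and bending stiffness would add a fourth-order spatial difference term.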

Relevance: 100.00%

Abstract:

This review will focus on the possibility that the cerebellum contains an internal model or models of the motor apparatus. Inverse internal models can provide the neural command necessary to achieve some desired trajectory. First, we review the necessity of such a model and the evidence, based on the ocular following response, that inverse models are found within the cerebellar circuitry. Forward internal models predict the consequences of actions and can be used to overcome time delays associated with feedback control. Second, we review the evidence that the cerebellum generates predictions using such a forward model. Finally, we review a computational model that includes multiple paired forward and inverse models and show how such an arrangement can be advantageous for motor learning and control.
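The claim that forward models overcome feedback delays can be illustrated with a toy control loop. The sketch below (a one-dimensional "reach" with invented gain, delay, and horizon; not a model from the review) compares a controller driven by delayed sensory feedback with one that corrects a forward-model prediction of the current state.

```python
# Toy illustration of why a forward model helps with feedback delay
# (not a model from the review; gain, delay, and horizon are invented).
# A one-dimensional "reach": state x is driven toward a target by a
# proportional controller.
target, k, d, T = 1.0, 0.5, 4, 60

# Controller 1: acts on sensory feedback that arrives d steps late.
x_delay, xs, sse_delay = 0.0, [0.0], 0.0
for t in range(T):
    observed = xs[max(t - d, 0)]      # delayed observation of the state
    x_delay += k * (target - observed)
    xs.append(x_delay)
    sse_delay += (target - x_delay) ** 2

# Controller 2: a forward model predicts the *current* state exactly.
x_fm, sse_fm = 0.0, 0.0
for t in range(T):
    x_fm += k * (target - x_fm)       # prediction replaces delayed feedback
    sse_fm += (target - x_fm) ** 2

print(sse_delay, sse_fm)
```

The delayed controller overshoots and oscillates, accumulating far more tracking error than the forward-model controller, which converges smoothly.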

Relevance: 100.00%

Abstract:

The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1-minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even on a computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and provide an alternative that, over many varying conditions, provides improved generalisation performance.
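The "unnecessary shrinkage of non-zero values" under L1 can be seen directly in the thresholding operators: soft thresholding (the proximal operator of the L1 penalty) shifts every surviving coefficient toward zero, while an L0-like hard-thresholding rule, used here as a crude stand-in for spike-and-slab selection, leaves survivors untouched. The coefficients and threshold below are invented for illustration.

```python
import numpy as np

# Why L1 shrinks the non-zero values it keeps: compare soft thresholding
# (the proximal operator of the L1 penalty) with hard thresholding, a
# crude L0-like stand-in for spike-and-slab selection.  The coefficients
# and threshold are invented for illustration.
rng = np.random.default_rng(0)
true_w = np.array([0.0, 0.0, 2.0, -3.0, 0.0])
noisy = true_w + 0.1 * rng.standard_normal(5)
lam = 0.5

soft = np.sign(noisy) * np.maximum(np.abs(noisy) - lam, 0.0)  # L1 / lasso
hard = np.where(np.abs(noisy) > lam, noisy, 0.0)              # L0-like

print(soft, hard)
```

Both rules prune the true zeros, but the L1 estimate of the large coefficients is biased toward zero by the full threshold, which is one source of the predictive underperformance the paper reports.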

Relevance: 100.00%

Abstract:

The performance of a semiconducting carbon nanotube (CNT) transistor is assessed, and its parameters are tabulated against those of a metal-oxide-semiconductor field-effect transistor (MOSFET). Both the CNT and MOSFET models considered agree well with trends in the available experimental data. The results show that nanotubes, used as a silicon channel replacement, can significantly reduce the drain-induced barrier lowering effect and the subthreshold swing while sustaining a smaller channel area at higher current density. Performance metrics of both devices, such as current drive strength, current on-off ratio (Ion/Ioff), energy-delay product, and power-delay product for logic gates, namely NAND and NOR, are presented. The design rules used for carbon nanotube field-effect transistors (CNTFETs) are compatible with 45-nm MOSFET technology. The parasitics associated with interconnects are also incorporated in the model; interconnects affect the propagation delay in a CNTFET, and shorter interconnects result in a higher cutoff frequency. © 2012 Tan et al.
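The tabulated figures of merit are simple ratios and products of device quantities. A sketch of how they are computed, using made-up placeholder values rather than the paper's device data:

```python
# Figures of merit of the kind tabulated in the paper, computed from
# made-up placeholder values (not the paper's device data).
I_on = 20e-6       # on-state current, A
I_off = 10e-9      # off-state leakage current, A
V_dd = 0.9         # supply voltage, V
t_delay = 5e-12    # gate delay, s

on_off_ratio = I_on / I_off                 # Ion/Ioff
power_delay = I_on * V_dd * t_delay         # energy per switching event, J
energy_delay = power_delay * t_delay        # energy-delay product, J*s

print(on_off_ratio, power_delay, energy_delay)
```

A lower energy-delay product indicates a device that is simultaneously fast and energy-efficient, which is why it is a standard axis of comparison between CNTFET and MOSFET logic gates.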

Relevance: 100.00%

Abstract:

The aim of this research is to provide a unified modelling-based method to help evaluate organization design and change decisions. Relevant literature on model-driven organization design and change is reviewed, which helps identify the requirements for a new modelling methodology. Such a methodology is then developed and described. The method has three phases. First, CIMOSA-based multi-perspective enterprise modelling is used to understand and capture the most enduring characteristics of process-oriented organizations and to externalize various types of requirement knowledge about any target organization. Second, causal loop diagrams identify dynamic causal impacts and effects related to the issues and constraints on the organization under study. Third, simulation modelling quantifies the effects of each issue in terms of organizational performance. The design and case study application of the unified method, which combines CIMOSA (computer integrated manufacturing open systems architecture) enterprise modelling, causal loop diagrams, and simulation modelling, is explored to illustrate its potential to support systematic organization design and change. Further application of the proposed methodology in various company and industry sectors, especially manufacturing, would help to illustrate complementary uses and the relative benefits and drawbacks of the methodology in different types of organization. The proposed method provides a systematic way of enabling key aspects of organization design and change. The case company, its relevant data, and the developed models help to explore and validate the proposed method. The CIMOSA-based unified modelling method and the integrated application of these three modelling techniques within a single solution space constitute an advance on previous best practice. The purpose and application domain of the proposed method also offer an addition to knowledge. © IMechE 2009.
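The third phase, quantifying an issue by simulation, can be as small as a discrete-time model of a single causal loop. A toy sketch of such a simulation (a backlog-overtime-rework loop with invented structure and rates, not the case-study model):

```python
# Sketch of the method's third phase: a small discrete-time simulation
# quantifying one causal loop (backlog -> overtime -> errors -> rework).
# The loop structure and all rates are invented, not the case study's.
backlog, throughput = 100.0, 20.0    # work items; base items cleared/week
history = []
for week in range(10):
    arrivals = 25.0                  # new work per week
    overtime = 0.1 * backlog         # pressure raises effort...
    errors = 0.02 * overtime         # ...but fatigue causes rework
    done = throughput + overtime - errors
    backlog = max(backlog + arrivals - done, 0.0)
    history.append(backlog)
print(history[-1])
```

Running the loop turns a qualitative causal diagram into a quantitative trajectory of a performance measure (here, backlog) that can be compared across design alternatives.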

Relevance: 100.00%

Abstract:

Atlases and statistical models play important roles in the personalization and simulation of cardiac physiology. For the study of the heart, however, the construction of comprehensive atlases and spatio-temporal models is faced with a number of challenges, in particular the need to handle large and highly variable image datasets, the multi-region nature of the heart, and the presence of complex as well as small cardiovascular structures. In this paper, we present a detailed atlas and spatio-temporal statistical model of the human heart based on a large population of 3D+time multi-slice computed tomography sequences, and the framework for its construction. It uses spatial normalization based on nonrigid image registration to synthesize a population mean image and establish the spatial relationships between the mean and the subjects in the population. Temporal image registration is then applied to resolve each subject-specific cardiac motion and the resulting transformations are used to warp a surface mesh representation of the atlas to fit the images of the remaining cardiac phases in each subject. Subsequently, we demonstrate the construction of a spatio-temporal statistical model of shape such that the inter-subject and dynamic sources of variation are suitably separated. The framework is applied to a 3D+time data set of 138 subjects. The data is drawn from a variety of pathologies, which benefits its generalization to new subjects and physiological studies. The obtained level of detail and the extendability of the atlas present an advantage over most cardiac models published previously. © 1982-2012 IEEE.
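The separation of inter-subject and dynamic sources of variation can be sketched in a few lines: average each subject over cardiac phases, treat deviations from that average as motion, and build separate statistical (PCA) models of each. The sketch below uses random stand-in shapes rather than registered CT meshes, and the dimensions are invented.

```python
import numpy as np

# Sketch of separating inter-subject and dynamic (cardiac-phase) shape
# variation, in the spirit of the paper's spatio-temporal model.  The
# shapes are random stand-ins, not registered CT meshes.
rng = np.random.default_rng(1)
n_sub, n_phase, n_pts = 10, 5, 30
shapes = rng.standard_normal((n_sub, n_phase, n_pts))

subject_mean = shapes.mean(axis=1, keepdims=True)   # per-subject mean shape
dynamic = shapes - subject_mean                     # motion about each mean

sub_shapes = subject_mean[:, 0, :]                  # (n_sub, n_pts)
inter_subject = sub_shapes - sub_shapes.mean(axis=0)

# Separate PCA (via SVD) models for each source of variation
_, s_dyn, _ = np.linalg.svd(dynamic.reshape(-1, n_pts), full_matrices=False)
_, s_sub, _ = np.linalg.svd(inter_subject, full_matrices=False)
print(s_dyn[0], s_sub[0])
```

Keeping the two decompositions separate is what lets a statistical model sample a new subject's anatomy and a plausible cardiac motion independently.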

Relevance: 100.00%

Abstract:

This paper discusses road damage caused by heavy commercial vehicles. Chapter 1 presents some important terminology and a brief historical review of road construction and vehicle-road interaction, from ancient times to the present day. The main types of vehicle-generated road damage, and the methods that are used by pavement engineers to analyze them, are discussed in Chapter 2. Attention is also given to the main features of the response of road surfaces to vehicle loads and mathematical models that have been developed to predict road response. Chapter 3 reviews the effects on road damage of vehicle features which can be studied without consideration of vehicle dynamics. These include gross vehicle weight, axle and tire configurations, tire contact conditions and static load sharing in axle group suspensions. The dynamic tire forces generated by heavy vehicles are examined in Chapter 4. The discussion includes their simulation and measurement, their principal characteristics, the effects of tire and suspension design on dynamic forces, and the potential benefits of using advanced suspensions for minimizing dynamic tire forces. Chapter 5 discusses methods for estimating the effects of dynamic tire forces on road damage. The two main approaches are either to examine the statistics of the forces themselves, or to calculate the response of a pavement model to the forces and then the resulting wear using a material damage model. The issues involved in assessing vehicles for 'road friendliness' are discussed in Chapter 6. Possible assessment methods include measuring strains in an instrumented pavement traversed by the vehicle, measuring dynamic tire forces, or measuring vehicle parameters such as the 'natural frequency' and 'damping ratio'. Each of these measurements involves different assumptions and analysis methods for converting the results into some measure of road damage.
Chapter 7 includes a summary of the main conclusions of the paper and recommendations for tire and suspension design, road design and construction, and for vehicle regulations.
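The first of the two Chapter 5 approaches, working from the statistics of the forces themselves, is often combined with the widely used fourth-power law relating road wear to axle force. A hedged sketch with a synthetic Gaussian force history and invented load figures:

```python
import numpy as np

# First of the two Chapter 5 approaches: work from the statistics of the
# dynamic tire forces, here combined with the widely used fourth-power
# law (road wear roughly proportional to force^4).  The load figures and
# dynamic load coefficient are invented for illustration.
rng = np.random.default_rng(2)
static_load = 50e3                   # static wheel force, N
dlc = 0.2                            # dynamic load coefficient (std/mean)
forces = static_load * (1.0 + dlc * rng.standard_normal(10_000))

damage_dynamic = float(np.mean(forces ** 4))
damage_static = static_load ** 4
aggravation = damage_dynamic / damage_static   # >1: dynamics add wear
print(aggravation)
```

For Gaussian forces the expected ratio is 1 + 6·dlc² + 3·dlc⁴ ≈ 1.24 at dlc = 0.2, so even modest dynamic variation measurably raises predicted wear, which is why suspension designs that lower the dynamic load coefficient are considered more road friendly.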

Relevance: 100.00%

Abstract:

Large margin criteria and discriminative models are two effective improvements for HMM-based speech recognition. This paper proposes a large-margin-trained log-linear model with kernels for continuous speech recognition (CSR). To avoid explicit computation in the high-dimensional feature space and to achieve nonlinear decision boundaries, a kernel-based training and decoding framework is proposed. To make the system robust to noise, a kernel adaptation scheme is also presented. Previous work in this area is extended in two directions. First, most kernels for CSR focus on measuring the similarity between two observation sequences; the proposed joint kernels define a similarity between two observation-label sequence pairs at the sentence level. Second, this paper addresses how to efficiently employ kernels in large margin training and decoding with lattices. To the best of our knowledge, this is the first attempt at using large margin kernel-based log-linear models for CSR. The model is evaluated on a noise-corrupted continuous digit task: AURORA 2.0. © 2013 IEEE.
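A joint observation-label kernel of the general kind described can be sketched as an observation kernel gated by label agreement, with a kernelised score used for decoding. The toy below is illustrative only (invented support pairs, weights, and RBF kernel); it is not the paper's lattice-based, sentence-level model.

```python
import numpy as np

# Toy joint observation-label kernel: similarity of two (observation,
# label) pairs factorises into an RBF kernel on the observations times
# label agreement, and a kernelised score ranks candidate labels.  This
# is illustrative only, not the paper's lattice-based sentence-level model.
def joint_kernel(x1, y1, x2, y2, gamma=1.0):
    obs_sim = np.exp(-gamma * np.sum((x1 - x2) ** 2))
    return obs_sim * (1.0 if y1 == y2 else 0.0)

# support pairs and weights, as in a kernelised log-linear model
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = [0, 1]
alpha = np.array([1.0, 1.0])

def score(x, y):
    return sum(a * joint_kernel(xi, yi, x, y) for a, xi, yi in zip(alpha, X, Y))

# decoding: choose the label with the highest kernelised score
x_test = np.array([0.1, 0.1])
pred = max([0, 1], key=lambda y: score(x_test, y))
print(pred)
```

The scoring function never maps observations into an explicit feature space; it only evaluates kernels against the support set, which is the property that makes kernel training and decoding tractable.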

Relevance: 100.00%

Abstract:

Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials was required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
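The comparison machinery itself, scoring competing models on the same data with a log Bayes factor, reduces in the simplest case (no free parameters) to a summed log-likelihood ratio. A toy sketch with two fixed Gaussian models and synthetic data, all settings invented:

```python
import numpy as np

# Toy version of model comparison with log Bayes factors: data from a
# "true" Gaussian model A (mean 0) scored under A and an alternative B
# (mean 1).  With no free parameters the log Bayes factor reduces to a
# summed log-likelihood ratio.  All settings are invented.
rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, scale=1.0, size=500)   # data truly from model A

def log_lik(data, mu):
    return float(np.sum(-0.5 * np.log(2.0 * np.pi) - 0.5 * (data - mu) ** 2))

log_bf = log_lik(x, 0.0) - log_lik(x, 1.0)     # >0 favours the true model A
print(log_bf)
```

Because every raw trial contributes its own log-likelihood term, evidence accumulates per trial; collapsing the data to a few summary statistics discards most of these terms, which is the paper's central point.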

Relevance: 100.00%

Abstract:

This paper compares a number of different moment-curvature models for cracked concrete sections that contain both steel and external fiber-reinforced polymer (FRP) reinforcement. The question of whether to use a whole-section analysis or one that considers the FRP separately is discussed. Five existing and three new models are compared with test data for moment-curvature or load-deflection behavior, and five models are compared with test results for plate-end debonding using a global energy balance approach (GEBA). A proposal is made for the use of one of the simplified models. The availability of a simplified model opens the way to the production of design aids so that the GEBA can be made available to practicing engineers through design guides and parametric studies. Copyright © 2014, American Concrete Institute.
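A simplified moment-curvature model of the general kind compared here can be as small as a bilinear relation: gross-section stiffness up to the cracking moment, cracked-section stiffness beyond it. The section properties below are invented for illustration and are not from the paper's test data.

```python
# A bilinear moment-curvature relation of the kind the simplified models
# approximate: gross-section stiffness E*I_g up to the cracking moment,
# cracked-section stiffness E*I_cr beyond it.  Section properties are
# invented for illustration.
E = 30e9        # concrete elastic modulus, Pa
I_g = 675e-6    # gross second moment of area, m^4
I_cr = 200e-6   # cracked (transformed) second moment of area, m^4
M_cr = 40e3     # cracking moment, N*m

def curvature(M):
    """Curvature (1/m) for applied moment M (N*m)."""
    if M <= M_cr:
        return M / (E * I_g)
    # beyond cracking, the extra moment is carried at the cracked stiffness
    return M_cr / (E * I_g) + (M - M_cr) / (E * I_cr)

print(curvature(30e3), curvature(80e3))
```

Integrating such a curvature distribution along the member length is what turns a moment-curvature model into the load-deflection predictions against which the paper's models are compared.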

Relevance: 100.00%

Abstract:

We demonstrate a new method for extracting high-level scene information from the type of data available from simultaneous localisation and mapping systems. We model the scene with a collection of primitives (such as bounded planes), and make explicit use of both visible and occluded points in order to refine the model. Since our formulation allows for different kinds of primitives and an arbitrary number of each, we use Bayesian model evidence to compare very different models on an even footing. Additionally, by making use of Bayesian techniques we can also avoid explicitly finding the optimal assignment of map landmarks to primitives. The results show that explicit reasoning about occlusion improves model accuracy and yields models which are suitable for aiding data association. © 2011. The copyright of this document resides with its authors.
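Comparing "very different models on an even footing" with Bayesian model evidence can be sketched with the BIC approximation, which trades goodness of fit against parameter count. The toy below scores a one-line model against a two-line model of synthetic 2D points; it is not the paper's primitive-fitting pipeline, and the data and split are invented.

```python
import numpy as np

# Sketch of comparing scene models of different complexity on an even
# footing, using the BIC approximation to Bayesian model evidence rather
# than the paper's full computation.  Synthetic 2D points come from two
# line segments; we score a one-line and a two-line model.
rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 100)
y = np.where(x < 0.5, x, 1.0 - x) + 0.02 * rng.standard_normal(100)

def fit_rss(xs, ys):
    """Least-squares line fit; returns residual sum of squares."""
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return float(np.sum((ys - A @ coef) ** 2))

n = len(x)
rss_one = fit_rss(x, y)
rss_two = fit_rss(x[x < 0.5], y[x < 0.5]) + fit_rss(x[x >= 0.5], y[x >= 0.5])

def bic(rss, k):
    return n * np.log(rss / n) + k * np.log(n)  # lower is better

bic_one, bic_two = bic(rss_one, 3), bic(rss_two, 6)
print(bic_one, bic_two)
```

The two-line model pays a complexity penalty for its extra parameters but fits so much better that its score wins; with evidence-based scoring the same comparison works even when the candidate primitives are of entirely different kinds.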