961 results for Isotropic and Anisotropic models


Abstract:

Within the framework of classical electromagnetic theory, we have studied the sign of the refractive index of optical media, with emphasis on the roles of the electric and magnetic losses and gains. Starting from the Maxwell equations for an isotropic and homogeneous medium, we have derived the general form of the complex refractive index and its relation to the complex electric permittivity and magnetic permeability, i.e. n = √(εμ), in which the intrinsic electric and magnetic losses and gains enter as the imaginary parts of the complex permittivity and permeability, ε = ε_r + iε_i and μ = μ_r + iμ_i. The electric and magnetic losses are present in all passive materials and correspond, respectively, to positive imaginary permittivity and permeability, ε_i > 0 and μ_i > 0. The electric and magnetic gains are present in materials where external pumping sources enable light to be amplified instead of attenuated, and correspond, respectively, to negative imaginary permittivity and permeability, ε_i < 0 and μ_i < 0. We have analyzed and uniquely determined the sign of the refractive index for all possible combinations of the four parameters ε_r, μ_r, ε_i, and μ_i, in light of relativistic causality. A causal solution requires that the wave impedance have a positive real part, Re{Z} > 0. We illustrate the results for all cases in tables of the sign of the refractive index. One of the most important messages from these sign tables is that, apart from the well-known case where simultaneously ε < 0 and μ < 0, there are other possibilities for the refractive index to be negative (n < 0): for example, for ε_r < 0, μ_r > 0, ε_i > 0, and μ_i > 0, the refractive index is negative provided μ_i/ε_i > μ_r/|ε_r|. (c) 2006 Elsevier B.V. All rights reserved.
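As a rough numerical illustration of the branch-selection rule described above (not taken from the paper), the sketch below picks the causal branch of the impedance, Re{Z} > 0, and derives the consistent refractive index from it. The permittivity and permeability values are made-up examples satisfying the quoted inequality, and the e^{i(kz−ωt)} sign convention is assumed.

```python
import numpy as np

def causal_refractive_index(eps: complex, mu: complex) -> complex:
    """Return n = sqrt(eps * mu) on the causal branch, selected by
    requiring Re{Z} > 0 for the wave impedance Z = sqrt(mu / eps);
    n then follows as eps * Z, which keeps the two roots consistent."""
    z = np.sqrt(mu / eps)      # principal branch of the impedance
    if z.real < 0:             # flip to the causal branch, Re{Z} > 0
        z = -z
    return eps * z

# Example matching the last case quoted above: eps_r < 0, mu_r > 0,
# eps_i > 0, mu_i > 0, with mu_i/eps_i = 5 > mu_r/|eps_r| = 0.5.
n = causal_refractive_index(-2.0 + 0.1j, 1.0 + 0.5j)
print(n)                       # Re{n} < 0: a negative refractive index
```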

Abstract:

During the last two decades, analysis of 1/f noise in cognitive science has led to considerable progress in the way we understand the organization of our mental life. However, there is still a lack of specific models explaining how 1/f noise is generated in coupled brain-body-environment systems, since existing models and experiments typically target either externally observable behaviour or isolated neuronal systems, but do not address the interplay between neuronal mechanisms and sensorimotor dynamics. We present a conceptual model of a minimal neurorobotic agent solving a behavioural task that makes it possible to relate mechanistic (neurodynamic) and behavioural levels of description. The model consists of a simulated robot controlled by a network of Kuramoto oscillators with homeostatic plasticity and the ability to develop behavioural preferences mediated by sensorimotor patterns. With only three oscillators, this simple model displays self-organized criticality in the form of robust 1/f noise and a wide multifractal spectrum. We show that the emergence of self-organized criticality and 1/f noise in our model is the result of three simultaneous conditions: (a) non-linear interaction dynamics capable of generating stable collective patterns, (b) internal plastic mechanisms modulating the sensorimotor flows, and (c) strong sensorimotor coupling with the environment that induces transient metastable neurodynamic regimes. We carry out a number of experiments to show that both synaptic plasticity and strong sensorimotor coupling play a necessary role, as constituents of self-organized criticality, in the generation of 1/f noise. The experiments also prove useful for testing the robustness of the 1/f scaling by comparing the results of different estimation techniques. We finally discuss the role of conceptual models as mediators between nomothetic and mechanistic models, and how they can inform future experimental research in which self-organized criticality includes sensorimotor coupling among the essential interaction-dominant processes giving rise to 1/f noise.
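As a very rough sketch of the kind of oscillator network mentioned above (three Kuramoto phase oscillators with a homeostatic coupling rule), the following code is purely illustrative: the natural frequencies, the homeostatic target, and the learning rate are assumed values, and the sensorimotor coupling with a simulated robot, which the paper identifies as essential, is not modelled here.

```python
import numpy as np

# Three Kuramoto phase oscillators with an ad-hoc homeostatic rule that
# nudges each coupling weight toward an intermediate level of pairwise
# synchrony. All parameter values are assumed for illustration.
rng = np.random.default_rng(0)
n, dt, steps = 3, 0.01, 20_000
omega = rng.uniform(0.5, 1.5, n)                 # natural frequencies (assumed)
K = np.full((n, n), 0.5)                         # coupling matrix
np.fill_diagonal(K, 0.0)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
target, eta = 0.7, 0.01                          # homeostatic target / rate (assumed)

order = np.empty(steps)
for t in range(steps):
    diff = theta[None, :] - theta[:, None]       # diff[i, j] = theta_j - theta_i
    theta = theta + dt * (omega + (K * np.sin(diff)).sum(axis=1))
    K += dt * eta * (target - np.cos(diff))      # homeostatic plasticity
    np.fill_diagonal(K, 0.0)
    order[t] = np.abs(np.exp(1j * theta).mean()) # Kuramoto order parameter

# A 1/f-like signature would appear as an approximately straight line in the
# log-log power spectrum of the order-parameter fluctuations.
```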

Abstract:

In this work we show the results obtained by applying a Unified Dark Matter (UDM) model with a fast transition to a set of cosmological data. Two different functions to model the transition are tested, and the feasibility of both models is explored using CMB shift data from Planck [1], galaxy clustering data from [2] and [3], and Union2.1 SNe Ia [4]. These new models are also compared statistically with the ΛCDM and quiessence models using the Bayes factor computed from the Bayesian evidence. Bayesian inference does not discard the UDM models in favor of ΛCDM.
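For readers unfamiliar with the comparison criterion, the sketch below shows how a Bayes factor is formed from two models' log-evidences; the numbers are invented placeholders that merely mimic an inconclusive comparison such as the one reported here.

```python
import math

# Comparing two models via the Bayes factor from their log-evidences,
# ln B = ln Z_1 - ln Z_2. The values below are invented placeholders.
ln_Z_udm, ln_Z_lcdm = -512.3, -511.8

ln_B = ln_Z_udm - ln_Z_lcdm
B = math.exp(ln_B)
print(f"ln B = {ln_B:.2f}, B = {B:.2f}")   # |ln B| < 1: essentially inconclusive
```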

Abstract:

This review will focus on the possibility that the cerebellum contains an internal model or models of the motor apparatus. Inverse internal models can provide the neural command necessary to achieve some desired trajectory. First, we review the necessity of such a model and the evidence, based on the ocular following response, that inverse models are found within the cerebellar circuitry. Forward internal models predict the consequences of actions and can be used to overcome time delays associated with feedback control. Second, we review the evidence that the cerebellum generates predictions using such a forward model. Finally, we review a computational model that includes multiple paired forward and inverse models and show how such an arrangement can be advantageous for motor learning and control.
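To make the forward-model idea concrete, here is a minimal sketch, unrelated to the specific models reviewed: an internal copy of a simple plant is driven by an efference copy of the motor commands, and the controller acts on that prediction rather than on the delayed sensory signal. The plant, the delay, and the gains are all assumed for illustration.

```python
dt, steps = 0.01, 600
x, v = 0.0, 0.0                     # true limb state (unit mass, force input)
xh, vh = 0.0, 0.0                   # forward model's predicted state
target, kp, kd = 1.0, 40.0, 12.0    # illustrative PD gains (assumed)

for _ in range(steps):
    u = kp * (target - xh) - kd * vh   # command computed from the prediction
    v += u * dt; x += v * dt           # true plant
    vh += u * dt; xh += vh * dt        # internal model driven by efference copy

print(f"final position = {x:.3f} (target 1.0)")
# Acting instead on a sensory signal delayed by ~200 ms would be unstable
# with these gains; the forward model sidesteps the feedback delay.
```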

Abstract:

Red snapper (Lutjanus campechanus) in the United States waters of the Gulf of Mexico (GOM) has been considered a single unit stock since management of the species began in 1991. The validity of this assumption is essential to management decisions because measures of growth can differ for nonmixing populations. We examined growth rates, size at age, and length and weight information of red snapper collected from the recreational harvests of Alabama (n=2010), Louisiana (n=1905), and Texas (n=1277) from 1999 to 2001. Ages were obtained from 5035 otolith sections and ranged from 1 to 45 years. Fork length, total weight, and age-frequency distributions differed significantly among all states, with Texas having a much higher proportion of smaller, younger fish. All red snapper showed rapid growth until about age 10 years, after which growth slowed considerably. Von Bertalanffy growth models of both mean fork length and mean total weight at age predicted significantly smaller fish at age from Texas, whereas no differences were found between the Alabama and Louisiana models. Texas red snapper were also shown to differ significantly from both Alabama and Louisiana red snapper in regressions of mean weight at age. Demographic variation in growth rates may indicate the existence of separate management units of red snapper in the GOM. Our data indicate that the red snapper inhabiting the waters off Texas are reaching smaller maximum sizes at a faster rate and have a consistently smaller total weight at age than those collected from Louisiana and Alabama waters. Whether these differences are environmentally induced or are the result of genetic divergence remains to be determined, but they should be considered in future management regulations.
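For reference, the sketch below evaluates the von Bertalanffy growth model used for the length-at-age comparisons; the parameter values are hypothetical and are not the paper's fitted estimates.

```python
import numpy as np

# The von Bertalanffy growth model used for length-at-age comparisons,
#   L(t) = L_inf * (1 - exp(-k * (t - t0))).
# The parameter values below are hypothetical, not the paper's estimates.
def von_bertalanffy(age, L_inf, k, t0):
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

ages = np.arange(1, 46)                                  # ages 1 to 45 years
fl_texas = von_bertalanffy(ages, 800.0, 0.20, -0.5)      # hypothetical curve (mm FL)
fl_louisiana = von_bertalanffy(ages, 900.0, 0.17, -0.5)  # hypothetical curve (mm FL)
print(fl_texas[9], fl_louisiana[9])                      # predicted fork length at age 10
```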

Abstract:

Capture fishery resources are influenced by two groups of factors: fishery-independent factors, such as current, temperature, and salinity; and fishery-dependent factors, such as the type of fishing (trawling, gill netting, etc.), the mesh sizes used, and the intensity of fishing, i.e. the number of units of each type of fishing. Hence the assessment of capture fishery resources remains a puzzle even today. However, attempts have been made to develop suitable mathematical and statistical models for assessing these resources and for offering suggestions for their judicious management. This paper briefly outlines the important characteristics of capture fisheries and their assessment and management, with particular reference to India.

Abstract:

The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1-minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even on a computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and offer an alternative that, over many varying conditions, provides improved generalisation performance.
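To illustrate the shrinkage behaviour the abstract contrasts (this is a toy, not the paper's factor models), the sketch below compares, for a single noisy coefficient, the L1 soft-thresholding estimate with the posterior mean under a spike-and-slab prior; all parameter values are assumed.

```python
import numpy as np
from scipy.stats import norm

# One coefficient observed with noise, y ~ N(beta, sigma^2).
# L1: soft-thresholding (the lasso/Laplace-MAP update).
# Spike-and-slab: prior w*N(0, tau^2) + (1 - w)*delta_0, posterior mean.
sigma, lam = 1.0, 1.5          # noise sd and L1 penalty (assumed values)
w, tau = 0.1, 3.0              # prior inclusion probability and slab sd (assumed)

def l1_estimate(y):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def spike_slab_mean(y):
    m1 = w * norm.pdf(y, 0.0, np.sqrt(tau**2 + sigma**2))   # slab marginal
    m0 = (1 - w) * norm.pdf(y, 0.0, sigma)                  # spike marginal
    p_incl = m1 / (m1 + m0)                                  # posterior inclusion prob.
    return p_incl * (tau**2 / (tau**2 + sigma**2)) * y       # posterior mean

for y in (0.5, 2.0, 6.0):
    print(y, round(l1_estimate(y), 2), round(spike_slab_mean(y), 2))
# L1 always subtracts the fixed penalty lam; the spike-and-slab posterior
# zeroes weak signals but shrinks strong signals only mildly.
```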

Abstract:

Parallel-strand models for the base sequences d(A)10·d(T)10, d(AT)5·d(TA)5, d(G5C5)·d(C5G5), d(GC)5·d(CG)5 and d(CTATAGGGAT)·d(GATATCCCTA), in which reverse Watson-Crick A-T pairing with two H-bonds and reverse Watson-Crick G-C pairing with one H-bond or with two H-bonds were adopted, and three models of the d(T)14·d(A)14·d(T)14 triple helix with different strand orientations, were built up by molecular architecture and energy minimization. Comparisons of the parallel duplex models with their corresponding B-DNA models, and comparisons among the three triple helices, showed that: (i) the conformational energies of the parallel A-T duplex models were a little lower, while those of the G-C duplex models were about 8% higher, than those of their corresponding B-DNA models; (ii) the energy differences between parallel and B-type duplex models, and among the three triple helices, arose mainly from base-stacking energies, especially for G-C base pairing; (iii) the parallel duplexes with one-H-bond G-C pairs were less stable than those with two-H-bond G-C pairs. The present paper includes a brief discussion of the effect of base stacking and base sequences on DNA conformations. (C) 1997 Academic Press Limited.

Abstract:

The performance of a semiconducting carbon nanotube (CNT) is assessed, and its parameters are tabulated against those of a metal-oxide-semiconductor field-effect transistor (MOSFET). Both the CNT and MOSFET models considered agree well with the trends in the available experimental data. The results obtained show that nanotubes can significantly reduce the drain-induced barrier lowering effect and the subthreshold swing when replacing a silicon channel, while sustaining a smaller channel area at higher current density. Performance metrics of both devices, such as current drive strength, current on-off ratio (Ion/Ioff), energy-delay product, and power-delay product for logic gates, namely NAND and NOR, are presented. The design rules used for carbon nanotube field-effect transistors (CNTFETs) are compatible with the 45-nm MOSFET technology. The parasitics associated with interconnects are also incorporated in the model. Interconnects can affect the propagation delay in a CNTFET, and shorter interconnects result in a higher cutoff frequency. © 2012 Tan et al.
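As a quick reminder of how the quoted figures of merit are formed (the numerical values below are invented, not the paper's tabulated results):

```python
# Figures of merit of the kind compared in the paper, for one device/gate.
# All numerical values here are invented placeholders, not the paper's data.
I_on, I_off = 20e-6, 2e-9     # on- and off-state currents (A), assumed
V_dd = 0.9                    # supply voltage (V), assumed
t_d = 5e-12                   # gate propagation delay (s), assumed

on_off_ratio = I_on / I_off            # Ion/Ioff
power = I_on * V_dd                    # rough switching drive power (W)
pdp = power * t_d                      # power-delay product (J)
edp = pdp * t_d                        # energy-delay product (J*s)
print(f"Ion/Ioff = {on_off_ratio:.1e}, PDP = {pdp:.2e} J, EDP = {edp:.2e} J*s")
```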

Abstract:

The aim of this research is to provide a unified modelling-based method to help with the evaluation of organization design and change decisions. Relevant literature regarding model-driven organization design and change is described, which helps identify the requirements for a new modelling methodology. Such a methodology is developed and described. The three phases of the developed method are as follows. First, the use of CIMOSA-based multi-perspective enterprise modelling to understand and capture the most enduring characteristics of process-oriented organizations and to externalize various types of requirement knowledge about any target organization. Second, the use of causal loop diagrams to identify dynamic causal impacts and effects related to the issues and constraints on the organization under study. Third, the use of simulation modelling to quantify the effects of each issue in terms of organizational performance. The design and case-study application of a unified modelling method based on CIMOSA (computer integrated manufacturing open systems architecture) enterprise modelling, causal loop diagrams, and simulation modelling are explored to illustrate its potential to support systematic organization design and change. Further application of the proposed methodology in various company and industry sectors, especially manufacturing, would help illustrate complementary uses and the relative benefits and drawbacks of the methodology in different types of organization. The proposed unified modelling-based method provides a systematic way of enabling key aspects of organization design and change. The case company, its relevant data, and the developed models help to explore and validate the proposed method. The application of the CIMOSA-based unified modelling method, and the integrated application of these three modelling techniques within a single solution space, constitutes an advance on previous best practice. The purpose and application domain of the proposed method also offer an addition to knowledge. © IMechE 2009.

Abstract:

Atlases and statistical models play important roles in the personalization and simulation of cardiac physiology. For the study of the heart, however, the construction of comprehensive atlases and spatio-temporal models is faced with a number of challenges, in particular the need to handle large and highly variable image datasets, the multi-region nature of the heart, and the presence of complex as well as small cardiovascular structures. In this paper, we present a detailed atlas and spatio-temporal statistical model of the human heart based on a large population of 3D+time multi-slice computed tomography sequences, and the framework for its construction. It uses spatial normalization based on nonrigid image registration to synthesize a population mean image and establish the spatial relationships between the mean and the subjects in the population. Temporal image registration is then applied to resolve each subject-specific cardiac motion and the resulting transformations are used to warp a surface mesh representation of the atlas to fit the images of the remaining cardiac phases in each subject. Subsequently, we demonstrate the construction of a spatio-temporal statistical model of shape such that the inter-subject and dynamic sources of variation are suitably separated. The framework is applied to a 3D+time data set of 138 subjects. The data is drawn from a variety of pathologies, which benefits its generalization to new subjects and physiological studies. The obtained level of detail and the extendability of the atlas present an advantage over most cardiac models published previously. © 1982-2012 IEEE.
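As a schematic of what a statistical shape model of this kind computes (deliberately simplified: the registration steps, the multi-region heart anatomy, and the separation of inter-subject from dynamic variation described above are all omitted, and the data here are random placeholders), a point-distribution-style PCA model might look like this:

```python
import numpy as np

# Point-distribution-style statistical shape model. Each row of X is one
# registered surface mesh flattened to a 3*N_points coordinate vector; PCA
# gives the mean shape and the main modes of variation, so a new shape is
# mean + coefficients @ modes. The data below are random placeholders.
rng = np.random.default_rng(1)
n_subjects, n_points = 138, 500                  # 138 subjects, as in the paper
X = rng.normal(size=(n_subjects, 3 * n_points))  # placeholder for real mesh data

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
modes = Vt[:10]                                  # first 10 modes of variation
variance = s[:10] ** 2 / (n_subjects - 1)        # variance captured per mode

b = rng.normal(scale=np.sqrt(variance))          # plausible shape coefficients
new_shape = mean_shape + b @ modes               # synthesize a new shape instance
```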

Abstract:

It is shown that a new mixed nonlinear/eddy-viscosity LES model reproduces profiles better than a number of competing nonlinear and mixed models for plane channel flow. The objective is an LES method that produces a fully resolved turbulent boundary layer and could be applied to a variety of aerospace problems that are currently studied with RANS, RANS-LES, or DES methods lacking a true turbulent boundary layer. There are two components to the new model: first, an eddy viscosity based upon the advected subgrid-scale energy and a relatively small coefficient; second, filtered nonlinear terms based upon the Leray regularization. Coefficients for the eddy viscosity and the nonlinear terms come from LES tests in decaying isotropic turbulence. Using these coefficients, the velocity profile matches measured data at Reτ ≈ 1000 exactly. Profiles of the components of the kinetic energy have the same shape as in the experiment, but the magnitudes differ by about 25%. None of the competing LES models gets the shape correct. This method does not require extra operations at the transition between the boundary layer and the interior flow.
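For orientation, the sketch below evaluates an eddy viscosity of the standard one-equation form built from the subgrid-scale energy, ν_t = C_k Δ √k_sgs; the coefficient value is assumed rather than the paper's calibrated one, and the Leray-regularized nonlinear term of the model is not shown.

```python
import numpy as np

# Eddy viscosity built from the advected subgrid-scale energy k_sgs,
#   nu_t = C_k * Delta * sqrt(k_sgs),
# the standard form for one-equation k_sgs closures. C_k is assumed here.
def eddy_viscosity(k_sgs: np.ndarray, delta: float, c_k: float = 0.05) -> np.ndarray:
    return c_k * delta * np.sqrt(np.maximum(k_sgs, 0.0))

k_sgs = np.array([0.0, 1e-4, 4e-4])   # subgrid kinetic energy per cell (m^2/s^2)
print(eddy_viscosity(k_sgs, delta=1e-3))
```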

Abstract:

This paper discusses road damage caused by heavy commercial vehicles. Chapter 1 presents some important terminology and a brief historical review of road construction and vehicle-road interaction, from ancient times to the present day. The main types of vehicle-generated road damage, and the methods that are used by pavement engineers to analyze them, are discussed in Chapter 2. Attention is also given to the main features of the response of road surfaces to vehicle loads and to mathematical models that have been developed to predict road response. Chapter 3 reviews the effects on road damage of vehicle features which can be studied without consideration of vehicle dynamics. These include gross vehicle weight, axle and tire configurations, tire contact conditions and static load sharing in axle group suspensions. The dynamic tire forces generated by heavy vehicles are examined in Chapter 4. The discussion includes their simulation and measurement, their principal characteristics, the effects of tire and suspension design on dynamic forces, and the potential benefits of using advanced suspensions for minimizing dynamic tire forces. Chapter 5 discusses methods for estimating the effects of dynamic tire forces on road damage. The two main approaches are either to examine the statistics of the forces themselves, or to calculate the response of a pavement model to the forces and then the resulting wear using a material damage model. The issues involved in assessing vehicles for 'road friendliness' are discussed in Chapter 6. Possible assessment methods include measuring strains in an instrumented pavement traversed by the vehicle, measuring dynamic tire forces, or measuring vehicle parameters such as the 'natural frequency' and 'damping ratio'. Each of these measurements involves different assumptions and analysis methods for converting the results into some measure of road damage. Chapter 7 includes a summary of the main conclusions of the paper and recommendations for tire and suspension design, road design and construction, and vehicle regulations.

Abstract:

Large-margin criteria and discriminative models are two effective improvements for HMM-based speech recognition. This paper proposes a large-margin trained log-linear model with kernels for CSR. To avoid explicitly computing in the high-dimensional feature space and to achieve nonlinear decision boundaries, a kernel-based training and decoding framework is proposed in this work. To make the system robust to noise, a kernel adaptation scheme is also presented. Previous work in this area is extended in two directions. First, most kernels for CSR focus on measuring the similarity between two observation sequences; the proposed joint kernels define a similarity between two observation-label sequence pairs at the sentence level. Second, this paper addresses how to efficiently employ kernels in large-margin training and decoding with lattices. To the best of our knowledge, this is the first attempt at using large-margin kernel-based log-linear models for CSR. The model is evaluated on a noise-corrupted continuous digit task: AURORA 2.0. © 2013 IEEE.
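A bare-bones sketch of the two ingredients named above: a joint kernel over observation-label sequence pairs and a large-margin (hinge) criterion over competing hypotheses. The function and variable names are placeholders and do not reflect the paper's implementation.

```python
# A kernelized log-linear score over (observation sequence, label sequence)
# pairs, and a hinge-style large-margin loss over competing hypotheses (for
# example, alternatives read off a lattice). All names are placeholders.
def score(pair, support_pairs, alphas, joint_kernel):
    """s(O, W) = sum_i alpha_i * K((O, W), (O_i, W_i)) -- the score is
    expressed through a kernel expansion, so the feature space stays implicit."""
    return sum(a * joint_kernel(pair, sp) for a, sp in zip(alphas, support_pairs))

def margin_violation(obs, ref, competitors, support_pairs, alphas, joint_kernel,
                     margin=1.0):
    """The reference labelling should beat every competitor by at least
    `margin`; the returned value is the worst hinge loss over the competitors."""
    s_ref = score((obs, ref), support_pairs, alphas, joint_kernel)
    gaps = [margin - (s_ref - score((obs, w), support_pairs, alphas, joint_kernel))
            for w in competitors]
    return max(0.0, max(gaps)) if gaps else 0.0
```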

Abstract:

Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
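To make the two model families concrete (a toy generator, not the actual models or parameters compared in the paper), the sketch below draws synthetic recall errors from a slot-style model with guessing and from a slotless model whose precision keeps degrading with set size.

```python
import numpy as np

# Synthetic recall errors for a delayed-estimation task under a slot-style
# model (a fixed number of slots plus random guessing) and a slotless
# resource model (noise grows continuously with set size).
# All parameter values are assumed for illustration; circular wrapping of
# errors is ignored to keep the toy short.
rng = np.random.default_rng(0)
n_trials, n_slots, sd_slot = 1000, 3, 0.5

def slot_model_errors(set_size):
    """Each item is stored with fixed noise with probability slots/set_size,
    otherwise the response is a uniform guess on the circle."""
    stored = rng.random(n_trials) < min(1.0, n_slots / set_size)
    guesses = rng.uniform(-np.pi, np.pi, n_trials)
    encoded = rng.normal(0.0, sd_slot, n_trials)
    return np.where(stored, encoded, guesses)

def slotless_model_errors(set_size, sd1=0.3, power=0.7):
    """No guessing; the noise simply grows with the number of items."""
    return rng.normal(0.0, sd1 * set_size ** power, n_trials)

for n in (1, 2, 4, 8):
    print(n, np.std(slot_model_errors(n)).round(2),
          np.std(slotless_model_errors(n)).round(2))
```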