884 results for unified theories and models of strong and electroweak


Relevance: 100.00%

Abstract:

Sitophilus oryzae (Linnaeus) is a major pest of stored grain across Southeast Asia and is of increasing concern in other regions due to the advent of strong resistance to phosphine, the fumigant used to protect stored grain from pest insects. We investigated the inheritance of genes controlling resistance to phosphine in a strongly resistant S. oryzae strain (NNSO7525) collected in Australia and found that the trait is autosomally inherited and incompletely recessive, with a degree of dominance of -0.66. The strongly resistant strain has an LC50 52 times greater than that of a susceptible reference strain (LS2) and 9 times greater than that of a weakly resistant strain (QSO335). Analysis of F2 and backcross progeny indicates that two or more genes are responsible for strong resistance, and that one of these genes, designated Sorph1, not only contributes to strong resistance but is also responsible for the weak resistance phenotype of strain QSO335. These results demonstrate that the genetic mechanism of phosphine resistance in S. oryzae is similar to that of other stored-product insect pests. A unique observation is that a subset of the progeny of an F1 backcross generation is more strongly resistant to phosphine than the parental strongly resistant strain, which may be caused by multiple alleles of one of the resistance genes.
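The degree of dominance quoted above is commonly computed with Stone's (1968) formula on a log-concentration scale. A minimal sketch, assuming that formula and the LC50 ratios from the abstract (susceptible LC50 normalised to 1; the F1 LC50 of 1.96 is a back-solved hypothetical value, not reported data):

```python
import math

def degree_of_dominance(lc50_rr, lc50_ss, lc50_rs):
    """Stone's (1968) degree of dominance on a log-concentration scale.

    Returns +1 for completely dominant resistance and -1 for completely
    recessive resistance, given homozygote (rr, ss) and heterozygote (rs)
    LC50 values.
    """
    r, s, h = math.log10(lc50_rr), math.log10(lc50_ss), math.log10(lc50_rs)
    return (2 * h - r - s) / (r - s)

# Hypothetical illustration using the ratios reported in the abstract:
# susceptible LC50 normalised to 1, strongly resistant strain 52x greater.
# An F1 LC50 of ~1.96 reproduces the reported degree of dominance.
d = degree_of_dominance(52.0, 1.0, 1.96)
print(round(d, 2))  # → -0.66
```

The negative value close to -1 is what "incompletely recessive" means: the F1 LC50 sits much nearer the susceptible parent than the resistant one.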

Relevance: 100.00%

Abstract:

Objectives Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems could be linked to the functional form of the model that represents the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied to DSTs; and b) to empirically evaluate some popular models and alternatives. Design and methods This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (linear, relative yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated for a wide range of weed densities, maximum potential yield loss and maximum yield loss per weed. Results Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models were modified to account for the effect of important factors (weather, tillage and growth stage of crop at weed emergence) influencing weed–crop interaction and to improve prediction accuracy. This limited their applicability for use in DSTs as they became less generalized in nature and often were applicable to a much narrower range of conditions than would be encountered in the use of DSTs. These factors' effects could be better accounted for by using other techniques. Among the models empirically assessed, the linear model is a very simple model which appears to work well at sparse weed densities, but it produces unrealistic behaviour at high densities. The relative-yield model exhibits expected behaviour at high densities and high levels of maximum yield loss per weed but probably underestimates yield loss at low to intermediate densities. 
The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed and high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds that will produce a certain proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions. Conclusions Previously tested functional forms exhibit problems for use in DSTs for crop yield loss modelling. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
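Two of the basic functional forms reviewed above are easy to state directly. The sketch below implements the linear model and a rectangular-hyperbolic (Cousens-type) model with arbitrary illustrative parameters (i = initial slope in % loss per weed, a = asymptotic maximum % loss); it is not the study's own evaluation code, but it shows the unbounded behaviour of the linear model at high densities:

```python
def linear_yield_loss(d, i):
    """Linear model: loss proportional to weed density, with no upper bound."""
    return i * d

def hyperbolic_yield_loss(d, i, a):
    """Rectangular-hyperbolic model: initial slope i, asymptotic maximum a."""
    return i * d / (1.0 + i * d / a)

# At sparse densities the two models nearly agree; at high densities the
# linear model exceeds 100% yield loss while the hyperbolic model
# saturates at its asymptote a.
for d in (1, 10, 100, 1000):
    print(d, round(linear_yield_loss(d, 0.5), 1),
          round(hyperbolic_yield_loss(d, 0.5, 60.0), 1))
```

With i = 0.5 and a = 60, the linear model predicts 500% loss at 1000 weeds per unit area, while the hyperbolic prediction stays below 60%.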

Relevance: 100.00%

Abstract:

One of the so-called ‘wicked problems’ confronting most nations is poverty, or the unequal distribution of resources. This problem is perennial, but how, where and with which physical, psychological, social and educational effects, and for which students (and their teachers), needs continual scrutiny. Poverty is relative. Entire populations may be poor or groups of people and individuals within nations may be poor. Poverty results from injustice. Not only the un- and under-employed are living in poverty, but also the ‘working poor’. Now we see affluent societies with growing pockets of persistent poverty. While there are those who dispute the statistics on the rise of poverty because different nations use different measures (for example see Biddle, 2013; http://theconversation.com/factcheck-is-poverty-on-the-rise-in-australia-17512), there seems to be little dispute that the gaps between the richest and the poorest are increasing (see http://www.stanford.edu/group/scspi/sotu/SOTU_2014_CPI.pdf)...

Relevance: 100.00%

Abstract:

Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed, there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood, and a patient dies roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LRI), reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. 
We also study two types of control schemes that have been suggested for the control of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that applies control pulses on a spatially extended mesh is more successful in eliminating spiral turbulence. We discuss the theoretical and experimental implications of our study, which have a direct bearing on defibrillation, i.e., the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
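The kind of wave-propagation simulation described above can be illustrated with a generic two-variable excitable medium. The sketch below uses the FitzHugh-Nagumo equations rather than the Panfilov, LRI, RPB, or TNNP models of the study, on a deliberately tiny grid with arbitrary illustrative parameters; it simply seeds a travelling wave from one stimulated site:

```python
def laplacian(u):
    """Five-point Laplacian with reflecting (no-flux) boundaries."""
    n = len(u)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            up    = u[i - 1][j] if i > 0 else u[i + 1][j]
            down  = u[i + 1][j] if i < n - 1 else u[i - 1][j]
            left  = u[i][j - 1] if j > 0 else u[i][j + 1]
            right = u[i][j + 1] if j < n - 1 else u[i][j - 1]
            out[i][j] = up + down + left + right - 4.0 * u[i][j]
    return out

def step(u, v, dt=0.1, d=1.0, eps=0.08, beta=0.7, gamma=0.8):
    """One explicit-Euler step of the FitzHugh-Nagumo excitable medium:
    fast variable u (activation), slow recovery variable v."""
    lap = laplacian(u)
    n = len(u)
    for i in range(n):
        for j in range(n):
            du = d * lap[i][j] + u[i][j] - u[i][j] ** 3 / 3.0 - v[i][j]
            dv = eps * (u[i][j] + beta - gamma * v[i][j])
            u[i][j] += dt * du
            v[i][j] += dt * dv
    return u, v

n = 16
u = [[0.0] * n for _ in range(n)]
v = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 2.0          # a stimulated site seeds a travelling wave
for _ in range(50):
    u, v = step(u, v)
print(max(max(row) for row in u))
```

Spiral waves arise in such media when a propagating wavefront is broken (for example by an inhomogeneity), which is the regime the study probes with its far more detailed ionic models.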

Relevance: 100.00%

Abstract:

Despite positive testing in animal studies, more than 80% of novel drug candidates fail to prove their efficacy when tested in humans. This is primarily due to the use of preclinical models that are not able to recapitulate the physiological or pathological processes in humans. Hence, one of the key challenges in the field of translational medicine is to “make the model organism mouse more human.” To get answers to questions that would be prognostic of outcomes in human medicine, the mouse's genome can be altered in order to create a more permissive host that allows the engraftment of human cell systems. It has been shown in the past that these strategies can improve our understanding of tumor immunology. However, the translational benefits of these platforms have still to be proven. In the 21st century, several research groups and consortia around the world take up the challenge to improve our understanding of how to humanize the animal's genetic code, its cells and, based on tissue engineering principles, its extracellular microenvironment, its tissues, or entire organs with the ultimate goal to foster the translation of new therapeutic strategies from bench to bedside. This article provides an overview of the state of the art of humanized models of tumor immunology and highlights future developments in the field such as the application of tissue engineering and regenerative medicine strategies to further enhance humanized murine model systems.

Relevance: 100.00%

Abstract:

This study examines different ways in which the concept of media pluralism has been theorized and used in contemporary media policy debates. Access to a broad range of different political views and cultural expressions is often regarded as a self-evident value in both theoretical and political debates on media and democracy. Opinions on the meaning and nature of media pluralism as a theoretical, political or empirical concept, however, are many, and it can easily be adjusted to different political purposes. The study aims to analyse the ambiguities surrounding the concept of media pluralism in two ways: by deconstructing its normative roots from the perspective of democratic theory, and by examining its different uses, definitions and underlying rationalities in current European media policy debates. The first part of the study examines the values and assumptions behind the notion of media pluralism in the context of different theories of democracy and the public sphere. The second part then analyses and assesses the deployment of the concept in contemporary European policy debates on media ownership and public service media. Finally, the study critically evaluates various attempts to create empirical indicators for measuring media pluralism and discusses their normative implications and underlying rationalities. The analysis of contemporary policy debates indicates that the notion of media pluralism has been too readily reduced to an empty catchphrase or conflated with consumer choice and market competition. In this narrow technocratic logic, pluralism is often unreflectively associated with quantitative data in a way that leaves unexamined key questions about social and political values, democracy, and citizenship. The basic argument advanced in the study is that media pluralism needs to be rescued from its depoliticized uses and re-imagined more broadly as a normative value that refers to the distribution of communicative power in the public sphere. 
Instead of something that could simply be measured through the number of media outlets available, the study argues that media pluralism should be understood in terms of its ability to challenge inequalities in communicative power and create a more democratic public sphere.

Relevance: 100.00%

Abstract:

We study effective models of chiral fields and the Polyakov loop expected to describe the dynamics responsible for the phase structure of two-flavor QCD at finite temperature and density. We describe the chiral sector using either the linear sigma model or the Nambu-Jona-Lasinio model, study the phase diagram, and determine the location of the critical point as a function of the explicit chiral symmetry breaking (i.e. the bare quark mass $m_q$). We also discuss the possible emergence of the quarkyonic phase in this model.
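For reference, the chiral sector mentioned above has a standard textbook form. As a sketch (this is the generic two-flavor NJL Lagrangian, not necessarily the exact parameterization or Polyakov-loop coupling used in the study), the explicit symmetry breaking enters through the bare quark mass $m_q$:

```latex
\mathcal{L}_{\mathrm{NJL}} =
  \bar{q}\left(i\gamma^{\mu}\partial_{\mu} - m_q\right)q
  + G\left[\left(\bar{q}q\right)^{2}
         + \left(\bar{q}\,i\gamma_{5}\vec{\tau}\,q\right)^{2}\right]
```

In the chiral limit $m_q \to 0$ the Lagrangian is symmetric under $SU(2)_L \times SU(2)_R$; a nonzero $m_q$ breaks the symmetry explicitly, which is why the location of the critical point can be traced as a function of $m_q$.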

Relevance: 100.00%

Abstract:

A conceptually unifying and flexible approach to the ABC and FGH segments of the nortriterpenoid rubrifloradilactone C, each embodying a furo[3,2-b]furanone moiety, from the appropriate Morita-Baylis-Hillman adducts is delineated.

Relevance: 100.00%

Abstract:

Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the utilization of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationship between different concepts becomes clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and being used as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and idiosyncrasies of individual organisms. What such models can tell us, however, has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that the patterns produced in the model could, in principle, also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models do strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions: the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in that higher invasion speeds at a given point in time will lead to a reduction in speed. 
Two simple mathematical models show that analysis of this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs-of-principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for adaptive behavior for parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal; in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies; evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because they have a reduced variance in fitness coupled with a reduction in mean fitness, and fitness is of a multiplicative nature across generations, and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population-genetic model with uncertain fitness and expressing genotypic variance in fitness as a product between individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies, conservative vs. diversifying and within- vs. between-generation bet-hedging. In addition, this model shows that these divisions are in fact false dichotomies.
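The bet-hedging argument above — a strategy with lower mean fitness winning because fitness multiplies across generations — comes down to geometric versus arithmetic means. A minimal sketch with hypothetical per-generation fitness values (two equally likely environments):

```python
import math

def arithmetic_mean(ws):
    return sum(ws) / len(ws)

def geometric_mean(ws):
    # Geometric mean governs long-run multiplicative growth across generations.
    return math.exp(sum(math.log(w) for w in ws) / len(ws))

# Hypothetical fitnesses in two equally likely environments:
risky      = [1.45, 0.60]   # higher arithmetic mean, high variance
bet_hedger = [1.00, 1.00]   # lower arithmetic mean, zero variance

print(arithmetic_mean(risky), arithmetic_mean(bet_hedger))  # 1.025 vs 1.0
print(geometric_mean(risky), geometric_mean(bet_hedger))
```

The risky strategy's geometric mean (sqrt(1.45 * 0.60) ≈ 0.93) is below 1, so despite its higher arithmetic mean it declines in the long run, while the constant-fitness bet-hedger persists.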

Relevance: 100.00%

Abstract:

We consider numerical solutions of nonlinear multiterm fractional integrodifferential equations, where the order of the highest derivative is fractional and positive but is otherwise arbitrary. Here, we extend and unify our previous work, where a Galerkin method was developed for efficiently approximating fractional order operators and where elements of the present differential algebraic equation (DAE) formulation were introduced. The DAE system developed here for arbitrary orders of the fractional derivative includes an added block of equations for each fractional order operator, as well as forcing terms arising from nonzero initial conditions. We motivate and explain the structure of the DAE in detail. We explain how nonzero initial conditions should be incorporated within the approximation. We point out that our approach approximates the system and not a specific solution. Consequently, some questions not easily accessible to solvers of initial value problems, such as stability analyses, can be tackled using our approach. Numerical examples show excellent accuracy. DOI: 10.1115/1.4002516
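For readers unfamiliar with fractional-order operators, the Grünwald-Letnikov series gives a direct (if less efficient) discretization that contrasts with the Galerkin/DAE approach described above. The sketch below is not the authors' method, and the step size is an arbitrary choice:

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t.

    A simple direct discretization for illustration only; the paper's own
    method is a Galerkin/DAE formulation, not this scheme.
    """
    n = int(t / h)
    w, total = 1.0, f(t)                   # k = 0 term has weight 1
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k       # recurrence for (-1)^k C(alpha, k)
        total += w * f(t - k * h)
    return total / h ** alpha

# Check against the exact half-derivative of f(t) = t, which is 2*sqrt(t/pi).
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2.0 * math.sqrt(1.0 / math.pi)
print(round(approx, 3), round(exact, 3))
```

Note that this scheme approximates the derivative pointwise from past values, whereas the DAE formulation in the paper approximates the operator itself, which is what makes system-level questions like stability analysis accessible.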

Relevance: 100.00%

Abstract:

We show by numerical simulations that discretized versions of commonly studied continuum nonlinear growth equations (such as the Kardar-Parisi-Zhang equation and the Lai-Das Sarma-Villain equation) and related atomistic models of epitaxial growth have a generic instability in which isolated pillars (or grooves) on an otherwise flat interface grow in time when their height (or depth) exceeds a critical value. Depending on the details of the model, the instability found in the discretized version may or may not be present in the truly continuum growth equation, indicating that the behavior of discretized nonlinear growth equations may be very different from that of their continuum counterparts. This instability can be controlled either by the introduction of higher-order nonlinear terms with appropriate coefficients or by restricting the growth of pillars (or grooves) by other means. A number of such "controlled instability" models are studied by simulation. For appropriate choice of the parameters used for controlling the instability, these models exhibit intermittent behavior, characterized by multiexponent scaling of height fluctuations, over the time interval during which the instability is active. The behavior found in this regime is very similar to the "turbulent" behavior observed in recent simulations of several one- and two-dimensional atomistic models of epitaxial growth.
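The discretizations in question can be made concrete in one dimension. The sketch below is a deterministic explicit-Euler update of a discretized KPZ equation (noise omitted for clarity; the coefficients, time step, and pillar height are arbitrary illustrative choices kept modest so the run stays bounded — the instability reported above sets in only beyond a model-dependent critical pillar height):

```python
def kpz_step(h, dt=0.01, nu=1.0, lam=1.0):
    """One explicit-Euler step of a deterministic, discretized 1D KPZ
    equation, dh/dt = nu * d2h/dx2 + (lam/2) * (dh/dx)^2, with periodic
    boundaries and unit lattice spacing."""
    n = len(h)
    new = h[:]
    for i in range(n):
        left, right = h[i - 1], h[(i + 1) % n]
        lap = left + right - 2.0 * h[i]      # discrete Laplacian
        grad = (right - left) / 2.0          # centred gradient
        new[i] = h[i] + dt * (nu * lap + 0.5 * lam * grad * grad)
    return new

# Start from a flat interface with one isolated pillar.
h = [0.0] * 32
h[16] = 1.0
for _ in range(100):
    h = kpz_step(h)
print(max(h))
```

The centred-difference choice for the gradient is exactly the kind of discretization detail that, per the abstract, can introduce behaviour absent from the continuum equation.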

Relevance: 100.00%

Abstract:

An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs) and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of k.p formalism. It has been found, taking InAs and InSb as examples, that the EEM in QWs, ILs and superlattices increases with increasing concentration, light intensity and wavelength of the incident light waves, respectively, and the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field, and in NIPI superlattices the same is a function of the Fermi energy and the subband index characterizing such 2D structures. The appearance of the humps in the respective curves is due to the redistribution of the electrons among the quantized energy levels when the quantum number corresponding to the highest occupied level changes from one fixed value to another. Although the EEM varies in various manners with all the variables, as evident from all the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper transform into well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, thus confirming the compatibility test. The results of this paper find three applications in the field of microstructures.

Relevance: 100.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes; and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
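The first analysis above rests on a simple geometric fact: each hop spans at most the connection radius, so the hop count (times the radius) always bounds the Euclidean distance from above only in the sense that h * R >= d. A minimal sketch, assuming a random geometric graph on a unit square with an arbitrary node count and radius:

```python
import math
import random
from collections import deque

random.seed(42)
N, R = 200, 0.15                     # nodes in unit square, connection radius
pts = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Random geometric graph: an edge joins any two nodes closer than R.
adj = [[j for j in range(N) if j != i and dist(pts[i], pts[j]) < R]
       for i in range(N)]

def hop_counts(src):
    """Breadth-first search giving the hop distance from src to every node
    (None for unreachable nodes)."""
    hops = [None] * N
    hops[src] = 0
    q = deque([src])
    while q:
        i = q.popleft()
        for j in adj[i]:
            if hops[j] is None:
                hops[j] = hops[i] + 1
                q.append(j)
    return hops

hops = hop_counts(0)                 # node 0 plays the role of an anchor
# Each hop covers at most R, so h * R >= Euclidean distance always holds;
# in dense deployments h * R is also a serviceable proxy for that distance.
for i in range(N):
    if hops[i]:
        assert hops[i] * R >= dist(pts[0], pts[i]) - 1e-9
```

The paper's contribution is the converse, harder direction: high-probability bounds and an approximate density for the Euclidean distance *given* a hop count h, which this toy construction lets one explore empirically.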

Relevance: 100.00%

Abstract:

Automated image segmentation techniques are useful tools in biological image analysis and are an essential step in tracking applications. Typically, snakes or active contours are used for segmentation and they evolve under the influence of certain internal and external forces. Recently, a new class of shape-specific active contours have been introduced, which are known as Snakuscules and Ovuscules. These contours are based on a pair of concentric circles and ellipses as the shape templates, and the optimization is carried out by maximizing a contrast function between the outer and inner templates. In this paper, we present a unified approach to the formulation and optimization of Snakuscules and Ovuscules by considering a specific form of affine transformations acting on a pair of concentric circles. We show how the parameters of the affine transformation may be optimized to generate either Snakuscules or Ovuscules. Our approach allows for a unified formulation and relies only on generic regularization terms and not shape-specific regularization functions. We show how the calculations of the partial derivatives may be made efficient thanks to Green's theorem. Results on synthesized as well as real data are presented.
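The contrast-maximization idea behind these templates can be sketched for the circular (Snakuscule) case. The radius ratio, uniform weighting, and synthetic image below are simplifying assumptions for illustration, not the paper's exact energy functional:

```python
def circle_contrast(img, cx, cy, r):
    """Contrast between an inner disk of radius r and the surrounding
    annulus out to radius 2r, in the spirit of a Snakuscule energy.
    (The exact radius ratio and weighting used in the paper may differ.)"""
    inner, outer = [], []
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r * r:
                inner.append(img[y][x])
            elif d2 <= 4 * r * r:
                outer.append(img[y][x])
    return sum(inner) / len(inner) - sum(outer) / len(outer)

# Synthetic image: a bright disk of radius 5 centred at (15, 15).
img = [[1.0 if (x - 15) ** 2 + (y - 15) ** 2 <= 25 else 0.0
        for x in range(32)] for y in range(32)]
# The contrast is largest when the template is centred on the disk,
# which is what gradient-based optimization of the template exploits.
print(circle_contrast(img, 15, 15, 5) > circle_contrast(img, 5, 5, 5))
```

In the actual method this objective is differentiated with respect to the affine (position, scale, orientation) parameters, with Green's theorem turning area integrals into contour integrals for efficiency.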

Relevance: 100.00%

Abstract:

This paper presents a comparative evaluation of the average and switching models of a dc-dc boost converter from the point of view of real-time simulation. Both models are used to simulate the converter in real time on a Field Programmable Gate Array (FPGA) platform. The converter is considered to function over a wide range of operating conditions and can transition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM). While the average model is known to be computationally efficient from the perspective of off-line simulation, it is shown here to consume more logical resources than the switching model for real-time simulation of the dc-dc converter. Further, evaluation of the boundary condition between CCM and DCM is found to be the main reason for the increased resource consumption of the average model.
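The CCM/DCM boundary evaluation discussed above follows a standard textbook criterion for the boost topology: compare the dimensionless parameter K = 2*L*f_sw/R against K_crit = D*(1-D)^2. A minimal sketch with hypothetical component values (this illustrates the boundary test itself, not the paper's FPGA implementation):

```python
def boost_conduction_mode(L, R_load, f_sw, duty):
    """Classify a boost converter's steady-state conduction mode.

    Textbook criterion: K = 2*L*f_sw/R_load versus K_crit = D*(1-D)^2.
    The converter operates in CCM when K > K_crit, otherwise DCM.
    L in henries, R_load in ohms, f_sw in hertz, duty in [0, 1].
    """
    k = 2.0 * L * f_sw / R_load
    k_crit = duty * (1.0 - duty) ** 2
    return "CCM" if k > k_crit else "DCM"

# Hypothetical design: 100 uH inductor, 100 kHz switching, D = 0.4.
# Light load (large R_load) pushes the same design into DCM.
print(boost_conduction_mode(100e-6, 10.0, 100e3, 0.4))    # heavy load
print(boost_conduction_mode(100e-6, 1000.0, 100e3, 0.4))  # light load
```

Because duty, load, and inductance all enter this test, an average model must evaluate it (and switch state equations) every simulation step, which is consistent with the resource overhead the paper reports.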