993 results for Supersymmetric Effective Theories
Abstract:
This study seeks to understand the prevailing status of Nepalese media portrayal of natural disasters and to develop a disaster management framework that improves the effectiveness and efficiency of news production through the continuum of prevention, preparedness, response and recovery (PPRR) phases of disaster management. The study is in progress and is being undertaken in three phases. In Phase 1, a qualitative content analysis is conducted. The news contents are categorized into the frames proposed by framing theory together with pre-defined frames. The researcher has also drawn on theories of the press, in particular social responsibility theory, as it is regarded as the major obligation of the media towards society. The contents are then categorized according to the PPRR cycle. In Phase 2, based on the findings of the content analysis, 12 in-depth interviews with journalists, disaster managers and community leaders are conducted. In Phase 3, drawing on the findings of the content analysis and the in-depth interviews, a framework for effective media management of disasters is developed using thematic analysis. As the study is still in progress, findings from the pilot study are presented. The response phase of disasters is the most commonly reported in Nepal, with relatively low coverage of preparedness and prevention. Furthermore, the responsibility frame is the most prevalent in the news, followed by the human interest frame. Economic consequences and conflict frames are also used in reporting, and vulnerability assessment has been used as an additional frame. The outcomes of this study are multifaceted. At the micro level, people will benefit because effective dissemination of information through news and other modes of media will enable a reduction in the loss of human lives and property; they will be 'well prepared for', 'able to prevent', 'respond to' and 'recover from' natural disasters. At the meso level, the media industry will benefit by having its own 'disaster management model of news production' as an effective disaster reporting tool, improving editorial judgment and priority setting. At the macro level, the study will assist government and other agencies to develop appropriate policies and strategies for better management of natural disasters.
Abstract:
"In Perpetual Motion is an "historical choreography" of power, pedagogy, and the child from the 1600s to the early 1900s. It breaks new ground by historicizing the analytics of power and motion that have interpenetrated renditions of the young. Through a detailed examination of the works of John Locke, Jean-Jacques Rousseau, Johann Herbart, and G. Stanley Hall, this book maps the discursive shifts through which the child was given a unique nature, inscribed in relation to reason, imbued with an effectible interiority, and subjected to theories of power and motion. The book illustrates how developmentalist visions took hold in U.S. public school debates. It documents how particular theories of power became submerged and taken for granted as essences inside the human subject. In Perpetual Motion studiously challenges views of power as in or of the gaze, tracing how different analytics of power have been used to theorize what gazing could notice."--BOOK JACKET.
Abstract:
There are essentially two different phenomenological models available to describe the interdiffusion process in binary systems in the solid state. The first of these, which is used more frequently, is based on the theory of flux partitioning. The second model, developed much more recently, uses the theory of dissociation and reaction. Although the theory of flux partitioning has been widely used, we found that this theory does not account for the mobility of both species and therefore is not suitable for use in most interdiffusion systems. We have first modified this theory to take into account the mobility of both species and then further extended it to develop relations for the integrated diffusion coefficient and the ratio of diffusivities of the species. The versatility of these two different models is examined in the Co-Si system with respect to different end-member compositions. From our analysis, we found that the applicability of the theory of flux partitioning is rather limited, but the theory of dissociation and reaction can be used in any binary system.
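For orientation, two textbook relations that sit behind the quantities named in this abstract are the Darken expressions for the interdiffusion coefficient and the Kirkendall marker velocity, and Wagner's definition of the integrated diffusion coefficient; they are reproduced below as standard background, not as the paper's modified relations.

```latex
% Standard background relations (Darken; Wagner), not the paper's modified forms.
% X_A, X_B are mole fractions; D_A, D_B are intrinsic diffusivities.
\begin{align}
  \tilde{D} &= X_A D_B + X_B D_A, \\                                  % Darken interdiffusion coefficient
  v_K &= \left(D_A - D_B\right)\frac{\partial X_A}{\partial x}, \\    % Kirkendall marker velocity
  \tilde{D}^{\,\beta}_{\mathrm{int}}
      &= \int_{X_B^{\beta_1}}^{X_B^{\beta_2}} \tilde{D}\,\mathrm{d}X_B % Wagner's integrated diffusion coefficient
\end{align}
```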
Abstract:
Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops obtained by stochastic simulation techniques deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation, from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide, both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance, as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturation distributions at short times but deteriorates progressively at larger times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory, calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce the expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations, eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.
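As a toy illustration of the kind of stochastic simulation such mean field results are benchmarked against, the hedged Python sketch below runs a minimal event-driven simulation of nucleation and growth in a single small drop and reports the mean and variance of the final particle number. The rate laws and every parameter value are hypothetical and are not taken from the paper.

```python
# Hedged sketch: toy stochastic simulation of precipitation in a single small drop.
# Nucleation and growth events are chosen in proportion to their propensities
# (time is not tracked; only the final particle count matters here). Particle-number
# fluctuations across many drops are what mean-field and population-balance
# predictions are compared against in the paper; all parameters are hypothetical.
import random

def simulate_drop(n_solute=200, k_nuc=0.05, k_growth=1.0, rng=random):
    """Return the final number of particles formed in one drop.

    n_solute : initial number of dissolved 'units' (sets the supersaturation)
    k_nuc    : nucleation rate constant (per solute unit)
    k_growth : growth rate constant (per particle per solute unit)
    """
    s = n_solute        # remaining dissolved units (residual supersaturation)
    particles = 0
    while s > 0:
        r_nuc = k_nuc * s                  # nucleation propensity
        r_gro = k_growth * particles * s   # growth propensity
        total = r_nuc + r_gro
        if rng.random() * total < r_nuc:
            particles += 1                 # nucleation event creates a new particle
        s -= 1                             # either event consumes one solute unit
    return particles

if __name__ == "__main__":
    counts = [simulate_drop() for _ in range(2000)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    print(f"mean particles = {mean:.2f}, variance = {var:.2f}")
```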
Abstract:
When designed effectively, dashboards are expected to reduce information overload and improve performance management. Hence, interest in dashboards has increased recently, which is also evident from the proliferation of dashboard solution providers in the market. Despite dashboards' popularity, little is known about the extent of their effectiveness in organizations. Dashboards draw from multiple disciplines but ultimately use visualization to communicate important information to stakeholders. Thus, a better understanding of visualization can improve the design and use of dashboards. This paper reviews the foundations and roles of dashboards in performance management and proposes a framework for future research, which can enhance dashboard design and perceived usefulness depending on the fit between the features of the dashboard and the characteristics of the users.
Abstract:
Sustainable urban development, a major issue at the global scale, will become ever more relevant according to population growth predictions in developed and developing countries. Societal and international recognition of sustainability concerns led to the development of specific tools and procedures, known as sustainability assessments/appraisals (SA). Their effectiveness, however, considering that global quality of life indicators have worsened since their introduction, has prompted a re-thinking of SA instruments. More precisely, Strategic Environmental Assessment (SEA), a tool introduced in the European context to evaluate policies, plans, and programmes (PPPs), is being reconsidered because of several features that seem to limit its effectiveness. Over time, SEA has evolved in response to external and internal factors concerning technical, procedural, planning and governance systems, involving a paradigm shift from EIA-based SEAs (first-generation protocols) towards more integrated approaches (second-generation ones). Changes affecting SEA are formalised through legislation in each Member State to guide institutions at regional and local level. Defining SEA effectiveness is quite difficult: its capacity-building process appears far from its conclusion, and no definitive version can yet be conceptualized. In this paper, we consider several European nations with different planning systems and SA traditions. After identifying a set of analytical criteria, a multi-dimensional cluster analysis is developed on a number of case studies to outline current weaknesses.
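As a hedged illustration of what a multi-dimensional cluster analysis of case studies scored against analytical criteria might look like in code, the following Python sketch applies Ward hierarchical clustering to an invented score matrix; the case names, criteria and values are placeholders, not the paper's data.

```python
# Hedged sketch: hierarchical clustering of case studies scored on analytical
# criteria. All names and scores below are hypothetical placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows = case studies, columns = hypothetical criteria scores in [0, 1]
cases = ["Case_A", "Case_B", "Case_C", "Case_D", "Case_E"]
scores = np.array([
    [0.8, 0.6, 0.7, 0.5],
    [0.7, 0.5, 0.6, 0.4],
    [0.3, 0.2, 0.4, 0.3],
    [0.2, 0.3, 0.3, 0.2],
    [0.9, 0.7, 0.8, 0.6],
])

Z = linkage(scores, method="ward")               # agglomerative (Ward) clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
for name, lab in zip(cases, labels):
    print(name, "-> cluster", lab)
```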
Abstract:
Metal Auger line intensity ratios were shown by Rao and others to be directly related to the occupancy of valence states. It is now shown that these intensity ratios are more generally related to the effective charge on the metal atom. The Auger intensity ratios are also directly proportional to valence band intensities of metals. Correlations of the intensity ratios with Auger parameters have also been examined.
Abstract:
Using the dimensional reduction regularization scheme, we show that radiative corrections to the anomaly of the axial current, which is coupled to the gauge field, are absent in a supersymmetric U(1) gauge model for both 't Hooft-Veltman and Bardeen prescriptions for γ5. We also discuss the results with reference to conventional dimensional regularization. This result has significant implications with respect to the renormalizability of supersymmetric models.
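For reference, the one-loop anomaly of the axial current in an abelian gauge theory has the familiar Adler-Bell-Jackiw form shown below (normalization conventions vary between references); the abstract's result concerns the absence of higher-order radiative corrections to this term in the supersymmetric U(1) model under dimensional reduction.

```latex
% One-loop ABJ anomaly for a fermion of charge e coupled to an abelian gauge
% field; normalization conventions differ between references.
\begin{equation}
  \partial_\mu j^{\mu}_{5}
  = \frac{e^{2}}{16\pi^{2}}\,
    \varepsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}
\end{equation}
```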
Abstract:
We report the results of two studies of aspects of the consistency of truncated nonlinear integral equation based theories of freezing: (i) We show that the self-consistent solutions to these nonlinear equations are unfortunately sensitive to the level of truncation. For the hard sphere system, if the Wertheim–Thiele representation of the pair direct correlation function is used, the inclusion of part but not all of the triplet direct correlation function contribution, as has been common, worsens the predictions considerably. We also show that the convergence of the solutions found, with respect to the number of reciprocal lattice vectors kept in the Fourier expansion of the crystal singlet density, is slow. These conclusions imply great sensitivity to the quality of the pair direct correlation function employed in the theory. (ii) We show that the direct correlation function based and the pair correlation function based theories of freezing can be cast into a form which requires the solution of isomorphous nonlinear integral equations. However, in the pair correlation function theory the usual neglect of the influence of inhomogeneity of the density distribution on the pair correlation function is shown to be inconsistent to the lowest order in the change of density on freezing, and to lead to erroneous predictions.
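For orientation, the truncated nonlinear integral equations at issue take, in their standard first-order (pair-level) form, the shape of a self-consistency condition for the crystal singlet density expanded in reciprocal lattice vectors. A schematic version is given below, with conventions that vary between authors and with the triplet contribution indicated only symbolically.

```latex
% Schematic first-order (pair-level) density functional equation of freezing and
% the Fourier expansion of the crystal singlet density; conventions vary.
\begin{align}
  \ln\!\frac{\rho(\mathbf{r})}{\rho_\ell}
    &= \int c^{(2)}\!\left(|\mathbf{r}-\mathbf{r}'|\right)
       \left[\rho(\mathbf{r}') - \rho_\ell\right] \mathrm{d}^{3}r'
       \;+\; \mathcal{O}\!\left(c^{(3)}\right), \\
  \rho(\mathbf{r})
    &= \rho_\ell \Big( 1 + \eta + \sum_{\mathbf{G}\neq 0} \mu_{\mathbf{G}}\,
       e^{\,i\mathbf{G}\cdot\mathbf{r}} \Big)
\end{align}
```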
Abstract:
This study examines both theoretically and empirically how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations. This analysis shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations. Thus the study challenges the common assumption that literary interpretations tend to be idiosyncratic. The empirical data consist of freely worded written answers to questions on three short stories. The interpretations were made by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts; others referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera. According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy. People who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else. Therefore it is very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; therefore they do not seem to create the formal features of the text in different ways. Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text. Iser believes that the text guides the reader, but as he also believes that meaning is on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations, and the fact that agreement is strongest on issues that are discussed broadly in the text, do, however, support his assumption that readers are guided by the text. In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system. The situation in which a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions have in language use. The readers' responses show that spontaneous associations to personal life seem to colour the readers' interpretations, but such instances are rather rare. According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way. Words mean different things to different people. The problem with this view is that if there is any basis for language use, it seems to be the shared way of following linguistic rules.
Wittgenstein suggests that our understanding of words is related to the shared ways of using words and our understanding of human behaviour. This view seems to give better grounds for understanding similarity and differences in literary interpretations than the theories of Holland, Bleich, Fish and Iser.
Abstract:
Mould growth in field crops or stored grain reduces starch and lipid content, with consequent increases in fibre and an overall reduction in digestible energy; palatability is often adversely affected. If these factors are allowed for, and mycotoxin concentrations are low, there are sound economic reasons for using this cheaper grain. Mycotoxins are common in stock feed but their effects on animal productivity are usually slight because either the concentration is too low or the animal is tolerant to the toxin. In Australia, aflatoxins occur in peanut by-products and in maize and sorghum if the grain is moist when stored. Zearalenone is found in maize, and in sorghum and wheat in wetter regions. Nivalenol and deoxynivalenol are found in maize and wheat but at concentrations that rarely affect pigs, with chickens and cattle being even more tolerant. Other mycotoxins, including cyclopiazonic acid, T-2 toxin, cytochalasins and tenuazonic acid, are produced by Australian fungi in culture but are not found to be significant grain contaminants. Extremely mouldy sorghum containing Alternaria and Fusarium mycotoxins decreased feed conversion in pigs and chickens by up to 14%. However, F. moniliforme- and Diplodia maydis-infected maize produced only slight reductions in feed intake by pigs, and Ustilago-infected barley produced no ill effects. Use of these grains would substantially increase profits if the grain can be purchased cheaply.
Abstract:
The development of innovative methods of stock assessment is a priority for State and Commonwealth fisheries agencies. It is driven by the need to facilitate sustainable exploitation of naturally occurring fisheries resources for the current and future economic, social and environmental well-being of Australia. This project was initiated in this context and took advantage of considerable recent achievements in genomics that are shaping our comprehension of the DNA of humans and animals. The basic idea behind this project was that genetic estimates of effective population size, which can be made from empirical measurements of genetic drift, were equivalent to estimates of the number of successful spawners, an important parameter in the process of fisheries stock assessment. The broad objectives of this study were to:
1. critically evaluate a variety of mathematical methods of calculating effective spawner numbers (Ne) by (a) conducting comprehensive computer simulations and (b) analysing empirical data collected from the Moreton Bay population of tiger prawns (P. esculentus);
2. lay the groundwork for the application of the technology in the northern prawn fishery (NPF); and
3. produce software for the calculation of Ne and make it widely available.
The project pulled together a range of mathematical models for estimating current effective population size from diverse sources. Some of them had recently been implemented with the latest statistical methods (e.g. the Bayesian framework of Berthier, Beaumont et al. 2002), while others had lower profiles (e.g. Pudovkin, Zaykin et al. 1996; Rousset and Raymond 1995). Computer code, and later software with a user-friendly interface (NeEstimator), was produced to implement the methods. This was used as the basis for simulation experiments that evaluated the performance of the methods with an individual-based model of a prawn population. Following the guidelines suggested by the computer simulations, the tiger prawn population in Moreton Bay (south-east Queensland) was sampled for genetic analysis with eight microsatellite loci in three successive spring spawning seasons in 2001, 2002 and 2003. As predicted by the simulations, the estimates had non-infinite upper confidence limits, which is a major achievement for the application of the method to a naturally occurring, short-generation, highly fecund invertebrate species. The genetic estimate of the number of successful spawners was around 1000 individuals in two consecutive years. This contrasts with about 500,000 prawns participating in spawning. It is not possible to distinguish successful from non-successful spawners, so we suggest a high level of protection for the entire spawning population. We interpret the difference in numbers between successful and non-successful spawners as reflecting a large variation in the number of surviving offspring per family: a large number of families have no surviving offspring, while a few have many. We explored various ways in which Ne can be useful in fisheries management. It can be a surrogate for spawning population size, assuming the ratio between Ne and spawning population size has previously been calculated for that species. Alternatively, it can be a surrogate for recruitment, again assuming that the ratio between Ne and recruitment has previously been determined. The number of species that can be analysed in this way, however, is likely to be small because of species-specific life history requirements that need to be satisfied for accuracy.
The most universal approach would be to integrate Ne with spawning stock-recruitment models, so that these models are more accurate when applied to fisheries populations. A pathway to achieve this was established in this project, which we predict will significantly improve fisheries sustainability in the future. Regardless of the success of integrating Ne into spawning stock-recruitment models, Ne could be used as a fisheries monitoring tool. Declines in spawning stock size or increases in natural or harvest mortality would be reflected by a decline in Ne. This would be valuable for data-poor fisheries and would provide fishery-independent information; however, we suggest a species-by-species approach, since some species may be too numerous or experiencing too much migration for the method to work. During the project, two important theoretical studies of the simultaneous estimation of effective population size and migration were published (Vitalis and Couvet 2001b; Wang and Whitlock 2003). These methods, combined with the collection of preliminary genetic data from the tiger prawn population in the southern Gulf of Carpentaria and a computer simulation study that evaluated the effect of differing reproductive strategies on genetic estimates, suggest that this technology could make an important contribution to the stock assessment process in the northern prawn fishery (NPF). Advances in the genomics world are rapid, and a cheaper, more reliable substitute for microsatellite loci in this technology is already available. Digital data from single nucleotide polymorphisms (SNPs) are likely to supersede 'analogue' microsatellite data, making it cheaper and easier to apply the method to species with large population sizes.
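As a hedged illustration of how effective population size can be estimated from measured genetic drift between two temporally spaced samples, the sketch below implements the standard temporal method (Nei and Tajima's standardized variance Fc with Waples' sampling correction) in Python. It is a generic textbook version under sampling plan II, not the NeEstimator code, and the example allele frequencies are hypothetical.

```python
# Hedged sketch: temporal-method estimate of effective population size (Ne)
# from allele-frequency change between two samples, using the Nei & Tajima
# (1981) standardized variance Fc and the Waples (1989) sampling correction
# (sampling plan II). Generic illustration, NOT the NeEstimator implementation;
# all input numbers below are hypothetical.

def temporal_ne(freqs_t0, freqs_t1, s0, s1, generations):
    """Estimate Ne from two sets of allele frequencies at one locus.

    freqs_t0, freqs_t1 : allele frequencies (each summing to 1) at times 0 and t
    s0, s1             : number of individuals sampled at each time
    generations        : number of generations between the two samples
    """
    # Nei & Tajima standardized variance of allele-frequency change, Fc
    fc_terms = []
    for x, y in zip(freqs_t0, freqs_t1):
        denom = (x + y) / 2.0 - x * y
        if denom > 0:
            fc_terms.append((x - y) ** 2 / denom)
    fc = sum(fc_terms) / len(fc_terms)

    # Waples (1989) correction removes the expected sampling contribution
    drift = fc - 1.0 / (2 * s0) - 1.0 / (2 * s1)
    if drift <= 0:
        return float("inf")  # drift signal swamped by sampling error
    return generations / (2.0 * drift)


if __name__ == "__main__":
    # Hypothetical 4-allele microsatellite locus sampled two generations apart
    p0 = [0.40, 0.30, 0.20, 0.10]
    p1 = [0.35, 0.33, 0.22, 0.10]
    print(temporal_ne(p0, p1, s0=100, s1=100, generations=2))
```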