943 results for Conformal field models in string theory
Abstract:
This paper examines the causal links between fertility and female labor force participation in Bangladesh over the period 1974-2000 by specifying a bivariate and several trivariate models in a vector error correction framework. The three trivariate models alternatively include average age at first marriage for females, per capita GDP and infant mortality rate, which control for the effects of other socio-economic factors on fertility and female labor force participation. All the specified models indicate an inverse long-run relationship between fertility and female labor force participation. While the bivariate model also indicates bidirectional causality, the multivariate models confirm only a unidirectional causality – from labor force participation to fertility. Further, per capita GDP and infant mortality rate appear to Granger-cause both fertility and female labor force participation.
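As a rough illustration of the kind of analysis this abstract describes, the sketch below sets up a trivariate vector error correction model and a Granger-causality test in Python with statsmodels. The file name and column names are hypothetical placeholders, not the paper's data, and with only 27 annual observations any such estimates would be fragile.

    import pandas as pd
    from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

    # Hypothetical annual series for 1974-2000: fertility, female labor force
    # participation, and one control (per capita GDP), as in one of the
    # trivariate systems the paper specifies.
    data = pd.read_csv("bangladesh_series.csv")[["fertility", "flfp", "gdp_pc"]]

    # Pick the cointegration rank with the Johansen trace test, then fit the VECM.
    rank = select_coint_rank(data, det_order=0, k_ar_diff=1, signif=0.05).rank
    res = VECM(data, k_ar_diff=1, coint_rank=rank, deterministic="co").fit()

    # Does labor force participation Granger-cause fertility (the direction the
    # multivariate models confirm)?
    print(res.test_granger_causality(caused="fertility", causing="flfp").summary())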
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, both on the grounds of frequency response and mutual filter orthogonality.
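One of the frequency-response arguments alluded to above can be made concrete in a few lines: a Gabor filter's spectrum is a pair of Gaussian lobes and therefore never vanishes at zero frequency, while a Cauchy-type response does. The sketch below uses an illustrative power-law form u^a * exp(-b*u) for the Cauchy side; the parameter values are arbitrary stand-ins, not the paper's parameterisation.

    import numpy as np

    u = np.linspace(0, 6, 601)          # spatial frequency axis
    sigma, f0 = 0.5, 0.5                # broadband Gabor: envelope width, peak frequency

    # Spectrum of a cosine Gabor: two Gaussian lobes centred at +/- f0.
    gabor = 0.5 * (np.exp(-2 * (np.pi * sigma * (u - f0))**2)
                   + np.exp(-2 * (np.pi * sigma * (u + f0))**2))

    # Illustrative Cauchy-type spectrum: zero at DC by construction.
    a, b = 3.0, 2.0
    cauchy = u**a * np.exp(-b * u)
    cauchy /= cauchy.max()

    print("Gabor response at DC: %.3f" % gabor[0])        # non-zero DC leakage
    print("Cauchy-type response at DC: %.3f" % cauchy[0]) # exactly zero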
Abstract:
A model is introduced for two reduced BCS systems which are coupled through the transfer of Cooper pairs between the systems. The model may thus be used in the analysis of the Josephson effect arising from pair tunneling between two strongly coupled small metallic grains. At a particular coupling strength the model is integrable and explicit results are derived for the energy spectrum, conserved operators, integrals of motion, and wave function scalar products. It is also shown that form factors can be obtained for the calculation of correlation functions. Furthermore, a connection with perturbed conformal field theory is made.
Abstract:
Today, the standard approach for the kinetic analysis of dynamic PET studies is compartment models, in which the tracer and its metabolites are confined to a few well-mixed compartments. We examine whether the standard model is suitable for modern PET data or whether theories including more physiologic realism can advance the interpretation of dynamic PET data. A more detailed microvascular theory is developed for intravascular tracers in single-capillary and multiple-capillary systems. The microvascular models, which account for concentration gradients in capillaries, are validated and compared with the standard model in a pig liver study. Methods: Eight pigs underwent a 5-min dynamic PET study after O-15-carbon monoxide inhalation. Throughout each experiment, hepatic arterial blood and portal venous blood were sampled, and flow was measured with transit-time flow meters. The hepatic dual-inlet concentration was calculated as the flow-weighted inlet concentration. Dynamic PET data were analyzed with a traditional single-compartment model and 2 microvascular models. Results: Microvascular models provided a better fit of the tissue activity of an intravascular tracer than did the compartment model. In particular, the early dynamic phase after a tracer bolus injection was much improved. The regional hepatic blood flow estimates provided by the microvascular models (1.3 +/- 0.3 mL min^-1 mL^-1 for the single-capillary model and 1.14 +/- 0.14 mL min^-1 mL^-1 for the multiple-capillary model; mean +/- SEM, mL of blood min^-1 mL of liver tissue^-1) were in agreement with the total blood flow measured by flow meters and normalized to liver weight (1.03 +/- 0.12 mL min^-1 mL^-1). Conclusion: Compared with the standard compartment model, the 2 microvascular models provide a superior description of tissue activity after an intravascular tracer bolus injection. The microvascular models include only parameters with a clear-cut physiologic interpretation and are applicable to capillary beds in any organ. In this study, the microvascular models were validated for the liver and provided quantitative regional flow estimates in agreement with flow measurements.
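For orientation, the "standard model" referred to here is the one-tissue compartment model, which in its simplest flow-tracer form reads dC/dt = F*Cin(t) - (F/V)*C(t). The sketch below fits that model to a synthetic time-activity curve; the input function, frame times, and true parameter values are invented for illustration, and the paper's microvascular models are not implemented.

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit

    t = np.linspace(0, 5, 61)                     # 5-min scan (minutes), hypothetical framing
    cin = np.exp(-((t - 0.5) / 0.25)**2)          # stand-in dual-inlet input function

    def tissue_curve(t, flow, vdist):
        # One well-mixed compartment: dC/dt = flow*Cin(t) - (flow/vdist)*C
        def rhs(c, ti):
            return flow * np.interp(ti, t, cin) - (flow / vdist) * c
        return odeint(rhs, 0.0, t).ravel()

    np.random.seed(0)
    measured = tissue_curve(t, 1.0, 0.25) + np.random.normal(0, 0.01, t.size)
    (flow_fit, vdist_fit), _ = curve_fit(tissue_curve, t, measured, p0=[0.5, 0.2])
    print("fitted flow: %.2f mL min^-1 mL^-1, Vd: %.2f" % (flow_fit, vdist_fit))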
Abstract:
Incursions of Japanese encephalitis (JE) virus into northern Queensland are currently monitored using sentinel pigs. However, the maintenance of these pigs is expensive, and because pigs are the major amplifying hosts of the virus, they may contribute to JE transmission. Therefore, we evaluated a mosquito-based detection system to potentially replace the sentinel pigs. Single, inactivated JE-infected Culex annulirostris Skuse and C. sitiens Wiedemann were placed into pools of uninfected mosquitoes that were housed in a Mosquito Magnet Pro (MM) trap set under wet season field conditions in Cairns, Queensland, for 0, 7, or 14 d. JE viral RNA was detected (cycling threshold [CT] = 40) in 11/12, 10/14, and 2/5 pools containing 200, 1,000, and 5,000 mosquitoes, respectively, using a TaqMan real-time reverse transcription-polymerase chain reaction (RT-PCR). The ability to detect virus was not affected by the length of time pools were maintained under field conditions, although the CT score tended to increase with field exposure time. Furthermore, JE viral RNA was detected in three pools of 1,000 mosquitoes collected from Badu Island using a MM trap. These results indicated that a mosquito trap system employing self-powered traps, such as the Mosquito Magnet, and a real-time PCR system could be used to monitor for JE in remote areas.
Abstract:
Since collaborative networked organisations are usually formed by independent and heterogeneous entities, it is natural that each member holds its own set of values, and that conflicts among partners might emerge because of some misalignment of values. At the same time, it is often stated in the literature that the alignment between the value systems of members involved in collaborative processes is a prerequisite for successful co-working. As a result, the issue of core value alignment in collaborative networks has started to attract attention. However, methods to analyse such alignment are lacking, mainly because the concept of 'alignment' in this context is still ill-defined and shows a multifaceted nature. As a contribution to the area, this article introduces an approach based on causal models and graph theory for the analysis of core value alignment in collaborative networks. The potential application of the approach is then discussed in the context of virtual organisations' breeding environments.
Abstract:
The idea of grand unification in a minimal supersymmetric SU(5) x SU(5) framework is revisited. It is shown that the unification of gauge couplings into a unique coupling constant can be achieved at a high-energy scale compatible with proton decay constraints. This requires the addition of minimal particle content at intermediate energy scales. In particular, the introduction of the SU(2)_L triplets belonging to the (15, 1) + (15-bar, 1) representations, as well as of the scalar triplet Sigma_3 and octet Sigma_8 in the (24, 1) representation, turns out to be crucial for unification. The masses of these intermediate particles can vary over a wide range, and even lie in the TeV region. In contrast, the exotic vector-like fermions must be heavy enough and have masses above 10^10 GeV. We also show that, if the SU(5) x SU(5) theory is embedded into a heterotic string scenario, it is not possible to achieve gauge coupling unification with gravity at the perturbative string scale.
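The unification statement rests on the standard one-loop renormalization group running, alpha_i^-1(mu) = alpha_i^-1(M_Z) - (b_i/(2*pi))*ln(mu/M_Z). The sketch below runs the three couplings with textbook MSSM coefficients from M_Z upward; it ignores the intermediate-scale thresholds (triplets, octets, vector-like fermions) that are the actual subject of the paper, so it only shows the mechanics of such an analysis.

    import numpy as np

    alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # approx. U(1)_Y (GUT norm), SU(2)_L, SU(3)_c at M_Z
    b = np.array([33.0 / 5.0, 1.0, -3.0])        # one-loop MSSM beta coefficients
    mz = 91.19                                   # GeV

    def alpha_inv(mu_gev):
        return alpha_inv_mz - b / (2 * np.pi) * np.log(mu_gev / mz)

    for mu in (1e3, 1e10, 2e16):                 # TeV, intermediate, GUT scale
        print("mu = %.0e GeV -> alpha_i^-1 =" % mu, np.round(alpha_inv(mu), 1))

Running to 2e16 GeV, the three inverse couplings meet near 24, the textbook MSSM unification; intermediate particle content of the kind the abstract describes shifts the b_i piecewise between thresholds.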
Abstract:
Considering that the modern European high-speed railway system uses a 25 kV, 50 Hz traction power system, which causes electromagnetic emissions to the outside world, it is important to characterise the railway system's emissions using a frequency- and distance-dependent propagation model. This paper presents an enhanced theoretical model for VLF-to-UHF propagation, oriented to railway systems. It introduces a near-field approach (crucial in low-frequency propagation) and also considers the source characteristics and the type of measuring antenna. Simulations are presented, and comparisons are made with earlier far-field models. Using the developed model, a real case study was performed in partnership with Refer Telecom (the Portuguese telecom operator for railways). The new propagation model was used to predict the future high-speed railway electromagnetic emissions on the Lisbon north track. The results show the model's prediction capabilities and its applicability to realistic scenarios.
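The near-field point can be made concrete with the textbook short-dipole field, whose components fall off as 1/(kr), 1/(kr)^2 and 1/(kr)^3. The sketch below compares their relative sizes at VLF and at UHF for a fixed observer distance; it is a generic illustration of why far-field-only models fail at low frequency, not the paper's railway-specific model.

    import numpy as np

    C0 = 3e8                                     # speed of light, m/s

    def dipole_terms(f_hz, r_m):
        kr = 2 * np.pi * f_hz / C0 * r_m
        # radiating (1/kr), induction (1/kr^2) and quasi-static (1/kr^3) terms
        return 1 / kr, 1 / kr**2, 1 / kr**3

    for f in (10e3, 500e6):                      # VLF vs UHF
        far, ind, near = dipole_terms(f, 100.0)  # observer 100 m from the track
        print("f = %.0e Hz: far %.1e, induction %.1e, near %.1e" % (f, far, ind, near))

At 10 kHz the quasi-static term dominates by orders of magnitude at 100 m, while at 500 MHz only the radiating term matters, which is why a far-field model alone misestimates low-frequency emissions.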
Abstract:
Dissertation presented to obtain the degree of Doctor in Electrical and Computer Engineering – Digital and Perceptional Systems, at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
The currently used pre-exposure anti-rabies immunization schedule in Brazil is the one called 3+1, employing suckling mouse brain vaccine (3 doses on alternate days and the last one on day 30). Although satisfactory results were obtained in well-controlled experimental groups using this immunization schedule, in our routine practice VNA levels lower than 0.5 IU/ml are frequently found. We studied the pre-exposure 3+1 schedule under field conditions in different cities in the State of São Paulo, Brazil, under variable and sometimes adverse circumstances, such as the use of different batches of vaccine with different titers, delivered, stored and administered under local conditions. Fifty out of 256 serum samples (19.5%) showed VNA titers lower than 0.5 IU/ml, but these were not distributed homogeneously among the localities studied. While in some cities the results were completely satisfactory, in others almost 40% did not attain the minimum VNA titer required. The results presented here, considered separately, call into question our currently used procedures for human pre-exposure anti-rabies immunization. The reasons determining this situation are discussed.
Abstract:
Fractional Calculus (FC) goes back to the beginning of the theory of differential calculus. Nevertheless, the application of FC emerged only in the last two decades, due to progress in the area of chaos, which revealed subtle relationships with FC concepts. In the field of dynamical systems theory, some work has been carried out, but the proposed models and algorithms are still at a preliminary stage of establishment. With these ideas in mind, the paper discusses an FC perspective in the study of the dynamics and control of some distributed parameter systems.
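To make the FC perspective concrete, the sketch below evaluates a fractional derivative numerically via the Grünwald-Letnikov definition, D^alpha f(t) ~ h^(-alpha) * sum_k w_k f(t - k*h), with w_0 = 1 and w_k = w_(k-1)*(k - 1 - alpha)/k. The test signal and order are arbitrary; this is the generic building block of FC-based models, not a model from the paper.

    import numpy as np

    def gl_derivative(f, alpha, h):
        # Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), by recurrence.
        n = len(f)
        w = np.ones(n)
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        return np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)]) / h**alpha

    h = 0.01
    t = np.arange(0, 2, h)
    d_half = gl_derivative(t, 0.5, h)            # half-derivative of f(t) = t
    # Analytic result: D^0.5 t = 2*sqrt(t/pi); compare at t = 1.
    print("numeric %.4f vs analytic %.4f" % (d_half[100], 2 * np.sqrt(1 / np.pi)))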
Abstract:
Natural disasters are events that cause general and widespread destruction of the built environment and are becoming increasingly recurrent. They are a product of vulnerability and community exposure to natural hazards, generating a multitude of social, economic and cultural issues, of which the loss of housing and the subsequent need for shelter is one of the major consequences. Nowadays, numerous factors contribute to increased vulnerability and exposure to natural disasters, such as climate change, whose impacts are felt across the globe and which is currently seen as a worldwide threat to the built environment. The abandonment of disaster-affected areas can also push populations to regions where natural hazards are felt more severely. Although several actors in the post-disaster scenario provide for shelter needs and recovery programs, housing is often inadequate and unable to resist the effects of future natural hazards. Resilient housing is commonly not addressed due to the urgency of sheltering affected populations. However, by neglecting risks of exposure in construction, houses become vulnerable and are likely to be damaged or destroyed in future natural hazard events. That being said, it becomes fundamental to include resilience criteria in housing, which in turn will allow new houses to better withstand the passage of time and natural disasters, in the safest way possible. This master's thesis is intended to provide guiding principles for housing recovery after natural disasters, particularly in the form of flood-resilient construction, considering that floods are responsible for the largest number of natural disasters. To this purpose, the main structures that house affected populations were identified and analyzed in depth. After assessing the risks and damages that flood events can cause to housing, a methodology was proposed for flood-resilient housing models, in which key criteria that housing should meet were identified. The methodology is based on the US Federal Emergency Management Agency's requirements and recommendations for specific flood zones. Finally, a case study in the Maldives – one of the countries most vulnerable to sea level rise resulting from climate change – was analyzed in light of housing recovery in a post-disaster scenario. The analysis was carried out using the proposed methodology, with the intent of assessing the resilience to floods of the housing newly built in the aftermath of the 2004 Indian Ocean Tsunami.
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against – it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
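The Chapter 2 mechanism in the Cournot setting can be illustrated with a few lines of arithmetic: each firm either pays the thinking cost and best-responds, or produces a default quantity for free. In the sketch below, demand is P = a - (q1 + q2) with zero marginal cost, and the default quantity (half the monopoly output) and the cost grid are hypothetical choices for illustration, not the thesis's specification.

    a = 10.0                      # demand intercept
    q_default = a / 4.0           # hypothetical "natural" default quantity

    def best_response(q_other):
        return (a - q_other) / 2.0

    def profit(q_own, q_other):
        return (a - q_own - q_other) * q_own

    for cost in (0.1, 1.0, 3.0):
        # Candidate equilibrium: firm 1 thinks and best-responds to the default,
        # firm 2 keeps the default. Check that neither firm wants to deviate.
        q1 = best_response(q_default)
        stay_2 = profit(best_response(q1), q1) - profit(q_default, q1) <= cost
        think_1 = profit(q1, q_default) - cost >= profit(q_default, q_default)
        print("cost %.1f -> asymmetric equilibrium: %s" % (cost, stay_2 and think_1))

Only the intermediate cost sustains the asymmetric outcome: with a low cost the defaulting firm would rather think, and with a high cost the thinking firm would rather default, matching the comparative statics described in the abstract.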
Abstract:
Doctoral Programme in Molecular and Environmental Biology
Abstract:
We present a solution to the problem of defining a counterpart in Algebraic Set Theory of the construction of internal sheaves in Topos Theory. Our approach is general in that we consider sheaves as determined by Lawvere-Tierney coverages, rather than by Grothendieck coverages, and assume only a weakening of the axioms for small maps originally introduced by Joyal and Moerdijk, thus subsuming the existing topos-theoretic results.
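For reference, the Lawvere-Tierney notion of coverage used here is the standard one: on a topos with subobject classifier Omega, a Lawvere-Tierney topology is a morphism j from Omega to Omega satisfying, in the usual notation,

    j \circ \mathrm{true} = \mathrm{true}, \qquad
    j \circ j = j, \qquad
    j \circ \wedge = \wedge \circ (j \times j),

and a j-sheaf is an object for which restriction along every j-dense monomorphism induces a bijection of hom-sets. The abstract's contribution is to carry this construction over to Algebraic Set Theory under weakened small-maps axioms.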