966 results for Existence of solution


Relevance:

100.00%

Abstract:

This thesis is concerned with exact solutions of Einstein's field equations of general relativity, in particular when the source of the gravitational field is a perfect fluid with a purely electric Weyl tensor. General relativity, cosmology and computer algebra are discussed briefly. A mathematical introduction to Riemannian geometry and the tetrad formalism is then given. This is followed by a review of some previous results and known solutions concerning purely electric perfect fluids. In addition, some orthonormal and null tetrad equations of the Ricci and Bianchi identities are displayed in a form suitable for investigating these space-times. Conformally flat perfect fluids are characterised by the vanishing of the Weyl tensor and form a sub-class of the purely electric fields in which all solutions are known (Stephani 1967). The number of Killing vectors in these space-times is investigated and results are presented for the non-expanding space-times. The existence of stationary fields that may also admit 0, 1 or 3 spacelike Killing vectors is demonstrated. Shear-free fluids in the class under consideration are shown to be either non-expanding or irrotational (Collins 1984) using both orthonormal and null tetrads. A discrepancy between Collins (1984) and Wolf (1986) is resolved by explicitly solving the field equations to prove that the only purely electric, shear-free, geodesic but rotating perfect fluid is the Gödel (1949) solution. The irrotational fluids with shear are then studied and solutions due to Szafron (1977) and Allnutt (1982) are characterised. The metric is simplified in several cases where new solutions may be found. The geodesic space-times in this class and all Bianchi type I perfect fluid metrics are shown to have a metric expressible in diagonal form. The position of spherically symmetric and Bianchi type I space-times in relation to the general case is also illustrated.
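For orientation, the standard objects referred to above are (textbook definitions; the signature and sign conventions used in the thesis itself may differ):

    G_{ab} = 8\pi T_{ab}, \qquad T_{ab} = (\mu + p)\, u_a u_b + p\, g_{ab},

for a perfect fluid of energy density \mu, pressure p and 4-velocity u^a (units with c = G = 1 and signature (-,+,+,+) assumed here), and

    E_{ac} = C_{abcd}\, u^b u^d, \qquad H_{ac} = {}^{*}C_{abcd}\, u^b u^d,

for the electric and magnetic parts of the Weyl tensor C_{abcd} relative to u^a. "Purely electric" means H_{ab} = 0, while conformal flatness is the stronger condition C_{abcd} = 0.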

Relevance:

100.00%

Abstract:

Since the transfer of a message between two cultures very frequently takes place through the medium of a written text qua communicative event, it would seem useful to attempt to ascertain whether there is any kind of pattern in the use of strategies for the effective interlingual transfer of this message. Awareness of potentially successful strategies, within the constraints of context, text type, intended target-language (TL) function and TL reader profile, will enhance quality and cost-effectiveness (time, effort, financial costs) in the production of the target text. Through contrastive analysis of pairs of advertising texts, source language (SL) and TL, French and English, this study attempts to identify the nature of some recurring choices made by different translators in the attempt to recreate source-text information in the TL in such a manner as to reproduce as closely as possible the informative, persuasive and affective functions of the text as advertising material. Whilst recurrence may be seen to be significant in illustrating tendencies in the solution of translation problems, it would not necessarily be taken as confirmation of the existence of pre-determined or prescriptive rules. These tendencies could, however, be taken as a guide to potential solutions to certain kinds of context-bound and text-type-specific problems. Analysis of translated text pairs taken from the field of advertising should produce examples of the constraints posed by the need to select the content, tone and form of the target text in order to ensure maximum efficacy of persuasive effect and the desired outcome, as determined by the source-text function. When evaluating the success of a translated advertising text, constraints could be defined in terms of the culture-specific references or assumptions on which a source text may build in order to achieve its intended communicative function within the target community.

Relevance:

100.00%

Abstract:

Static mechanical properties of a 2124 Al/SiCp metal matrix composite (MMC) have been measured as a function of solution treatment temperature and time. An optimum solution treatment has been established which produces significant improvements in static mechanical properties and fatigue crack growth resistance over conventional solution treatments. Increasing the solution treatment parameters up to the optimum values improves the mechanical properties because of intermetallic dissolution, improved solute and GPB zone strengthening, and increased matrix dislocation density. Increasing the solution treatment parameters beyond the optimum values results in a rapid reduction in mechanical properties due to the formation of gas porosity and surface blisters. The optimum solution treatment improves tensile properties in the transverse orientation to a greater extent than in the longitudinal orientation, and this results in reduced anisotropy. © 1996 Elsevier Science Limited.

Relevance:

100.00%

Abstract:

Mathematics Subject Classification 2010: 35M10, 35R11, 26A33, 33C05, 33E12, 33C20.

Relevance:

100.00%

Abstract:

2000 Mathematics Subject Classification: 47H04, 65K10.

Relevance:

100.00%

Abstract:

Georgi Venkov, Hristo Genev - We consider a class of L^2-critical nonlinear Schrödinger equations in R^(1+n) with a convolution nonlinearity of Hartree type. Our aim is to establish the local and global existence of solutions, as well as the well-posedness of the Cauchy problem in a sufficiently small neighbourhood of the origin in the space L^2(R^n). As a natural consequence of the global results, we prove the existence of a scattering operator for small initial data.
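For orientation, the L^2-critical Hartree-type equation referred to above usually takes the form (a standard formulation; constants and sign conventions in the paper may differ):

    i\,\partial_t u + \Delta u = \lambda\,\big(|x|^{-2} * |u|^2\big)\,u, \qquad u(0,\cdot) = u_0 \in L^2(\mathbb{R}^n),

where * denotes convolution in the space variable. The exponent 2 is precisely the one for which the rescaling u_\lambda(t,x) = \lambda^{n/2} u(\lambda^2 t, \lambda x) leaves both the equation and the L^2(\mathbb{R}^n) norm invariant, which is what "L^2-critical" means.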

Relevance:

100.00%

Abstract:

2010 Mathematics Subject Classification: 35J65, 35K60, 35B05, 35R05.

Relevance:

100.00%

Abstract:

2000 Mathematics Subject Classification: 49L20, 60J60, 93E20

Relevance:

100.00%

Abstract:

Our paper deals with the paradoxical phenomenon that, in the equilibrium solutions of the Neumann model with explicitly represented consumption, the prices of the subsistence goods that determine the wage can in some cases be zero, so that the equilibrium value of the real wage is also zero. This phenomenon always occurs in decomposable economies in which alternative equilibrium solutions with different growth and profit rates exist. The phenomenon can be discussed in a much more transparent form in a simpler variant of the model built on Leontief technology, which we exploit. We show that the solutions whose growth factor is below the maximal one are economically meaningless and therefore of no interest. In doing so we show, on the one hand, that Neumann's excellent intuition worked well when he insisted on the uniqueness of the solution of his model, and on the other hand, that this does not require any assumption about the decomposability of the economy. The topic examined is closely related to Ricardo's analysis of the determination of the general rate of profit, cast in modern form by Sraffa, and to the well-known wage-profit and accumulation-consumption trade-off frontiers of neoclassical growth theory, which indicates the theoretical and history-of-thought interest of the subject. / === / In the Marx-Neumann version of the Neumann model introduced by Morishima, the use of commodities is split between production and consumption, and wages are determined as the cost of necessary consumption. In such a version it may occur that the equilibrium prices of all goods necessary for consumption are zero, so that the equilibrium wage rate becomes zero too. In fact, such a paradoxical case will always arise when the economy is decomposable and the equilibrium is not unique in terms of growth and interest rate: a zero equilibrium wage rate appears in all equilibrium solutions where the growth and interest rate are less than maximal. This is another proof of Neumann's genius and intuition, for he arrived at the uniqueness of equilibrium via an assumption that implied that the economy was indecomposable, a condition relaxed later by Kemeny, Morgenstern and Thompson. The same situation occurs in similar models based on Leontief technology, and such versions of the Marx-Neumann model make the roots of the problem more apparent. Their analysis also yields an interesting corollary to Ricardo's corn rate of profit: the real cause of the awkwardness is bad specification of the model, in which luxury commodities are introduced without there being a final demand for them, so that their production becomes a waste of resources. Bad model specification shows up as a consumption coefficient incompatible with the given technology in the more general model with joint production and technological choice, for the paradoxical situation implies that the level of consumption could be raised and/or the intensity of labour diminished without lowering the equilibrium rate of growth and interest. This entails wasteful use of resources and indicates again that the equilibrium conditions are improperly specified. It is shown that the conditions for equilibrium can and should be redefined for the Marx-Neumann model without assuming an indecomposable economy, in a way that ensures the existence of an equilibrium unique in terms of the growth and interest rate coupled with a positive value for the wage rate, so confirming Neumann's intuition. The proposed solution relates closely to findings of Bromek in a paper correcting Morishima's generalization of the wage-profit and consumption-investment frontiers.
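For reference, in the classical von Neumann model (without the explicit consumption of the Marx-Neumann variant analysed above), an equilibrium is an intensity vector x ≥ 0, a price vector p ≥ 0, a growth factor α and an interest factor β such that

    B x \ge \alpha A x, \qquad p B \le \beta\, p A, \qquad p\,(B - \alpha A)\,x = 0, \qquad p\,(B - \beta A)\,x = 0, \qquad p B x > 0,

where A and B are the input and output matrices; the complementary-slackness conditions say that overproduced goods receive zero prices and loss-making processes are not operated, the last condition is the Kemeny-Morgenstern-Thompson requirement that replaces Neumann's original assumption A + B > 0, and in equilibrium α = β. The paradox discussed above concerns the consumption-augmented version of these conditions in solutions where this common value is below its maximum.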

Relevance:

100.00%

Abstract:

Bio-molecular interactions exist ubiquitously in all biological systems. The goal of this dissertation project was to construct a powerful surface plasmon resonance (SPR) sensor. An SPR system is used to study bio-molecular interactions in real time and without labeling. A surface plasmon is an oscillation of the free electrons in a metal coupled with surface electromagnetic waves. These surface electromagnetic waves provide a sensitive probe for studying bio-molecular interactions on metal surfaces. This project resulted in the successful construction and optimization of a home-made SPR sensor and the development of several new powerful protocols for studying bio-molecular interactions. It was discovered through this project that the limitations of earlier SPR sensors are related not only to the instrumentation design and operating procedures, but also to the complex behaviors of bio-molecules on sensor surfaces, which are very different from those in solution. Based on these discoveries, the instrumentation design and operating procedures were fully optimized. A set of existing sensor-surface treatment protocols was tested and evaluated, and new protocols were developed in this project. The new protocols have demonstrated excellent performance in studying bio-molecular interactions. The optimized home-made SPR sensor was used to study protein-surface interactions. These protein-surface interactions are responsible for many complex organic cell activities. The co-existence of different driving forces and their correlation with the structure of the protein and the surface make understanding the fundamental mechanism of protein-surface interactions a very challenging task. Using the improved SPR sensor, the electrostatic interaction and the hydrophobic interaction were studied separately. The results of this project directly confirmed the theoretical predictions for the electrostatic force between the protein and the surface. In addition, this project demonstrated that the strength of the protein-surface hydrophobic interaction does not depend solely on hydrophobicity, as earlier reports had suggested; surface structure also plays a significant role.
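For context, the resonance condition such sensors exploit in the common Kretschmann prism configuration is (textbook form, not specific to the instrument described here):

    k_{\mathrm{spp}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}}, \qquad \frac{\omega}{c}\, n_p \sin\theta_{\mathrm{res}} = \operatorname{Re}\, k_{\mathrm{spp}},

where ε_m and ε_d are the permittivities of the metal film and of the dielectric sample medium, n_p is the refractive index of the coupling prism, and θ_res is the incidence angle at which the reflectivity dip occurs. Binding of bio-molecules changes ε_d within the evanescent field near the surface and thereby shifts θ_res (or the resonance wavelength), which is the quantity the sensor tracks in real time.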

Relevance:

100.00%

Abstract:

Until recently, the use of biometrics was restricted to high-security environments and criminal identification applications, for economic and technological reasons. In recent years, however, biometric authentication has become part of people's daily lives. The large-scale use of biometrics has shown that users within a system may exhibit different degrees of accuracy: some people may have trouble authenticating, while others may be particularly vulnerable to imitation. Recent studies have investigated and identified these types of users, giving them the names of animals: Sheep, Goats, Lambs, Wolves, Doves, Chameleons, Worms and Phantoms. The aim of this study is to evaluate the existence of these user types in a fingerprint database and to propose a new way of investigating them, based on verification performance between subjects' samples. After introducing some basic concepts of biometrics and fingerprints, we present the biometric menagerie and how to evaluate it.
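As an illustration of the kind of per-user score analysis involved, the following is a minimal sketch of Yager and Dunstone's grouping of users by their genuine and impostor score statistics; the quantile thresholds, score conventions and toy data are assumptions for illustration, not the procedure used in this study.

    import numpy as np

    def menagerie(genuine, impostor, q=0.25):
        """Group users from per-user mean genuine and mean impostor match scores.

        genuine  : dict user_id -> array of genuine (same-subject) scores
        impostor : dict user_id -> array of impostor (cross-subject) scores
        q        : quantile defining the 'high'/'low' tails (assumed 25%)
        """
        users = sorted(genuine)
        g = np.array([np.mean(genuine[u]) for u in users])
        i = np.array([np.mean(impostor[u]) for u in users])
        g_lo, g_hi = np.quantile(g, [q, 1 - q])
        i_lo, i_hi = np.quantile(i, [q, 1 - q])

        groups = {}
        for u, gm, im in zip(users, g, i):
            if gm >= g_hi and im >= i_hi:
                groups[u] = "chameleon"   # matches everyone, including impostors
            elif gm <= g_lo and im <= i_lo:
                groups[u] = "phantom"     # matches hardly anyone, even themselves
            elif gm >= g_hi and im <= i_lo:
                groups[u] = "dove"        # ideal user: easy to verify, hard to imitate
            elif gm <= g_lo and im >= i_hi:
                groups[u] = "worm"        # worst case: hard to verify, easy to imitate
            else:
                groups[u] = "sheep"       # typical user; Doddington's goats, lambs and
                                          # wolves use related per-user statistics
        return groups

    # toy usage with random scores (illustrative only)
    rng = np.random.default_rng(1)
    gen = {u: rng.normal(0.8, 0.1, 10) for u in range(20)}
    imp = {u: rng.normal(0.2, 0.1, 50) for u in range(20)}
    print(menagerie(gen, imp))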


Relevance:

100.00%

Abstract:

Accurate age models are a tool of utmost importance in paleoclimatology. Constraining the rate and pace of past climate change is at the core of paleoclimate research, as such knowledge is crucial to our understanding of the climate system: it allows the various drivers of climate change to be disentangled. The scarcity of highly resolved sedimentary records from the middle Eocene (Lutetian-Bartonian stages; 47.8-37.8 Ma) has led to the existence of the "Eocene astronomical time scale gap" and hindered the establishment of a comprehensive astronomical time scale (ATS) for the entire Cenozoic. Sediments from the Newfoundland Ridge drilled during Integrated Ocean Drilling Program (IODP) Expedition 342 span the Eocene gap at an unprecedented stratigraphic resolution with carbonate-bearing sediments. Moreover, these sediments exhibit cyclic lithological changes that allow for an astronomical calibration of geologic time. In this study, we use the dominant obliquity imprint in XRF-derived calcium-iron ratio (Ca/Fe) series from three sites drilled during IODP Expedition 342 (U1408, U1409, U1410) to construct a floating astrochronology. We then anchor this chronology to numerical geological time by tuning 173-kyr cycles in the amplitude modulation pattern of obliquity to an astronomical solution. This study is one of the first to use the 173-kyr obliquity amplitude cycle for astrochronologic purposes, as previous studies primarily used the 405-kyr long eccentricity cycle as a tuning target to calibrate the Paleogene geologic time scale. We demonstrate that the 173-kyr cycles in obliquity's amplitude are stable between 40 and 50 Ma, which means that the 173-kyr cycle can be used for astrochronologic calibration in the Eocene. Our tuning provides new age estimates for magnetochron reversals C18n.1n-C21r and a stratigraphic framework for key Expedition 342 sites for the Eocene. Some disagreements emerge when we compare our tuning for the interval between C19r and C20r with previous tuning attempts from the South Atlantic. We therefore present a revision of the original astronomical interpretations for the latter records, so that the various astrochronologic age models for the middle Eocene in the North and South Atlantic are consistent.
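A minimal sketch of the amplitude-demodulation step described above, run on a synthetic Ca/Fe-like series (the sampling, filter bands and data are illustrative assumptions, not the study's actual records or parameters):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    # synthetic stand-in for an evenly resampled Ca/Fe series: a 41-kyr obliquity
    # carrier whose amplitude is modulated at 173 kyr, plus noise (illustrative only)
    dt = 2.0                                  # sample spacing in kyr (assumed)
    t = np.arange(0.0, 4000.0, dt)
    rng = np.random.default_rng(0)
    ca_fe = (1.0 + 0.5 * np.cos(2 * np.pi * t / 173.0)) * np.cos(2 * np.pi * t / 41.0) \
            + 0.3 * rng.standard_normal(t.size)

    def bandpass(x, dt, per_lo, per_hi, order=3):
        """Zero-phase Butterworth band-pass between two periods given in kyr."""
        nyq = 0.5 / dt
        b, a = butter(order, [1.0 / per_hi / nyq, 1.0 / per_lo / nyq], btype="band")
        return filtfilt(b, a, x)

    obliquity_band = bandpass(ca_fe, dt, 36.0, 46.0)   # 1) isolate the ~41-kyr obliquity band
    envelope = np.abs(hilbert(obliquity_band))         # 2) amplitude envelope via the analytic signal
    am_173 = bandpass(envelope, dt, 140.0, 220.0)      # 3) ~173-kyr amplitude-modulation cycle,
                                                       #    the tuning target for anchoring the chronology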

Relevance:

100.00%

Abstract:

Buildings and the built environment as a whole play a key role when societies mitigate climate change and adapt to its consequences. More than 50% of the existing residential buildings in the EU-25 were built before 1970, so these buildings are of significant importance in reducing energy consumption and CO2 emissions. Wider adoption of nearly zero-energy buildings (nZEB) is a possible solution to this problem. This study aims to analyze the application of the nZEB methodology in the retrofitting of a typical Portuguese dwelling built in 1950. It was shown that the primary energy used can be reduced to a very low value (11.95 kWhep/m2.y) in comparison with the reference consumption (69.15 kWhep/m2.y), a reduction of about 83%, by applying the best construction techniques together with energy from on-site renewable sources.

Relevance:

100.00%

Abstract:

Many of the equations describing the dynamics of neural systems are written in terms of firing rate functions, which themselves are often taken to be threshold functions of synaptic activity. Dating back to work by Hill in 1936, it has been recognized that more realistic models of neural tissue can be obtained by introducing state-dependent dynamic thresholds. In this paper we treat a specific phenomenological model of threshold accommodation that mimics many of the properties originally described by Hill. Importantly, we explore the consequences of this dynamic threshold at the tissue level by modifying a standard neural field model of Wilson-Cowan type. As in the case without threshold accommodation, classical Mexican-hat connectivity is shown to allow for the existence of spatially localized states (bumps) in both one and two dimensions. An analysis of bump stability in one dimension, using recent Evans function techniques, shows that bumps may undergo instabilities leading to the emergence of both breathers and traveling waves. Moreover, a similar analysis for traveling pulses leads to the conditions necessary to observe a stable traveling breather. In the regime where a bump solution does not exist, direct numerical simulations show the possibility of self-replicating bumps via a form of bump splitting. Simulations in two space dimensions show localized and traveling solutions analogous to those seen in one dimension. Indeed, the dynamical behavior of this neural model appears reminiscent of that seen in other dissipative systems that support localized structures, in particular coupled cubic complex Ginzburg-Landau equations. Further numerical explorations illustrate that the traveling pulses in this model exhibit particle-like properties, similar to those of dispersive solitons observed in some three-component reaction-diffusion systems. A preliminary account of this work first appeared in S. Coombes and M. R. Owen, Bumps, breathers, and waves in a neural network with spike frequency adaptation, Physical Review Letters 94 (2005), 148102(1-4).
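A schematic of this type of model, for orientation: a Wilson-Cowan-type field with a Heaviside firing rate and an accommodating threshold (a generic form under assumed notation; the specific equations and parameter choices in the paper may differ):

    \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x-y)\,\Theta\big(u(y,t) - h(y,t)\big)\,\mathrm{d}y, \qquad
    \tau\,\frac{\partial h(x,t)}{\partial t} = h_0 - h(x,t) + \kappa\,\Theta\big(u(x,t) - h(x,t)\big),

where u is the synaptic activity, Θ is the Heaviside function, and h is the state-dependent threshold, which accommodates (rises towards h_0 + κ) while the local tissue is active and relaxes back to h_0 otherwise. A Mexican-hat kernel such as w(x) = e^{-|x|} - A\,e^{-|x|/\sigma} (short-range excitation, longer-range inhibition, with 0 < A < 1 < σ assumed) supports the localized bump solutions discussed above.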