413 results for PROOFS
Abstract:
In the theory of the Navier-Stokes equations, the proofs of some basic known results, such as the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually use only a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove, by means of counterexamples, that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that the proof strategies used for a number of classical results are not sufficient to answer these open questions affirmatively. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be unstable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces; the uniqueness of solutions, under smallness assumptions, to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces; the second-order convergence of a version of the Crank-Nicolson scheme; and a new proof of the first-order convergence of the implicit Euler scheme for the Navier-Stokes equations. For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
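For orientation, a standard formulation not taken from the thesis: the implicit Euler scheme for the incompressible Navier-Stokes equations, whose uniqueness and convergence are at issue here, is commonly written with step size $\tau$ and velocity iterates $u^n$ as

```latex
% One step of the implicit (backward) Euler scheme for the
% incompressible Navier-Stokes equations (standard form; the
% thesis's SST-space version abstracts the function spaces):
\frac{u^{n+1} - u^n}{\tau}
  - \nu \Delta u^{n+1}
  + (u^{n+1} \cdot \nabla)\, u^{n+1}
  + \nabla p^{n+1} = f^{n+1},
\qquad
\nabla \cdot u^{n+1} = 0.
```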
Abstract:
The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: deciding whether a given sound is a possible sound of a given language; disambiguating a sequence of words; and computing the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternate linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.
Abstract:
INTRODUCTION: Chest pain is one of the leading causes of consultation in emergency and cardiology services, and classifying patients with a diagnostic tool sensitive and specific enough to establish risk and prognosis is a challenge. The close relationship between atherosclerotic disease and inflammation has drawn attention to the role of plasma inflammation markers as predictors of the risk of cardiovascular events. C-reactive protein (CRP) has been widely studied in patients with cardiovascular risk factors and acute coronary events, but its behavior in patients with chest pain of intermediate probability is unknown. OBJECTIVES: To determine the utility and behavior of C-reactive protein in patients with chest pain of intermediate probability for coronary syndrome. MATERIALS AND METHODS: This study was carried out between June 2008 and February 2009 at a cardiology referral institution (Fundación Cardio Infantil, Bogotá, Colombia). We studied patients with a normal or non-diagnostic EKG and negative markers of myocardial injury. Patients continued their work-up according to international recommendations and guidelines for chest pain. We took two CRP measurements, one within 12 hours of chest pain onset and another more than 18 hours after onset, and computed the difference between the two (18-hour CRP vs. baseline CRP). With these three values, we performed the statistical analysis to determine sensitivity, specificity, positive predictive value, and negative predictive value, using ischemia provocation tests and catheterization as the reference standard. RESULTS: A total of 203 patients were analyzed. The mean age was 60.8 ± 11 years, and the two sexes were distributed without significant differences. The associated risk factors were arterial hypertension 76% (n=155), dyslipidemia 68.1% (n=139), diabetes mellitus 20.6% (n=42), obesity 7.4% (n=15), and smoking 9.3% (n=19). A total of 66 catheterizations were performed: normal in 27% (n=18), non-significant lesions in 25.8% (n=17), and obstructive lesions in 47% (n=31). CRP had low diagnostic utility; the 18-hour CRP was the best diagnostic test, with the best area under the ROC curve, 0.74 (CI 0.64-0.83), a sensitivity of 16.13% (95% CI 1.57-30.69), a specificity of 98.26% (95% CI 96.01-100), and a negative predictive value of 86.67% (95% CI 81.64-91.69). At 30-day follow-up, no new hospitalizations for cardiovascular causes were found. CONCLUSIONS: Our study shows low diagnostic utility of CRP in chest pain of intermediate probability for coronary disease. The best diagnostic performance was found for the 18-hour CRP, with high specificity and a high negative predictive value for CRP values > 3 mg/dl; the baseline CRP and the CRP difference were less useful. These findings did not correlate with previous studies. A CRP cut-off point different from the existing ones could not be established because of the variability of CRP in the study population. The limitations of our study make a multicenter study necessary.
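As an aside not drawn from the study, the reported test statistics all follow from a standard 2×2 confusion table; a minimal sketch, with made-up placeholder counts rather than the study's data:

```python
# Minimal sketch of the diagnostic statistics reported above,
# computed from a 2x2 confusion table. The counts used below are
# illustrative placeholders, not the study's data.

def diagnostic_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Example with arbitrary counts:
print(diagnostic_stats(tp=10, fp=5, fn=40, tn=150))
```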
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient $\gamma$ of the operator $\nabla \cdot \gamma(x) \nabla + c(x)$ with piecewise regular $\gamma$ and bounded function $c(x)$. We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent with regard to their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of $\gamma(x)$, $x \in \partial D$, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
Abstract:
We use the point-source method (PSM) to reconstruct a scattered field from its associated far field pattern. The reconstruction scheme is described, and numerical results are presented for three-dimensional acoustic and electromagnetic scattering problems. We give new proofs of the algorithms, based on the Green and Stratton-Chu formulae, which are more general than the earlier approach via the reciprocity relation. This allows us to handle the case of limited-aperture data and arbitrary incident fields. For both 3D acoustics and electromagnetics, numerical reconstructions of the field for different settings and with noisy data are shown. For shape reconstruction in acoustics, we develop an appropriate strategy to identify areas with good reconstruction quality and combine different such regions into one joint function. Then, we show how shapes of unknown sound-soft scatterers are found as level curves of the total reconstructed field.
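Schematically, in our paraphrase rather than the article's own formulas, the PSM backprojects the far field pattern $u^\infty$ through a kernel $g_z$ chosen so that a Herglotz wave function approximates the point source $\Phi(\cdot, z)$ (sign and normalization conventions vary across the literature):

```latex
% Point-source method, schematically: choose a kernel g_z with
%   \Phi(\cdot, z) \approx \int_{S^2} e^{ik\, x \cdot d}\, g_z(d)\, ds(d)
% on an approximation domain, then backproject the far field:
u^s(z) \;\approx\; \int_{S^2} u^\infty(d)\, g_z(d)\, ds(d).
```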
Abstract:
We introduce transreal analysis as a generalisation of real analysis. We find that the generalisation of the real exponential and logarithmic functions is well defined for all transreal numbers. Hence, we derive well defined values of all transreal powers of all non-negative transreal numbers. In particular, we find a well defined value for zero to the power of zero. We also note that the computation of products via the transreal logarithm is identical to the transreal product, as expected. We then generalise all of the common, real, trigonometric functions to transreal functions and show that transreal (sin x)/x is well defined everywhere. This raises the possibility that transreal analysis is total, in other words, that every function and every limit is everywhere well defined. If so, transreal analysis should be an adequate mathematical basis for analysing the perspex machine - a theoretical, super-Turing machine that operates on a total geometry. We go on to dispel all of the standard counter "proofs" that purport to show that division by zero is impossible. This is done simply by carrying the proof through in transreal arithmetic or transreal analysis. We find that either the supposed counter proof has no content or else that it supports the contention that division by zero is possible. The supposed counter proofs rely on extending the standard systems in arbitrary and inconsistent ways and then showing, tautologously, that the chosen extensions are not consistent. This shows only that the chosen extensions are inconsistent and does not bear on the question of whether division by zero is logically possible. By contrast, transreal arithmetic is total and consistent so it defeats any possible "straw man" argument. Finally, we show how to arrange that a function has finite or else unmeasurable (nullity) values, but no infinite values. This arithmetical arrangement might prove useful in mathematical physics because it outlaws naked singularities in all equations.
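To make the arithmetic concrete, here is a toy sketch of ours, not the paper's definition verbatim, of total division in the style of transreal arithmetic; `float('nan')` stands in for the transreal nullity, an imperfect but convenient analogy:

```python
# Toy sketch of transreal division (after the transreal arithmetic
# summarised in the abstract): division by zero is total, with
# nullity (modelled here by float('nan')) for 0/0.
import math

NULLITY = float("nan")  # stands in for the transreal nullity

def transreal_div(x: float, y: float) -> float:
    if math.isnan(x) or math.isnan(y):
        return NULLITY              # nullity propagates
    if y == 0.0:
        if x > 0.0:
            return math.inf         # x/0 = +infinity for x > 0
        if x < 0.0:
            return -math.inf        # x/0 = -infinity for x < 0
        return NULLITY              # 0/0 = nullity
    return x / y                    # ordinary real division

print(transreal_div(1.0, 0.0), transreal_div(0.0, 0.0))  # inf nan
```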
Abstract:
This paper offers general guidelines for the development of effective visual languages: that is, languages for constructing diagrams that can be readily interpreted and manipulated by the human reader. We use these guidelines first to examine classical AND/OR trees as a representation of logical proofs, and second to design and evaluate a visual language for representing proofs in LofA, a Logic of Dependability Arguments, for which we provide a brief motivation and overview.
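For concreteness, an illustration of ours rather than the paper's notation: an AND/OR proof tree can be modelled as a small recursive data structure, with AND nodes requiring all children to hold and OR nodes requiring at least one.

```python
# Illustrative AND/OR proof tree (not the paper's notation): internal
# nodes are AND (all children must hold) or OR (some child must hold);
# leaves are atomic claims with a truth assignment.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str = "leaf"              # "and", "or", or "leaf"
    children: list["Node"] = field(default_factory=list)
    holds: bool = False             # only meaningful for leaves

def evaluate(n: Node) -> bool:
    if n.kind == "leaf":
        return n.holds
    results = [evaluate(c) for c in n.children]
    return all(results) if n.kind == "and" else any(results)

# Hypothetical dependability-style argument:
goal = Node("system is dependable", "and", [
    Node("hardware redundant", holds=True),
    Node("software verified", "or", [
        Node("formally proved", holds=False),
        Node("extensively tested", holds=True),
    ]),
])
print(evaluate(goal))  # True
```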
Abstract:
In this article Geoff Tennant summarises the first half of Imre Lakatos's seminal 1976 book, "Proofs and refutations: the logic of mathematical discovery". Implications are drawn for the classroom treatment of proof.
Abstract:
We consider the time-harmonic Maxwell equations with constant coefficients in a bounded, uniformly star-shaped polyhedron. We prove wavenumber-explicit norm bounds for weak solutions. This result is pivotal for convergence proofs in numerical analysis and may be a tool in the analysis of electromagnetic boundary integral operators.
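For context, the underlying model problem can be sketched as follows; this is a standard strong form with wavenumber $k$, and the boundary condition shown is one common choice rather than necessarily the paper's:

```latex
% Time-harmonic Maxwell equations with constant coefficients and
% wavenumber k on a bounded polyhedron \Omega (standard strong form;
% the boundary condition is one common choice, assumed here):
\nabla \times (\nabla \times E) - k^2 E = f \quad \text{in } \Omega,
\qquad
E \times n = 0 \quad \text{on } \partial\Omega.
```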
Abstract:
Document design and typeface design: a typographic specification for a new Intermediate Greek-English Lexicon by CUP, accompanied by typefaces modified for the specific typographic requirements of the text. The Lexicon is a substantial (over 1400 pages) publication for HE students and academics, intended to complement Liddell-Scott (the standard reference for classical Greek since the 1850s), and has been in preparation for over a decade. The typographic appearance of such works has changed very little since the original editions, largely due to the lack of suitable typefaces: early digital proofs of the Lexicon utilised directly digitised versions of historical typefaces, making the entries difficult to navigate and the document uneven in typographic texture. Close collaboration with the editors of the Lexicon, and discussion of the historical precedents for such documents, informed the design at all typographic levels, achieving a highly reader-friendly result that proposes a model for this kind of typography. Uniquely for a work of this kind, typeface design decisions were integrated into the wider document design specification. A rethinking of the complex typography for Greek and English, based on historical editions as well as equivalent bilingual reference works at this level (from OUP, CUP, Brill, Mondadori, and other publishers), led to a redefinition of multi-script typeface pairing for this specific context, taking into account recent developments in typeface design. Specifically, the relative weighting of elements within each entry was redefined, as was the typographic texture of type styles across the two scripts. In detail, the Greek typefaces were modified to emphasise clarity and readability, particularly of diacritics, at very small sizes. The relative weights of typefaces typeset side by side were fine-tuned so that the visual hierarchy of the entries was unambiguous despite the dense typesetting.
Abstract:
The fully compressible semi-geostrophic system is widely used in the modelling of large-scale atmospheric flows. In this paper, we prove rigorously the existence of weak Lagrangian solutions of this system, formulated in the original physical coordinates. In addition, we provide an alternative proof of the earlier result on the existence of weak solutions of this system expressed in the so-called geostrophic, or dual, coordinates. The proofs are based on the optimal transport formulation of the problem and on recent general results concerning transport problems posed in the Wasserstein space of probability measures.
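For reference, a standard definition not quoted from the paper: the Wasserstein space mentioned here is the space of probability measures metrized by the quadratic Wasserstein distance,

```latex
% Quadratic Wasserstein distance between probability measures
% \mu and \nu; \Pi(\mu,\nu) denotes the set of couplings
% (standard definition, assumed here):
W_2(\mu, \nu)^2 \;=\; \inf_{\pi \in \Pi(\mu,\nu)}
  \int |x - y|^2 \, d\pi(x, y).
```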
Abstract:
In England, appraisals of the financial viability of development schemes have become an integral part of planning policy-making, initially in determining the amount of planning obligations that might be obtained via legal agreements (known as Section 106 agreements) and latterly as a basis for establishing charging schedules for the Community Infrastructure Levy (CIL). Local planning authorities set these policies on an area-wide basis but ultimately development proposals require consent on a site-by-site basis. It is at this site-specific level that issues of viability are hotly contested. This paper examines case documents, proofs of evidence and decisions from a sample of planning disputes in order to address major issues within development viability, the application of the models and the distribution of the development gain between the developer, landowner and community. The results have specific application to viability assessment in England and should impact on future policy and practice guidance in this field. They also have relevance to other countries that incorporate assessments of economic viability in their planning systems.
Abstract:
Today we are faced with the problem of how to relate the archives of deconstructionist thinkers to their thought, which seems opposed to historically oriented or genetic criticism. This article is the first to look comprehensively at the gestation of Maurice Blanchot's L'Entretien infini, including the work's page proofs, which were made available in 2009. I thus address the multiple processes of change at work throughout Blanchot's writing, change being a process whose importance goes beyond any single new form of writing resulting from it. My presentation of the archival material is contextualized via a discussion of the notion of L'Absence de livre (the project's working title). Rather than being a straightforward incompleteness or fragmentation, this notion establishes a fraught relationship between such ideas and what it calls ‘le Livre’.
Abstract:
Transreal numbers provide a total semantics containing classical truth values, dialetheic, fuzzy, and gap values. A paraconsistent Sheffer stroke generalises all classical logics to a paraconsistent form. We introduce logical spaces of all possible worlds and all propositions. We operate on a proposition, in all possible worlds, at the same time. We define logical transformations, and possibility and necessity relations, in proposition space, and give a criterion to determine whether a proposition is classical. We show that proofs based on the conditional infer gaps only from gaps, and that negative and positive infinity operate as bottom and top values.