66 results for neutral theory
Abstract:
As a discipline, logic is arguably constituted of two main sub-projects: formal theories of argument validity on the basis of a small number of patterns, and theories of how to reduce the multiplicity of arguments in non-logical, informal contexts to the small number of patterns whose validity is systematically studied (i.e. theories of formalization). Regrettably, we now tend to view logic 'proper' exclusively as what falls under the first sub-project, to the neglect of the second, equally important sub-project. In this paper, I discuss two historical theories of argument formalization: Aristotle's syllogistic theory as presented in the "Prior Analytics", and medieval theories of supposition. They both illustrate this two-fold nature of logic, containing in particular illuminating reflections on how to formalize arguments (i.e. the second sub-project). In both cases, the formal methods employed differ from the usual modern technique of translating an argument in ordinary language into a specially designed symbolism, a formal language. The upshot is thus a plea for a broader conceptualization of what it means to formalize.
Abstract:
This paper is a historical companion to a previous one, in which the so-called abstract Galois theory, as formulated by the Portuguese mathematician José Sebastião e Silva, was studied (see da Costa and Rodrigues (2007)). Our purpose is to present some applications of abstract Galois theory to higher-order model theory, to discuss Silva's notion of expressibility, and to outline a classical Galois theory that can be obtained inside the two versions of the abstract theory, those of Mark Krasner and of Silva. Some comments are made on the universal theory of (set-theoretic) structures.
Abstract:
When Hume, in the Treatise of Human Nature, began his examination of the relation of cause and effect, in particular of the idea of necessary connection which is its essential constituent, he identified two preliminary questions that should guide his research: (1) For what reason do we pronounce it necessary that every thing whose existence has a beginning should also have a cause? and (2) Why do we conclude that such particular causes must necessarily have such particular effects? (1.3.2, 14-15). Hume observes that our belief in these principles can result neither from an intuitive grasp of their truth nor from a reasoning that could establish them by demonstrative means. In particular, with respect to the first, Hume examines and rejects some arguments with which Locke, Hobbes and Clarke tried to demonstrate it, and suggests, by exclusion, that the belief we place in it can only come from experience. Somewhat surprisingly, however, Hume does not proceed to show how that derivation from experience could be made, but proposes instead to move directly to an examination of the second principle, saying that it will "perhaps, be found in the end, that the same answer will serve for both questions" (1.3.3, 9). Hume's answer to the second question is well known, but the first question is never answered in the rest of the Treatise, and it is even doubtful that it could be, which would explain why Hume simply chose to remove any mention of it when he recompiled his theses on causation in the Enquiry concerning Human Understanding. Given this situation, an interesting question that naturally arises is to investigate the relations of logical or conceptual implication between these two principles. Hume seems to have thought that an answer to (2) would also be sufficient to provide an answer to (1). Henry Allison, in turn, argued (in Custom and Reason in Hume, p. 94-97) that the two questions are logically independent. My proposal here is to show that there is indeed a logical dependency between them, but that the implication runs, rather, from (1) to (2). If accepted, this result may be particularly interesting for an interpretation of the scope of the so-called "Kant's reply to Hume" in the Second Analogy of Experience, which is structured as a proof of the a priori character of (1), but whose implications for (2) remain controversial.
Abstract:
In this article I intend to show that certain aspects of A.N. Whitehead's philosophy of organism, and especially his epochal theory of time as mainly expounded in his well-known work Process and Reality, can serve to clarify the underlying assumptions that shape nonstandard mathematical theories as such, and also as metatheories of quantum mechanics. Concerning the latter issue, I point to an already significant body of research on nonstandard versions of quantum mechanics; two of these approaches are chosen to be critically presented in relation to the scope of this work. The main point of the paper is that, insofar as we can refer a nonstandard mathematical entity to a kind of axiomatic formalization essentially 'codifying' an underlying mental process indescribable as such by analytic means, we can possibly apply certain principles of Whitehead's metaphysical scheme, focused on the key notion of process, which is generally conceived as the becoming of actual entities. This is done in the sense of a unifying approach that provides an interpretation of nonstandard mathematical theories as such and also, in their metatheoretical status, as a formalization of the empirical-experimental context of quantum mechanics.
Abstract:
The application of Extreme Value Theory (EVT) to model the probability of occurrence of extremely low Standardized Precipitation Index (SPI) values improves our knowledge of the occurrence of extremely dry months. This sort of analysis can be carried out by means of two approaches: block maxima (BM; associated with the Generalized Extreme Value distribution) and peaks-over-threshold (POT; associated with the Generalized Pareto distribution). Each of these procedures has its own advantages and drawbacks. Thus, the main goal of this study is to compare the performance of BM and POT in characterizing the probability of occurrence of extreme dry SPI values obtained from the weather station of Ribeirão Preto-SP (1937-2012). According to the goodness-of-fit tests, both BM and POT can be used to assess the probability of occurrence of the aforementioned extreme dry monthly SPI values. However, the scalar measures of accuracy and the return level plots indicate that POT provides the best-fit distribution. The study also indicated that the uncertainties in the parameter estimates of a probabilistic model should be taken into account when the probability associated with a severe/extreme dry event is under analysis.
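As a hedged illustration of the two approaches named in this abstract (not the authors' code or data), the sketch below fits a GEV distribution to block maxima and a Generalized Pareto distribution to threshold exceedances of a synthetic drought-severity series; the series length, block size, and threshold are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic monthly series: we negate SPI so that extremely dry months
# (very negative SPI) become large positive "severity" values.
spi = rng.normal(0.0, 1.0, size=12 * 76)   # ~76 years, hypothetical
severity = -spi

# --- Block maxima (BM): one maximum per 12-month block, fitted with GEV ---
blocks = severity.reshape(-1, 12).max(axis=1)
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(blocks)

# 50-block return level: the severity exceeded once every 50 blocks on average.
rl_bm = stats.genextreme.ppf(1 - 1 / 50, gev_shape, loc=gev_loc, scale=gev_scale)

# --- Peaks-over-threshold (POT): exceedances fitted with Generalized Pareto ---
threshold = np.quantile(severity, 0.95)
exceedances = severity[severity > threshold] - threshold
gp_shape, gp_loc, gp_scale = stats.genpareto.fit(exceedances, floc=0.0)

print(f"GEV 50-block return level: {rl_bm:.2f}")
print(f"GPD shape={gp_shape:.2f}, scale={gp_scale:.2f} above u={threshold:.2f}")
```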
Abstract:
In this paper, a systematic and quantitative view is presented of the application of the theory of constraints in manufacturing. This is done by employing the operations research technique of mathematical programming. The potential of the theory of constraints in automated manufacturing is demonstrated.
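To make the link between the theory of constraints and mathematical programming concrete (an illustrative sketch, not the paper's model), the snippet below solves a hypothetical two-product mix problem: throughput is maximized subject to machine capacities, and the machine with zero slack is the system's constraint in the theory-of-constraints sense. All product names, margins, and capacities are invented for the example.

```python
from scipy.optimize import linprog

# Hypothetical product-mix problem: products P and Q, margin per unit,
# and minutes required per unit on two machines with weekly capacity.
margins = [45.0, 60.0]                 # throughput per unit of P and Q
machine_minutes = [[15.0, 10.0],       # machine A: minutes per unit of P, Q
                   [10.0, 30.0]]       # machine B (candidate bottleneck)
capacity = [2400.0, 2400.0]            # available minutes per machine
demand = [(0, 100), (0, 50)]           # weekly market demand bounds

# linprog minimizes, so negate the margins to maximize throughput.
res = linprog(c=[-m for m in margins],
              A_ub=machine_minutes, b_ub=capacity,
              bounds=demand, method="highs")

slack = [cap - row[0] * res.x[0] - row[1] * res.x[1]
         for row, cap in zip(machine_minutes, capacity)]
print("optimal mix:", res.x, "throughput:", -res.fun)
print("machine slack (zero => bottleneck/constraint):", slack)
```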
Abstract:
In this paper, the local dynamical behavior of a slewing flexible structure is analyzed, considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution, with the purpose of locally studying the stability of the system. At this critical point, a Hopf bifurcation occurs. In this region, one can find values of the control parameter (the structural damping coefficient) for which the system is unstable, and values for which the stability of the system is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
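For readers unfamiliar with the phenomenon, the sketch below integrates the standard Hopf normal form (an illustrative stand-in, not the paper's slewing-structure equations); the parameter mu plays the role the structural damping coefficient plays in the abstract: the equilibrium is stable for mu < 0 and gives way to a stable periodic orbit (limit cycle) of radius sqrt(mu) for mu > 0.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hopf_normal_form(t, state, mu, omega=1.0):
    """Standard Hopf normal form: a limit cycle of radius sqrt(mu)
    is born as mu crosses zero from below."""
    x, y = state
    r2 = x**2 + y**2
    return [mu * x - omega * y - x * r2,
            omega * x + mu * y - y * r2]

for mu in (-0.2, 0.2):   # below and above the bifurcation value mu = 0
    sol = solve_ivp(hopf_normal_form, (0.0, 200.0), [0.1, 0.0], args=(mu,))
    final_radius = np.hypot(*sol.y[:, -1])
    print(f"mu = {mu:+.1f}: final orbit radius ~ {final_radius:.3f} "
          f"(expected {np.sqrt(max(mu, 0.0)):.3f})")
```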
Abstract:
Glyphosate is an herbicide that inhibits the enzyme 5-enolpyruvyl-shikimate-3-phosphate synthase (EPSPs) (EC 2.5.1.19). EPSPs is the sixth enzyme of the shikimate pathway, by which plants synthesize the aromatic amino acids phenylalanine, tyrosine, and tryptophan, as well as many compounds used in secondary metabolism pathways. About fifteen years ago it was hypothesized that weeds were unlikely to evolve resistance to this herbicide because of the limited degree of glyphosate metabolism observed in plants, the low resistance level attained through EPSPs gene overexpression, and the lower fitness of plants with an altered EPSPs enzyme. Today, however, 20 weed species have been described with glyphosate-resistant biotypes, found on five continents and exploiting several different resistance mechanisms. The survival and adaptation of these glyphosate-resistant weeds are related to resistance mechanisms that occur in plants selected through the intense selection pressure from repeated and exclusive use of glyphosate as the only control measure. In this paper, the physiological, biochemical, and genetic bases of glyphosate resistance mechanisms in weed species are reviewed, and a novel and innovative theory that integrates all the mechanisms of non-target-site glyphosate resistance in plants is presented.
Abstract:
Organismic-centered Darwinism, in order to use direct phenotypes to measure natural selection's effect, requires genomic harmony and uniform coherence, plus large population sizes. However, modern gene-centered Darwinism has found new interpretations of data that speak of genomic incoherence and disharmony. As a result of these two conflicting positions, a conceptual crisis in Biology has arisen. My position is that the presence of small, even pocket-sized, demes is instrumental in generating divergence and phenotypic crisis. Moreover, the presence of parasitic genomes, as in acanthocephalan worms, which even manipulate suicidal behavior in their hosts; segregation distorters that change meiosis and Mendelian ratios; selfish genes and selfish whole chromosomes, such as the B-chromosomes in grasshoppers; P-elements in Drosophila; driving Y-chromosomes that manipulate sex ratios, making males more frequent, as in Hamilton's X-linked drive; and male strategists and outlaw genes are eloquent examples of the presence of genuinely conflicting genomes and of non-uniform phenotypic coherence and genome harmony. Thus, we propose that overall incoherence and disharmony generate disorder, but also more biodiversity and creativeness. Finally, if genes can manipulate natural selection, they can multiply mutations or undesirable characteristics, even lethal or detrimental ones; hence the accumulation of genetic loads. Outlaw genes can change what is adaptively convenient, even in the direction of a trait that lies away from the optimum. The optimum can be "negotiated" among the variants, not only because pleiotropic effects demand it, but also, in some cases, because selfish genes, outlaw genes, P-elements, or extended phenotypic manipulation require it. Under organismic Darwinism, the genome in the population and in the individual was thought to act harmoniously, without conflicts, and genotypes were thought to march towards greater adaptability. Modern Darwinism has a gene-centered vision in which genes, as natural selection's objects, can move in dissonance in the direction that benefits their multiplication. Thus, we have greater opportunities for genomes in permanent conflict.
Abstract:
Two intramolecularly quenched fluorogenic peptides containing o-aminobenzoyl (Abz) and ethylenediamine 2,4-dinitrophenyl (EDDnp) groups at the amino- and carboxyl-terminal amino acid residues, Abz-DArg-Arg-Leu-EDDnp (Abz-DRRL-EDDnp) and Abz-DArg-Arg-Phe-EDDnp (Abz-DRRF-EDDnp), were selectively hydrolyzed by neutral endopeptidase (NEP, enkephalinase, neprilysin, EC 3.4.24.11) at the Arg-Leu and Arg-Phe bonds, respectively. The kinetic parameters for the NEP-catalyzed hydrolysis of Abz-DRRL-EDDnp and Abz-DRRF-EDDnp were Km = 2.8 µM, kcat = 5.3 min⁻¹, kcat/Km = 2 min⁻¹ µM⁻¹ and Km = 5.0 µM, kcat = 7.0 min⁻¹, kcat/Km = 1.4 min⁻¹ µM⁻¹, respectively. The high specificity of these substrates was demonstrated by their resistance to hydrolysis by metalloproteases [thermolysin (EC 3.4.24.2), angiotensin-converting enzyme (ACE; EC 3.4.15.1)], serine proteases [trypsin (EC 3.4.21.4), α-chymotrypsin (EC 3.4.21.1)] and proteases present in tissue homogenates from kidney, lung, brain and testis. The blocked amino- and carboxyl-terminal amino acids protected these substrates against the action of aminopeptidases, carboxypeptidases and ACE. Furthermore, the DR amino acids ensured total protection of Abz-DRRL-EDDnp and Abz-DRRF-EDDnp against the action of thermolysin and trypsin. Leu-EDDnp and Phe-EDDnp were resistant to hydrolysis by α-chymotrypsin. The high specificity of these substrates suggests their use for specific NEP assays in crude enzyme preparations.
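As a quick, hedged check of the catalytic-efficiency arithmetic reported above (the Km and kcat values come from the abstract; the Michaelis-Menten rate evaluation and the enzyme amount E are assumptions for illustration), the snippet below reproduces kcat/Km for both substrates (≈2 and 1.4 min⁻¹ µM⁻¹) and verifies that the rate at [S] = Km is half of kcat.

```python
# Michaelis-Menten sanity check for the reported NEP kinetic parameters.
substrates = {
    "Abz-DRRL-EDDnp": {"Km_uM": 2.8, "kcat_per_min": 5.3},
    "Abz-DRRF-EDDnp": {"Km_uM": 5.0, "kcat_per_min": 7.0},
}

def mm_rate(kcat, Km, S, E=1.0):
    """v = kcat * E * S / (Km + S); E is a hypothetical enzyme amount."""
    return kcat * E * S / (Km + S)

for name, p in substrates.items():
    efficiency = p["kcat_per_min"] / p["Km_uM"]          # min^-1 uM^-1
    v_at_Km = mm_rate(p["kcat_per_min"], p["Km_uM"], S=p["Km_uM"])
    print(f"{name}: kcat/Km = {efficiency:.1f} min^-1 uM^-1, "
          f"v at [S]=Km = {v_at_Km:.2f} min^-1 (half of kcat)")
```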
Abstract:
Saccharomyces cerevisiae neutral trehalase (encoded by NTH1) is regulated by cAMP-dependent protein kinase (PKA) and by an endogenous modulator protein. A yeast strain with knockouts of the CMK1 and CMK2 genes (cmk1cmk2) and its isogenic control (CMK1CMK2) were used to investigate the role of CaM kinase II in the in vitro activation of neutral trehalase during growth on glucose. In the exponential growth phase, cmk1cmk2 cells exhibited basal trehalase activity and an activation ratio by PKA very similar to those found in CMK1CMK2 cells. At diauxie, even though both strains presented comparable basal trehalase activities, cmk1cmk2 cells showed reduced activation by PKA and lower total trehalase activity when compared to CMK1CMK2 cells. To determine whether CaM kinase II regulates NTH1 expression or is involved in post-translational modulation of neutral trehalase activity, NTH1 promoter activity was evaluated using an NTH1-lacZ reporter gene. Similar β-galactosidase activities were found for CMK1CMK2 and cmk1cmk2 cells, ruling out a role of CaM kinase II in NTH1 expression. Thus, CaM kinase II should act in concert with PKA on the activation of the cryptic form of neutral trehalase. A model for trehalase regulation by CaM kinase II is proposed whereby the target protein for Ca²⁺/CaM-dependent kinase II phosphorylation is not the neutral trehalase itself. The possible identity of this target protein with the recently identified trehalase-associated protein YLR270Wp is discussed.
Abstract:
The present study compares the performance of stochastic and fuzzy models for the analysis of the relationship between clinical signs and diagnosis. Data obtained for 153 children concerning diagnosis (pneumonia, other non-pneumonia diseases, absence of disease) and seven clinical signs were divided into two samples, one for analysis and the other for validation. The former was used to derive relations by multi-discriminant analysis (MDA) and by fuzzy max-min compositions (fuzzy), and the latter was used to assess the predictions drawn from each type of relation. MDA and fuzzy were closely similar in predictive performance, correctly allocating 75.7 to 78.3% of patients in the validation sample and displaying only a single instance of disagreement: a patient with a low level of toxemia was misclassified as not diseased by MDA but correctly classified as ill by fuzzy. Concerning the relations themselves, each method provided different information, revealing different aspects of the relations between clinical signs and diagnoses. Both methods agreed in pointing to X-ray, dyspnea, and auscultation as the signs best related to pneumonia, but only fuzzy was able to detect relations of heart rate, body temperature, toxemia and respiratory rate with pneumonia. Moreover, only fuzzy was able to detect a relationship between heart rate and absence of disease, which allowed the detection of six malnourished children whose diagnoses as healthy are, indeed, disputable. The conclusion is that even though fuzzy set theory might not improve prediction, it certainly does enhance clinical knowledge, since it detects relationships not visible to stochastic models.
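To illustrate the fuzzy max-min composition technique named in this abstract (an illustrative sketch with invented membership values, not the study's fitted relation), the snippet below composes a patient's clinical-sign membership vector with a sign-to-diagnosis fuzzy relation matrix.

```python
import numpy as np

# Hypothetical fuzzy relation R: rows = clinical signs, columns = diagnoses
# (pneumonia, other disease, no disease). All memberships are invented.
signs = ["X-ray", "dyspnea", "auscultation", "toxemia"]
R = np.array([[0.9, 0.3, 0.1],
              [0.7, 0.4, 0.2],
              [0.8, 0.3, 0.2],
              [0.5, 0.6, 0.1]])

# A patient's degree of presentation of each sign, also invented.
s = np.array([0.8, 0.6, 0.7, 0.2])

# Max-min composition: d_j = max_i min(s_i, R_ij).
d = np.max(np.minimum(s[:, None], R), axis=0)
print(dict(zip(["pneumonia", "other disease", "no disease"], d.round(2))))
```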
Abstract:
Coronary artery disease (CAD) is a leading cause of death worldwide. The standard method for evaluating critical partial occlusions is coronary arteriography, a catheterization technique which is invasive, time-consuming, and costly. There are noninvasive approaches for the early detection of CAD. The basis for the noninvasive diagnosis of CAD has been laid in a sequential analysis of the risk factors and the results of the treadmill test and myocardial perfusion scintigraphy (MPS). Many investigators have demonstrated that the diagnostic applications of MPS are appropriate for patients who have an intermediate likelihood of disease. Although this information is useful, it is only partially utilized in clinical practice due to the difficulty of properly classifying patients. Since the seminal work of Lotfi Zadeh, fuzzy logic has been applied in numerous areas. In the present study, we proposed and tested a model, based on fuzzy set theory, to select patients for MPS. A group of 1053 patients was used to develop the model and another group of 1045 patients was used to test it. Receiver operating characteristic (ROC) curves were used to compare the performance of the fuzzy model against expert physician opinions, and showed that the performance of the fuzzy model was equal or superior to that of the physicians. Therefore, we conclude that the fuzzy model could be a useful tool to assist the general practitioner in the selection of patients for MPS.
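A minimal sketch of the kind of ROC comparison the abstract describes, using synthetic labels and scores (none of the study's data): a continuous fuzzy-model score and a binary physician opinion are each compared against a reference diagnosis via the area under the ROC curve; the error rates are assumptions for the example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic reference standard: 1 = MPS was indicated, 0 = it was not.
y_true = rng.integers(0, 2, size=1000)

# Hypothetical continuous fuzzy-model score, correlated with the truth,
# and a hypothetical binary physician opinion with ~80% accuracy.
fuzzy_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)
physician = np.where(rng.random(1000) < 0.8, y_true, 1 - y_true)

print("fuzzy model AUC:", round(roc_auc_score(y_true, fuzzy_score), 3))
print("physician AUC:  ", round(roc_auc_score(y_true, physician), 3))
```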
Abstract:
Introduction: Continuous exposure of the peritoneal membrane to conventional dialysis solutions is an important risk factor for inducing structural and functional alterations. Objective: To compare in vitro the viability of mouse fibroblast NIH-3T3 cells after exposure to a neutral-pH dialysis solution versus cells exposed to a standard solution. Methods: Experimental study comparing the effects of a conventional standard solution and a neutral-pH, low-glucose-degradation-products peritoneal dialysis solution on the viability of exposed fibroblasts in cell culture. Both solutions were tested at all commercially available glucose concentrations. Cell viability was evaluated with a tetrazolium salt colorimetric assay. Results: Fibroblast viability was significantly superior in the neutral-pH solution in comparison to control at all three glucose concentrations (optical density, means ± SD: 1.5%, 0.295 ± 0.047 vs. 0.372 ± 0.042, p < 0.001; 2.3%, 0.270 ± 0.036 vs. 0.337 ± 0.051, p < 0.001; 4.25%, 0.284 ± 0.037 vs. 0.332 ± 0.032, p < 0.001; control vs. neutral pH, respectively; Student's t-test). There was no significant difference in cell viability between the three glucose concentrations when the standard solution was used (ANOVA, p = 0.218), although cell viability was higher after exposure to the neutral-pH peritoneal dialysis fluid at 1.5% glucose in comparison to the 2.3% and 4.25% concentrations (ANOVA, p = 0.008; Bonferroni: 1.5% vs. 2.3%, p = 0.033; 1.5% vs. 4.25%, p = 0.014; 2.3% vs. 4.25%, p = 1.00). Conclusion: Cell viability was better in the neutral-pH dialysis solution, especially at the lowest glucose concentration. A more physiological pH and fewer glucose degradation products may be responsible for these results.
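A hedged sketch of the statistical comparisons reported above, run on synthetic optical-density samples drawn from the abstract's means and SDs (the per-group sample size n is an assumption; the original n is not stated in the abstract):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30  # hypothetical replicates per group

# Synthetic optical densities drawn from the reported means +/- SD
# (control vs. neutral-pH solution at each glucose concentration).
groups = {
    "1.5%":  (rng.normal(0.295, 0.047, n), rng.normal(0.372, 0.042, n)),
    "2.3%":  (rng.normal(0.270, 0.036, n), rng.normal(0.337, 0.051, n)),
    "4.25%": (rng.normal(0.284, 0.037, n), rng.normal(0.332, 0.032, n)),
}

# Student's t-test: control vs. neutral pH at each concentration.
for conc, (control, neutral) in groups.items():
    t, p = stats.ttest_ind(control, neutral)
    print(f"{conc}: t = {t:.2f}, p = {p:.4f}")

# One-way ANOVA across glucose concentrations within the neutral-pH solution.
f, p = stats.f_oneway(*(neutral for _, neutral in groups.values()))
print(f"ANOVA across neutral-pH concentrations: F = {f:.2f}, p = {p:.4f}")
```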
Abstract:
Inflation targeting, the Taylor rule and money neutrality: a post-Keynesian critique. This paper critically discusses the inflation targeting regime proposed by orthodox economists, in particular the Taylor rule. The article describes how the Taylor rule assumes the money-neutrality argument inherited from the Quantity Theory of Money. It critically discusses how the rule operates, and the negative impacts of the interest rate on potential output. In this sense, the article shows the possible vicious circles of monetary policy when money is not neutral, as is the case for post-Keynesian economists. The relation between interest rates, potential output and the output gap is illustrated with estimates using a Vector Autoregression (VAR) methodology for the Brazilian case.
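For reference, the textbook Taylor rule sets the policy rate from inflation and the output gap; the minimal sketch below implements that standard rule and fits a small VAR on synthetic series (the 0.5/0.5 coefficients and all data are illustrative assumptions, not the paper's estimates for Brazil).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Textbook Taylor (1993) rule: i = r* + pi + 0.5(pi - pi*) + 0.5*gap."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

rng = np.random.default_rng(7)
T = 200

# Synthetic quarterly series: persistent inflation and output gap.
inflation = np.zeros(T)
gap = np.zeros(T)
for t in range(1, T):
    inflation[t] = 0.8 * inflation[t - 1] + rng.normal(0, 0.3)
    gap[t] = 0.7 * gap[t - 1] - 0.1 * inflation[t - 1] + rng.normal(0, 0.4)
rate = taylor_rule(inflation, gap)

# Fit a VAR to study the joint dynamics of the three series.
df = pd.DataFrame({"rate": rate, "inflation": inflation, "gap": gap})
results = VAR(df).fit(maxlags=4, ic="aic")
print(results.summary())
```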