993 results for Frankfurt-type examples
Abstract:
Frankfurt counterexamples are harmless against the consequence argument, the argument which, from the principle of alternative possibilities and determinism, shows that we cannot be held morally responsible for our actions. Indeed, they are formulated either within a deterministic framework or within an indeterministic framework. If they are formulated within an indeterministic framework, they are harmless because they violate a methodological principle that we defend: the principle of non-negation of premises (PNNP). In fact, we show that, for any given argument, it is prohibited to assume the negation of one premise in order to refute another premise, unless the attack succeeds in refuting both premises in question. Now, on the one hand, indeterministic Frankfurt counterexamples explicitly assume that a premise of the consequence argument, namely that determinism is true, is false; and on the other hand, they cannot give us reasons to believe in indeterminism, as we show through considerations on the transmission of justification. Constructing indeterministic Frankfurt counterexamples is therefore incorrect for methodological and logical reasons. If they are formulated within a deterministic framework, Frankfurt counterexamples face another charge of argumentative impropriety, presented in the Kane-Ginet-Widerker Dilemma Defence: that of begging the question. We inspect and qualify this charge, but conclude that it holds, since deterministic Frankfurt counterexamples ultimately presuppose an analysis of counterfactual agents in deterministic worlds and of the relation "to render inevitable" that neither leeway incompatibilists, nor source incompatibilists, nor semicompatibilists can endorse. Consequently, Frankfurt counterexamples can no longer support the form of compatibilism to which they gave rise. Source incompatibilism can no longer be preferred to leeway incompatibilism, nor can it reject any role for alternative possibilities in the explanation of moral responsibility on this basis alone.
Abstract:
The location of the seaward tip of a subduction thrust controls material transfer at convergent plate margins, and hence global mass balances. At approximately half of those margins, the material of the subducting plate is completely underthrust so that no accretion or even subduction erosion takes place. Along the remaining margins, material is scraped off the subducting plate and added to the upper plate by frontal accretion. We here examine the physical properties of subducting sediments off Costa Rica and Nankai, type examples for an erosional and an accretionary margin, to investigate which parameters control the level at which the frontal thrust cuts into the incoming sediment pile. A series of rotary-shear experiments was carried out to measure the frictional strength of the various lithologies entering the two subduction zones. Results include the following findings: (1) At Costa Rica, clay-rich strata at the top of the incoming succession have the lowest strength (µres = 0.19), while the underlying calcareous ooze, chalk and diatomite are strong (up to µres = 0.43; µpeak = 0.56). Hence the entire sediment package is underthrust. (2) Off Japan, clay-rich deposits within the lower Shikoku Basin inventory are weakest (µres = 0.13-0.19) and favour migration of the frontal proto-thrust into one particular horizon between sandy, competent turbidites below and ash-bearing mud above. (3) Taking in situ data and earlier geotechnical testing into account, it is suggested that mineralogical composition rather than pore pressure defines the position of the frontal thrust, which localizes in the weakest, clay-mineral-rich (up to 85 wt.%) materials. (4) Smectite, the dominant clay mineral phase at either margin, shows rate strengthening and stable sliding in the frontal 50 km of the subduction thrust (0.0001-0.1 mm/s, 0.5-25 MPa effective normal stress). (5) Progressive illitization of smectite cannot explain seismogenesis, because illite-rich samples also show velocity strengthening at the conditions tested.
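The abstract does not state the corresponding shear resistances explicitly; as a rough illustration, assuming a simple Coulomb relation tau = µres * sigma_n' with negligible cohesion, the reported residual friction coefficients translate into shear strengths over the tested effective normal stress range as sketched below in Python (the mid-range value of 0.16 used for the Nankai clays is our choice).

    import numpy as np

    # Rough sketch: shear resistance tau = mu_res * sigma_n_eff (Coulomb friction,
    # cohesion assumed negligible), using residual friction coefficients from the abstract.
    mu_res = {
        "Costa Rica, clay-rich strata": 0.19,
        "Costa Rica, calcareous ooze/chalk/diatomite": 0.43,
        "Nankai, lower Shikoku clay-rich deposits": 0.16,  # mid value of the reported 0.13-0.19
    }
    sigma_n_eff = np.linspace(0.5, 25.0, 5)  # MPa, tested effective normal stress range

    for lithology, mu in mu_res.items():
        tau = mu * sigma_n_eff  # MPa
        print(lithology, np.round(tau, 2))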
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The goals of this article are to (1) provide further validation of the Glycam06 force field, specifically for its use in implicit solvent molecular dynamics (MD) simulations, and (2) present the extension of G.N. Ramachandran's idea of plotting amino acid phi and psi angles to the glycosidic phi, psi, and omega angles formed between carbohydrates. As in traditional Ramachandran plots, these carbohydrate Ramachandran-type (carb-Rama) plots reveal the coupling between the glycosidic angles by displaying the allowed and disallowed conformational space. Considering two-bond glycosidic linkages, there are 18 possible conformational regions that can be defined by (α, ϕ, ψ) and (β, ϕ, ψ), whereas for three-bond linkages, there are 54 possible regions that can be defined by (α, ϕ, ψ, ω) and (β, ϕ, ψ, ω). Illustrating these ideas are molecular dynamics simulations of an implicitly hydrated oligosaccharide (700 ns) and its eight constituent disaccharides (50 ns/disaccharide). For each linkage, we compare and contrast the oligosaccharide and respective disaccharide carb-Rama plots, validate the simulations and the Glycam06 force field through comparison to experimental data, and discuss the general trends observed in the plots.
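One way to arrive at the region counts of 18 and 54 is to combine the two anomeric configurations with three broad conformational wells per glycosidic torsion angle; the three-wells-per-angle assumption is ours and is not stated in the abstract:

    \underbrace{2}_{\alpha/\beta} \times \underbrace{3}_{\phi} \times \underbrace{3}_{\psi} = 18,
    \qquad
    \underbrace{2}_{\alpha/\beta} \times \underbrace{3}_{\phi} \times \underbrace{3}_{\psi} \times \underbrace{3}_{\omega} = 54.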
Abstract:
A new mesh adaptivity algorithm that combines a posteriori error estimation with a bubble-type local mesh generation (BLMG) strategy for elliptic differential equations is proposed. The size function used in the BLMG is defined on each vertex during the adaptive process, based on the obtained error estimator. In order to avoid excessive coarsening and refining in each iterative step, two factor thresholds are introduced in the size function. The advantages of the BLMG-based adaptive finite element method, compared with other known methods, are as follows: refining and coarsening are handled smoothly within the same framework; the local a posteriori error estimation is easy to implement through the adjacency list of the BLMG method; and at all levels of refinement, the updated triangles remain very well shaped, even if the mesh size at any particular refinement level varies by several orders of magnitude. Several numerical examples with singularities for elliptic problems, where explicit error estimators are used, verify the efficiency of the algorithm. The analysis of the parameters introduced in the size function shows that the algorithm has good flexibility.
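The abstract does not spell out the form of the size function; the following sketch (Python, with all names and the square-root scaling chosen by us) merely illustrates a vertex-wise size update driven by an error estimator and limited by two factor thresholds so that neither coarsening nor refinement becomes excessive in a single step.

    import numpy as np

    def update_size_function(h, eta, eta_target, f_min=0.5, f_max=2.0):
        """Vertex-wise mesh size update driven by an a posteriori error estimator.

        h          : current mesh size at each vertex
        eta        : error estimator averaged onto each vertex
        eta_target : desired error level per vertex
        f_min/f_max: factor thresholds bounding coarsening/refinement per step
        """
        factor = np.sqrt(eta_target / np.maximum(eta, 1e-14))  # assumed scaling law
        factor = np.clip(factor, f_min, f_max)                 # the two factor thresholds
        return h * factor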
Abstract:
Valency Realization in Short Excerpts of News Text: A Pragmatics-Founded Analysis. This dissertation is a study of so-called pragmatic valency. The aim of the study is to examine the phenomenon both theoretically, by discussing the research literature, and empirically, based on evidence from a text corpus consisting of 218 short excerpts of news text from the German newspaper Frankfurter Allgemeine Zeitung. In the theoretical part of the study, the central concepts of valency and pragmatic valency are discussed. In the research literature, valency denotes the relation between the verb and its obligatory and optional complements. Pragmatic valency can be defined as modification of the so-called system valency in the parole, including non-realization of an obligatory complement, non-realization of an optional complement and realization of an optional complement. Furthermore, the investigation of pragmatic valency includes the role of the adjuncts, elements that are not defined by the valency, in the concrete valency realization. The corpus study investigates the valency behaviour of German verbs in a corpus of about 1500 sentences, combining the methodology and concepts of valency theory, semantics and text linguistics. The analysis focuses on the roughly 600 sentences that show deviations from the system valency, providing over 800 examples of the modification of the system valency as codified in the (valency) dictionaries. The study attempts to answer the following primary question: Why is the system valency modified in the parole? To answer the question, the concept of modification types is introduced. The modification types are recognized using distinctive feature bundles, in which each feature with a negative or a positive value refers to one reason for the modification treated in the research literature. For example, the features of irrelevance and relevance, focus, world and text type knowledge, text theme, theme-rheme structure and cohesive chains are applied. The valency approach appears in a new light when explored through corpus-based investigation; both the optionality of complements and the distinction between complements and adjuncts as defined in the present valency approach seem in some respects defective. Furthermore, the analysis indicates that the adjuncts outside the valency domain play a central role in the concrete realization of the valency. Finally, the study suggests a definition of pragmatic valency, based on the modification types introduced in the study and tested in the corpus analysis.
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolation in the present high-order elements, and the strain smoothing technique is used for integration of the governing equations based on smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolation and by the boundary nodes, are discussed through numerical examples. It is found that the basis function has a significant influence on the computational accuracy and on the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. This new type of high-order element shows good performance when quadratic basis functions are used in the mesh-free interpolation, and the present elements prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolation and obey the Weakened Weak (W2) formulation as introduced in [3, 5].
Abstract:
An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs) and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of the k.p formalism. It has been found, taking InAs and InSb as examples, that the EEM in QWs, ILs and superlattices increases with increasing concentration, light intensity and wavelength of the incident light waves, respectively, and that the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field, whereas in NIPI superlattices it is a function of the Fermi energy and of the subband index characterizing such 2D structures. The appearance of humps in the respective curves is due to the redistribution of the electrons among the quantized energy levels when the quantum numbers corresponding to the highest occupied level change from one fixed value to another. Although the EEM varies in various manners with all the variables, as evident from all the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper reduce to well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, thus confirming the compatibility test. The results of this paper find three applications in the field of microstructures. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper deals with an optimization-based method for the synthesis of adjustable planar four-bar, crank-rocker mechanisms. For multiple different desired paths to be traced by a point on the coupler, a two-stage method first determines the parameters of the possible driving dyads. The remaining mechanism parameters are then determined in the second stage, where a least-squares-based circle-fitting procedure is used. Compared to existing formulations, the optimization method uses fewer design variables. Two numerical examples demonstrate the effectiveness of the proposed synthesis method. (C) 2013 Elsevier Ltd. All rights reserved.
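The abstract does not specify which circle-fitting procedure is used; one common least-squares choice is the algebraic (Kasa) fit sketched below in Python, where the function name and test data are ours.

    import numpy as np

    def fit_circle_least_squares(x, y):
        """Algebraic (Kasa) least-squares circle fit to points (x, y).

        Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense,
        then recovers the centre and radius.
        """
        A = np.column_stack([x, y, np.ones_like(x)])
        rhs = -(x**2 + y**2)
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        cx, cy = -a / 2.0, -b / 2.0
        r = np.sqrt(cx**2 + cy**2 - c)
        return cx, cy, r

    # noisy points on a circle of radius 2 centred at (1, -1)
    t = np.linspace(0.0, 2.0 * np.pi, 50)
    x = 1.0 + 2.0 * np.cos(t) + 0.01 * np.random.randn(t.size)
    y = -1.0 + 2.0 * np.sin(t) + 0.01 * np.random.randn(t.size)
    print(fit_circle_least_squares(x, y))  # approximately (1.0, -1.0, 2.0)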
Abstract:
A nonlinear stochastic filtering scheme based on a Gaussian sum representation of the filtering density and an annealing-type iterative update, which is additive and uses an artificial diffusion parameter, is proposed. The additive nature of the update relieves the problem of weight collapse often encountered with filters employing weighted-particle-based empirical approximations to the filtering density. The proposed Monte Carlo filter bank conforms in structure to the parent nonlinear filtering (Kushner-Stratonovich) equation and possesses excellent mixing properties, enabling adequate exploration of the phase space of the state vector. The filter bank, presently assessed against a few carefully chosen numerical examples, provides ample evidence of remarkable performance in terms of filter convergence and estimation accuracy vis-a-vis most other competing filters, especially in higher-dimensional dynamic system identification problems, including cases that may demand estimating relatively minor variations in the parameter values from their reference states. (C) 2014 Elsevier Ltd. All rights reserved.
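For readers unfamiliar with the representation, a Gaussian sum filtering density has the form p(x) ≈ sum_i w_i N(x; m_i, P_i); the brief Python sketch below only evaluates such a mixture with placeholder parameters and does not reproduce the authors' annealing-type additive update.

    import numpy as np
    from scipy.stats import multivariate_normal

    # Placeholder weights, means and covariances for a two-component Gaussian sum.
    weights = np.array([0.6, 0.4])
    means = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
    covs = [np.eye(2), 0.5 * np.eye(2)]

    def filtering_density(x):
        """Evaluate p(x) ~= sum_i w_i N(x; m_i, P_i)."""
        return sum(w * multivariate_normal.pdf(x, mean=m, cov=P)
                   for w, m, P in zip(weights, means, covs))

    print(filtering_density(np.array([1.0, 0.5])))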
Abstract:
In the present paper, by use of the boundary integral equation method and the techniques of the Green fundamental solution and singularity analysis, the dynamic infinite-plane crack problem is investigated. For the first time, the problem is reduced to solving a system of mixed-type integral equations in the Laplace transform domain. The equations consist of ordinary boundary integral equations along the outer boundary and Cauchy singular integral equations along the crack line. The equations obtained are strictly proved to be equivalent to the dual integral equations obtained by Sih in the special case of the dynamic Griffith crack problem. The mixed-type integral equations can be solved by combining the numerical method for singular integral equations with the ordinary boundary element method. With further use of a numerical method for Laplace transform inversion, several typical examples are calculated and their dynamic stress intensity factors are obtained. The results show that the method proposed is successful and can be used to solve more complicated problems.
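The abstract does not name the numerical Laplace inversion scheme; one commonly used choice is the Gaver-Stehfest algorithm, sketched below in Python (the function names and the test transform are ours).

    import math

    def stehfest_coefficients(N):
        """Gaver-Stehfest weights V_k for an even number of terms N."""
        half = N // 2
        V = []
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, half) + 1):
                s += (j ** half * math.factorial(2 * j)
                      / (math.factorial(half - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
            V.append((-1) ** (k + half) * s)
        return V

    def invert_laplace(F, t, N=12):
        """Approximate f(t) from its Laplace transform F(s) via Gaver-Stehfest."""
        a = math.log(2.0) / t
        V = stehfest_coefficients(N)
        return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

    # check against a known pair: F(s) = 1/s**2  <->  f(t) = t
    print(invert_laplace(lambda s: 1.0 / s ** 2, 3.0))  # approximately 3.0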
Abstract:
A new numerical procedure is proposed to investigate cracking behaviors induced by the mismatch between the matrix phase and aggregates due to matrix shrinkage in cement-based composites. This kind of failure process is simplified in this investigation as a purely spontaneous mechanical problem; therefore, one main difficulty in simulating the phenomenon is that no explicit external load serves as the drive to propel the development of this physical process. As a result, it differs from classical mechanical problems and is hard to solve directly with the classical finite element method (FEM), a typical kind of "load -> medium -> response" procedure. As a solution, the actual mismatch deformation field is decomposed into two virtual fields, both of which can be obtained by the classical FEM. The actual response is then obtained by adding together the two virtual displacement fields based on the principle of superposition. Critical elements are then detected successively by the event-by-event technique. The micro-structure of the composites is implemented by employing the generalized beam (GB) lattice model. Numerical examples are given to show the effectiveness of the method, and detailed discussions are conducted on the influences of material properties.
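As a schematic illustration of the superposition step only (the stiffness matrix and the two virtual load vectors below are placeholders, not the GB lattice model), the same linear system is solved for the two virtual fields and the displacements are added:

    import numpy as np

    # Placeholder linear system: the same stiffness matrix K is solved for two
    # virtual load cases; the actual response is the sum of the two displacement fields.
    K = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    f_virtual_1 = np.array([1.0, 0.0, 0.0])  # e.g. equivalent shrinkage loads on the matrix
    f_virtual_2 = np.array([0.0, 0.5, 0.0])  # e.g. restraining forces from the aggregates

    u1 = np.linalg.solve(K, f_virtual_1)
    u2 = np.linalg.solve(K, f_virtual_2)
    u_actual = u1 + u2                       # principle of superposition for linear systems
    print(u_actual)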
Abstract:
The aim of this paper is to present a fixed point result for mappings satisfying a generalized rational contractive condition in the setting of multiplicative metric spaces. As an application, we obtain a common fixed point of a pair of weakly compatible mappings. Some common fixed point results for pairs of rational contractive-type mappings involved in a cocyclic representation of a nonempty subset of a multiplicative metric space are also obtained. Some examples are presented to support the results proved herein. Our results generalize and extend various results in the existing literature.
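For reference, a multiplicative metric space is commonly defined in the literature (this definition is supplied by us, not stated in the abstract) as a set X with a map d : X × X → [1, ∞) satisfying

    d(x, y) \ge 1 \ \text{with}\ d(x, y) = 1 \iff x = y, \qquad
    d(x, y) = d(y, x), \qquad
    d(x, z) \le d(x, y) \cdot d(y, z).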
Abstract:
This paper is concerned with the role of information in the servitization of manufacturing, which has led to “the innovation of an organisation’s capabilities and processes as equipment manufacturers seek to offer services around their products” (Neely 2009, Baines et al 2009). This evolution has resulted in an information requirement (IR) shift as companies move from the discrete provision of equipment and spare parts to long-term service contracts guaranteeing prescribed performance levels. Organisations providing such services depend on a very high level of availability and quality of information throughout the service life-cycle (Menor et al 2002). This work focuses on whether, for a proposed contract based around complex equipment, the information system is capable of providing information of acceptable quality; this requires the IRs to be examined in a formal manner. We apply a service information framework (Cuthbert et al 2008, McFarlane & Cuthbert 2012) to methodically assess IRs for different contract types and so understand the information gap between them. Results from case examples indicate that this gap includes information required for the different contract types as well as a set of contract-specific IRs. Furthermore, the control, ownership and use of information differ across contract types as the boundary of operation and responsibility changes.
Abstract:
Type-omega DPLs (Denotational Proof Languages) are languages for proof presentation and search that offer strong soundness guarantees. LCF-type systems such as HOL offer similar guarantees, but their soundness relies heavily on static type systems. By contrast, DPLs ensure soundness dynamically, through their evaluation semantics; no type system is necessary. This is possible owing to a novel two-tier syntax that separates deductions from computations, and to the abstraction of assumption bases, which is factored into the semantics of the language and allows for sound evaluation. Every type-omega DPL properly contains a type-alpha DPL, which can be used to present proofs in a lucid and detailed form, exclusively in terms of primitive inference rules. Derived inference rules are expressed as user-defined methods, which are "proof recipes" that take arguments and dynamically perform appropriate deductions. Methods arise naturally via parametric abstraction over type-alpha proofs. In that light, the evaluation of a method call can be viewed as a computation that carries out a type-alpha deduction. The type-alpha proof "unwound" by such a method call is called the "certificate" of the call. Certificates can be checked by exceptionally simple type-alpha interpreters, and thus they are useful whenever we wish to minimize our trusted base. Methods are statically closed over lexical environments, but dynamically scoped over assumption bases. They can take other methods as arguments, they can iterate, and they can branch conditionally. These capabilities, in tandem with the bifurcated syntax of type-omega DPLs and their dynamic assumption-base semantics, allow the user to define methods in a style that is disciplined enough to ensure soundness yet fluid enough to permit succinct and perspicuous expression of arbitrarily sophisticated derived inference rules. We demonstrate every major feature of type-omega DPLs by defining and studying NDL-omega, a higher-order, lexically scoped, call-by-value type-omega DPL for classical zero-order natural deduction---a simple choice that allows us to focus on type-omega syntax and semantics rather than on the subtleties of the underlying logic. We start by illustrating how type-alpha DPLs naturally lead to type-omega DPLs by way of abstraction; present the formal syntax and semantics of NDL-omega; prove several results about it, including soundness; give numerous examples of methods; point out connections to the lambda-phi calculus, a very general framework for type-omega DPLs; introduce a notion of computational and deductive cost; define several instrumented interpreters for computing such costs and for generating certificates; explore the use of type-omega DPLs as general programming languages; show that DPLs do not have to be type-less by formulating a static Hindley-Milner polymorphic type system for NDL-omega; discuss some idiosyncrasies of type-omega DPLs such as the potential divergence of proof checking; and compare type-omega DPLs to other approaches to proof presentation and discovery. Finally, a complete implementation of NDL-omega in SML-NJ is given for users who want to run the examples and experiment with the language.