857 results for Implicit-implicit-implicit intersection


Relevance:

30.00%

Publisher:

Abstract:

This thesis is divided into three chapters. The first explains how to use the level-set method rigorously to simulate forest fires, using Richards' ellipse model as the physical model for fire spread. The second presents a new semi-implicit scheme, together with a proof of convergence, for the solution of an anisotropic Hamilton-Jacobi equation. The main advantage of this method is that it allows solutions of "nearby" problems to be reused to speed up the computation. Another application of this scheme is homogenization. The third chapter shows how the numerical methods of the first two chapters can be used, with the help of homogenization theory, to study the influence of small-scale variations in the wind speed on the spread of a forest fire.
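As a rough illustration of the level-set idea discussed above, the following sketch (my own simplified, isotropic Python toy, not the thesis code or Richards' ellipse model) propagates a circular front with a first-order Rouy-Tourin upwind scheme; the thesis instead uses an anisotropic speed and a semi-implicit scheme for the resulting Hamilton-Jacobi equation.

```python
import numpy as np

# Isotropic toy version of level-set front propagation: solve
# phi_t + F |grad phi| = 0 with constant normal speed F using a first-order
# Rouy-Tourin upwind scheme (explicit in time, CFL-limited step).

n = 200
h, F, dt = 1.0 / n, 1.0, 0.5 / n                 # spacing, speed, CFL time step
x = h * np.arange(n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.05   # signed distance to front

def upwind_grad_norm(phi):
    # Rouy-Tourin upwind approximation of |grad phi|, valid for F >= 0
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    gx = np.maximum(np.maximum(dxm, 0.0) ** 2, np.minimum(dxp, 0.0) ** 2)
    gy = np.maximum(np.maximum(dym, 0.0) ** 2, np.minimum(dyp, 0.0) ** 2)
    return np.sqrt(gx + gy)

steps = 100
for _ in range(steps):
    phi = phi - dt * F * upwind_grad_norm(phi)   # explicit time step

# the circular front should grow from radius 0.05 to about 0.05 + F*steps*dt
area = (phi < 0).sum() * h * h
print("measured radius:", np.sqrt(area / np.pi), "expected:", 0.05 + F * steps * dt)
```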

Relevance:

30.00%

Publisher:

Abstract:

The present study carries the fragmented traditions of Trieste's literary culture into the contemporary concerns of world literature, at a time when globalization is widely perceived as the predominant historical paradigm of modernity. What I call "globalized literature" refers to the recasting of Weltliteratur, envisioned by Goethe and translated as "world literature" or "littérature universelle", by discourses on world culture and post-nationalism. However, when literary studies take up the questions of "globalized literature", they face a problem: the passage of the universal idea inherent in Goethe's paradigm between the Scylla of a relativistic, Western internationalism and the Charybdis of an atopic, dehumanized globalism. Scholars of world literature who lean toward the former position acquire an institutional foundation by working with the implicit assumption that nations are founded on national languages, which underwrites the relation between world literature and national literatures. The universality of this implicit assumption is refuted by Triestine writing. In this study, I argue that early twentieth-century Triestine writing acts as a precursor to reflection on the globalized literary culture of the twenty-first century. It has its own economy of meaning, so that it does not fit into literary nationalisms, but neither does it fall into atopic globalism. It is not categorically opposed to national literature, but it does not allow national traditions to take root. Triestine writers expressed the desire for a sense of unity and belonging, as well as for a critical consciousness that dissolves this desire. They resisted the idealization of these particularisms and never succeeded in bringing their writings to coalesce into a unified literary tradition. As a consequence, Trieste has often been regarded as a non-place and its literature as an anti-literature. By circumventing the territorial imperatives of the Italian national tradition, as illustrated by the case of Italo Svevo, Triestine writing was later included within the literary and cultural parameters of Mitteleuropa, where its expression was imagined as a microcosm of the supranational plurality of the former Habsburg Empire. Yet Trieste's projected macrocosm is not a unified image, as a globe would be; it is rather a planetary nebula, in Svevo's image, in which no universalizing idealization can be realized. This study interrogates the image of the city as a microcosm and as a non-place, as it relates to the macrocosm of the atopias of globalization, in order to demonstrate that Triestine writing is globalized literature avant la lettre. The unresolved dialectic between making and unmaking literary language and identity through writing animates Trieste's literary culture, and its dynamism contributes to debates on globalization and the questions of culture that follow from it. This study of Triestine writing offers critical perspectives on the state of canonical literatures in a world where borders disappear and non-places multiply. The image of the planetary nebula thus possibly becomes an archetype for today's globalized world.

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces and examines the logicist construction of Peano Arithmetic that can be carried out within Leśniewski's logical calculus of names, called Ontology. Against neo-Fregeans, it is argued that a logicist program cannot be based on implicit definitions of the mathematical concepts. Using only explicit definitions, the construction presented here constitutes a real reduction of arithmetic to Leśniewski's logic with the addition of an axiom of infinity. I argue, however, that such a program is not reductionist, for it only provides what I will call a picture of arithmetic, that is to say, a specific interpretation of arithmetic in which purely logical entities play the role of natural numbers. The reduction does not show that arithmetic is simply a part of logic. The process is not of ontological significance, for numbers are not shown to be logical entities. This neo-logicist program nevertheless shows the existence of a purely analytical route to knowledge of the arithmetical laws.

Relevance:

30.00%

Publisher:

Abstract:

In the present thesis we have formulated the Dalgarno-Lewis procedure for two- and three-photon processes, and elegant alternative expressions are derived. Starting from a brief review of various multiphoton processes, we discuss the difficulties arising in their perturbative treatment. A short discussion of the various available methods for studying multiphoton processes is presented in chapter 2. These theoretical treatments mainly concentrate on the evaluation of the higher-order matrix elements appearing in perturbation theory. In chapter 3 we describe the Dalgarno-Lewis procedure and its implementation for second-order matrix elements. Analytical expressions for the two-photon transition amplitude, the two-photon ionization cross section, the dipole dynamic polarizability and the Kramers-Heisenberg formula are obtained in a unified manner. The fourth chapter is an extension of the implicit summation technique presented in chapter 3. We clearly state the advantage of our method, especially the analytic continuation of the relevant expressions suited to various values of the radiation frequency, which is also used for efficient numerical analysis. A possible extension of the work is to study various multiphoton processes from the Stark-shifted first excited states of the hydrogen atom. The procedure can also be extended to multiphoton processes in alkali atoms as well as Rydberg atoms. Alternatively, instead of deriving analytical expressions, one can attempt a complete numerical evaluation of the higher-order matrix elements using this procedure.
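To illustrate the implicit summation idea in its standard textbook form (this is my own finite-difference toy, not anything taken from the thesis), the sketch below computes one of the second-order quantities mentioned above, the dynamic dipole polarizability of hydrogen 1s: instead of summing over intermediate states, the Dalgarno-Lewis equation for the l = 1 channel is solved on a radial grid and a single quadrature gives the answer. The exact static value, 4.5 atomic units, serves as a check.

```python
import numpy as np
from scipy.linalg import solve_banded

# Dalgarno-Lewis "implicit summation" for the dynamic dipole polarizability
# of hydrogen 1s (atomic units): solve (H_{l=1} - E0 -/+ w) u = r^2 R_10 / sqrt(3)
# for both frequency signs and integrate, instead of summing over states.

def polarizability(omega, rmax=40.0, n=4000):
    h = rmax / n
    r = h * np.arange(1, n + 1)            # radial grid (u vanishes at 0 and rmax)
    R10 = 2.0 * np.exp(-r)                 # hydrogen 1s radial function
    E0 = -0.5
    rhs = r**2 * R10 / np.sqrt(3.0)        # projection of z*psi0 onto l = 1

    alpha = 0.0
    for sign in (+1.0, -1.0):
        # -1/2 u'' + [l(l+1)/(2 r^2) - 1/r - E0 - sign*omega] u = rhs,  l = 1
        diag = 1.0 / h**2 + 1.0 / r**2 - 1.0 / r - E0 - sign * omega
        off = np.full(n - 1, -0.5 / h**2)
        ab = np.zeros((3, n))
        ab[0, 1:] = off                    # superdiagonal
        ab[1] = diag                       # diagonal
        ab[2, :-1] = off                   # subdiagonal
        u = solve_banded((1, 1), ab, rhs)
        alpha += np.sum(R10 * u * r**2) * h / np.sqrt(3.0)
    return alpha

print(polarizability(0.0))   # ~4.5 a.u. (exact static value)
print(polarizability(0.2))   # below the Lyman resonance at 0.375 a.u.
```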

Relevance:

30.00%

Publisher:

Abstract:

The difficulties arising in the calculation of the nuclear curvature energy are analyzed in detail, especially with reference to relativistic models. It is underlined that the implicit dependence of the quantal wave functions on curvature is directly accessible only in a semiclassical framework. It is shown that, in relativistic models as well, quantal and semiclassical calculations of the curvature energy are in good agreement.

Relevance:

30.00%

Publisher:

Abstract:

In a sigma-delta analog-to-digital (A/D) converter, the most computationally intensive block is the decimation filter, and its hardware implementation may require millions of transistors. Since these converters are now targeted at portable applications, a hardware-efficient design is an implicit requirement. To this end, this paper presents a computationally efficient polyphase implementation of non-recursive cascaded integrator-comb (CIC) decimators for Sigma-Delta Converters (SDCs). The SDCs operate at high oversampling frequencies and hence require large sampling rate conversions. The filtering and rate reduction are performed in several stages to reduce hardware complexity and power dissipation. CIC filters are widely adopted as the first stage of decimation due to their multiplier-free structure. In this research, the performance of the polyphase structure is compared with CICs using recursive and non-recursive algorithms in terms of power, speed and area. The polyphase implementation offers high-speed operation and low power consumption. The polyphase implementation of a 4th-order CIC filter with a decimation factor of 64 and an input word length of 4 bits offers about 70% and 37% power savings compared to the corresponding recursive and non-recursive implementations respectively. The same polyphase CIC filter can operate about 7 times faster than the recursive and about 3.7 times faster than the non-recursive CIC filters.
As most sigma-delta ADC applications require decimation filters with linear phase characteristics, symmetric Finite Impulse Response (FIR) filters are widely used for implementation. However, the number of FIR filter coefficients becomes quite large when implementing a narrow-band decimation filter. Implementing the decimation filter in several stages reduces the total number of filter coefficients, and hence reduces the hardware complexity and power consumption [2]. The first stage of the decimation filter can be implemented very efficiently using a cascade of integrators and comb filters, which requires neither multiplication nor coefficient storage. The remaining filtering is performed either in a single stage or in two stages with more complex FIR or infinite impulse response (IIR) filters, according to the requirements. The amount of passband aliasing or imaging error can be brought within prescribed bounds by increasing the number of stages in the CIC filter. The width of the passband and the frequency characteristics outside the passband are severely limited, so CIC filters are used to make the transition between high and low sampling rates, and conventional filters operating at the low sampling rate are used to attain the required transition bandwidth and stopband attenuation. Several papers in the literature deal with different implementations of decimation filter architectures for sigma-delta ADCs; Hogenauer, for example, described design procedures for CIC decimation and interpolation filters.
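As a rough sketch of the equivalence between the recursive (Hogenauer) and non-recursive views of a CIC decimator that the polyphase implementation builds on, the following Python fragment (illustrative only, with parameters chosen to match the abstract: order 4, decimation factor 64, 1-bit input) computes both and checks that they produce identical outputs.

```python
import numpy as np

def cic_recursive(x, R=64, N=4):
    # classic Hogenauer structure: N integrators at the input rate,
    # decimation by R, then N comb (differencing) stages at the output rate
    y = x.astype(np.int64)                   # exact integer arithmetic
    for _ in range(N):
        y = np.cumsum(y)                     # integrator stage
    y = y[R - 1::R]                          # downsample by R
    for _ in range(N):
        y = np.diff(y, prepend=0)            # comb stage, differential delay 1
    return y

def cic_nonrecursive(x, R=64, N=4):
    # equivalent non-recursive (FIR) view: H(z) = (1 + z^-1 + ... + z^-(R-1))^N,
    # i.e. N-fold convolution with a length-R boxcar, followed by decimation;
    # for R a power of two this factors into the polyphase-friendly stages
    # (1 + z^-1)^N (1 + z^-2)^N (1 + z^-4)^N ...
    h = np.ones(1, dtype=np.int64)
    box = np.ones(R, dtype=np.int64)
    for _ in range(N):
        h = np.convolve(h, box)
    y = np.convolve(x.astype(np.int64), h)[:len(x)]
    return y[R - 1::R]

x = np.random.randint(0, 2, 4096)            # 1-bit sigma-delta style input
print(np.array_equal(cic_recursive(x), cic_nonrecursive(x)))   # True
```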

Relevance:

30.00%

Publisher:

Abstract:

This paper gives an overview of known spatial clustering algorithms. The space of interest can be the two-dimensional abstraction of the surface of the earth, a man-made space like the layout of a VLSI design, a volume containing a model of the human brain, or another 3D space representing the arrangement of chains of protein molecules. The data consist of geometric information and can be either discrete or continuous. The explicit location and extension of spatial objects define implicit relations of spatial neighborhood (such as topological, distance and direction relations) which are used by spatial data mining algorithms; such algorithms are therefore required for spatial characterization and spatial trend analysis. Spatial data mining, or knowledge discovery in spatial databases, differs from regular data mining in ways analogous to the differences between spatial and non-spatial data. The attributes of a spatial object stored in a database may be affected by the attributes of that object's spatial neighbors. In addition, spatial location, and implicit information about the location of an object, may be exactly the information that can be extracted through spatial data mining.
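As a small illustration (with made-up points, using SciPy's k-d tree rather than any particular algorithm from the survey) of how explicit locations induce the implicit distance-based neighborhood relations mentioned above:

```python
import numpy as np
from scipy.spatial import cKDTree

# Explicit 2-D locations of spatial objects; the neighborhood relation
# "within distance eps of each other" is implicit and must be derived.
points = np.random.rand(200, 2)
tree = cKDTree(points)

eps = 0.1
pairs = tree.query_pairs(eps)            # set of (i, j) neighbor pairs
print(len(pairs), "neighborhood relations derived from", len(points), "objects")
```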

Relevance:

30.00%

Publisher:

Abstract:

We consider a first-order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R^3. Using energy estimates we can prove optimal convergence properties in the Sobolev spaces H^m(G) (m = 0, 1, 2) uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. For the solution of the resulting Stokes resolvent boundary value problems we use a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter can be determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations a boundary element method of collocation type is used. Some simulations of a model problem are carried out and illustrate the efficiency of the method.
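For readers unfamiliar with the time discretization being analyzed, here is a minimal sketch (a 1-D heat equation stand-in, not the Stokes system or the boundary element solver of the paper) of the first-order implicit Euler scheme: each time step amounts to solving a resolvent-type problem, which is the analogue of the Stokes resolvent boundary value problems treated in the paper by layer potentials and boundary integral equations.

```python
import numpy as np

# Backward (implicit) Euler for u' = A u with A a discrete Dirichlet Laplacian:
# every step solves the resolvent problem (I - k A) u_new = u_old.

n, k, steps = 100, 1e-3, 100                  # grid size, time step, step count
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2    # Dirichlet Laplacian

u = np.sin(np.pi * x)                          # initial data
M = np.eye(n) - k * A                          # resolvent operator I - kA
for _ in range(steps):
    u = np.linalg.solve(M, u)                  # one implicit Euler step

# exact solution of u_t = u_xx at T = steps*k is exp(-pi^2 T) sin(pi x);
# the printed maximal error is O(k) in time plus an O(h^2) spatial part
T = steps * k
print(np.max(np.abs(u - np.exp(-np.pi**2 * T) * np.sin(np.pi * x))))
```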

Relevance:

30.00%

Publisher:

Abstract:

We consider numerical methods for the compressible time-dependent Navier-Stokes equations, discussing the spatial discretization by Finite Volume and Discontinuous Galerkin methods, the time integration by time-adaptive implicit Runge-Kutta and Rosenbrock methods, and the solution of the arising nonlinear and linear systems of equations by preconditioned Jacobian-Free Newton-Krylov as well as Multigrid methods. As applications, thermal fluid-structure interaction and other unsteady flow problems are considered. The text is aimed at both mathematicians and engineers.
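As a sketch of the Jacobian-free Newton-Krylov idea mentioned above (a toy residual in Python, not a Navier-Stokes discretization, and without the preconditioning the text refers to): the Jacobian is never assembled; its action on a vector is approximated by a finite difference of the residual, and each Newton correction is obtained with GMRES.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):                                  # hypothetical nonlinear residual
    return u**3 + u - 1.0

def newton_krylov(u, tol=1e-10, eps=1e-7):
    for _ in range(50):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        # matrix-free Jacobian-vector product: J v ~ (F(u + eps v) - F(u)) / eps
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r)              # Krylov solve of the Newton system
        u = u + du
    return u

print(newton_krylov(np.zeros(5)))          # each entry -> real root of x^3 + x - 1
```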

Relevance:

30.00%

Publisher:

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses are inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; ontologies, in turn, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies of combining both have motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords, and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; here we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights inform the final task, namely the creation of concept hierarchies, for which generality-based algorithms exhibit advantages compared to clustering approaches.
In order to complement the identification of suitable methods to capture semantic structures, we next analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results, which suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that individual decisions should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies to enhance both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
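To make the first step, measuring semantic relatedness among keywords, concrete, here is a toy sketch (invented tags and counts; cosine similarity over co-occurrence vectors is just one of several relatedness measures such studies consider):

```python
import numpy as np

# Tags are represented by their resource co-occurrence vectors and compared
# by cosine similarity, so tags used on similar resources come out as related.
tags = ["python", "programming", "snake", "reptile"]
# rows = tags, columns = (hypothetical) resources; entry = how often the tag
# was assigned to the resource
X = np.array([[5, 3, 0, 0, 1],
              [4, 2, 0, 0, 0],
              [0, 0, 3, 4, 1],
              [0, 0, 2, 5, 0]], dtype=float)

norms = np.linalg.norm(X, axis=1, keepdims=True)
S = (X / norms) @ (X / norms).T           # cosine similarity matrix

for i in range(len(tags)):
    j = np.argsort(S[i])[-2]              # most related tag other than itself
    print(f"{tags[i]:12s} -> {tags[j]}")
```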

Relevance:

30.00%

Publisher:

Abstract:

In the theory of the Navier-Stokes equations, the proofs of some basic known results, like for example the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually only use a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be non-stable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first order convergence of the implicit Euler scheme for the Navier-Stokes equations. For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
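As a concrete reminder of the convergence orders mentioned above, the following sketch (a scalar model problem, nothing to do with SST spaces or the Navier-Stokes system) observes first-order convergence for the implicit Euler scheme (theta = 1) and second-order convergence for the Crank-Nicolson scheme (theta = 1/2):

```python
import numpy as np

# Theta scheme for the model problem u' = -u, u(0) = 1 on [0, 1]:
# (u_new - u)/k = -(theta*u_new + (1-theta)*u).
def theta_scheme(theta, k):
    u, t = 1.0, 0.0
    while t < 1.0 - 1e-12:
        u = u * (1.0 - (1.0 - theta) * k) / (1.0 + theta * k)
        t += k
    return u

exact = np.exp(-1.0)
for theta, name in [(1.0, "implicit Euler"), (0.5, "Crank-Nicolson")]:
    errs = [abs(theta_scheme(theta, k) - exact) for k in (0.1, 0.05, 0.025)]
    rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
    print(f"{name:15s} observed orders: {rates}")   # ~1 and ~2 respectively
```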

Relevance:

30.00%

Publisher:

Abstract:

The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative solution is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles, over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.

Relevance:

30.00%

Publisher:

Abstract:

As exploration of our solar system and outer space moves into the future, spacecraft are being developed to venture on increasingly challenging missions with bold objectives. The spacecraft tasked with completing these missions are becoming progressively more complex, which increases the potential for mission failure due to hardware malfunctions and unexpected spacecraft behavior. A solution to this problem lies in the development of an advanced fault management system. Fault management enables a spacecraft to respond to failures and take repair actions so that it may continue its mission. The two main approaches developed for spacecraft fault management have been rule-based and model-based systems. Rules map sensor information to system behaviors, thus achieving fast response times and making the actions of the fault management system explicit. These rules are developed by having a human reason through the interactions between spacecraft components, a process limited by the number of interactions a human can reason about correctly. In the model-based approach, the human provides component models, and the fault management system reasons automatically about system-wide interactions and complex fault combinations. This approach improves correctness and makes the underlying system models explicit, whereas they are implicit in the rule-based approach. We propose a fault detection engine, Compiled Mode Estimation (CME), that unifies the strengths of the rule-based and model-based approaches. CME uses a compiled model to determine spacecraft behavior more accurately. Reasoning related to fault detection is compiled off-line into a set of concurrent, localized diagnostic rules, which are then combined on-line with sensor information to reconstruct the diagnosis of the system. These rules enable a human to inspect the diagnostic consequences of CME. Additionally, CME is capable of reasoning through component interactions automatically while still providing fast and correct responses. The implementation of this engine has been tested against the NEAR spacecraft's advanced rule-based system, resulting in the detection of failures beyond those caught by the rules. This evolution in fault detection will enable future missions to explore the furthest reaches of the solar system without the burden of human intervention to repair failed components.
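To illustrate the rule-based side of the comparison (hypothetical components and thresholds, far simpler than any real fault protection system), a rule simply maps a pattern of sensor readings to a candidate diagnosis, which is why the approach is fast and its actions are explicit:

```python
# Toy rule base: each rule pairs a predicate over the sensor dictionary with
# the diagnosis it implies; diagnosis is a linear scan over the rules.
rules = [
    ("thruster stuck closed",
     lambda s: s["valve_cmd"] == "open" and s["thrust"] < 0.1,
     {"thruster_valve": "stuck_closed"}),
    ("battery undervoltage",
     lambda s: s["bus_voltage"] < 24.0,
     {"battery": "degraded"}),
]

def diagnose(sensors):
    return [diag for _, fires, diag in rules if fires(sensors)]

print(diagnose({"valve_cmd": "open", "thrust": 0.0, "bus_voltage": 27.5}))
# -> [{'thruster_valve': 'stuck_closed'}]
```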

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class, here cars, is learned. Instead of pixel representations, which may be noisy and therefore fail to provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and an ROC curve that highlights the performance of our system.
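A minimal sketch of the described pipeline (synthetic 16x16 images and a hand-rolled one-level Haar transform standing in for the paper's multiscale wavelet features and real training data): images are mapped from pixel space to Haar responses capturing local, oriented intensity differences, and the resulting feature vectors train a support vector machine.

```python
import numpy as np
from sklearn.svm import SVC

def haar_features(img):
    # one-level 2-D Haar transform: local average plus horizontal, vertical
    # and diagonal difference (detail) coefficients over 2x2 blocks
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4          # local average
    lh = (a - b + c - d) / 4          # responds to vertical edges
    hl = (a + b - c - d) / 4          # responds to horizontal edges
    hh = (a - b - c + d) / 4          # responds to diagonal structure
    return np.concatenate([ll.ravel(), lh.ravel(), hl.ravel(), hh.ravel()])

rng = np.random.default_rng(0)
def sample(positive):
    # toy "positive" images contain a bright vertical bar; negatives are noise
    img = rng.normal(0, 0.3, (16, 16))
    if positive:
        img[:, 6:8] += 1.0
    return img

X = np.array([haar_features(sample(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```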