972 results for classical rational theory


Relevance:

80.00%

Publisher:

Abstract:

In the ocean, natural and artificial processes generate clouds of bubbles which scatter and attenuate sound. Measurements have shown that at the individual bubble resonance frequency, sound propagation in this medium is highly attenuated and dispersive. Theory to explain this behavior exists in the literature, and is adequate away from resonance. However, due to excessive attenuation near resonance, little experimental data exists for comparison. An impedance tube was developed specifically for exploring this regime. Using the instrument, unique phase speed and attenuation measurements were made for void fractions ranging from 6.2 × 10^−5 to 2.7 × 10^−3 and bubble sizes centered around 0.62 mm in radius. Improved measurement speed, accuracy and precision are possible with the new instrument, and both instantaneous and time-averaged measurements were obtained. Behavior at resonance was observed to be sensitive to the bubble population statistics and agreed with existing theory, within the uncertainty of the bubble population parameters. Scattering from acoustically compact bubble clouds can be predicted from classical scattering theory by using an effective medium description of the bubbly fluid interior. Experimental verification was previously obtained up to the lowest resonance frequency. A novel bubble production technique has been employed to obtain unique scattering measurements with a bubbly-liquid-filled latex tube in a large indoor tank. The effective scattering model described these measurements up to three times the lowest resonance frequency of the structure.
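
The effective-medium description mentioned above is commonly expressed as a dispersion relation for the complex wavenumber in the bubbly liquid. As a hedged illustration (the widely used Commander-Prosperetti form, not necessarily the exact model adopted in this work):

$$ k_m^2 \;=\; \frac{\omega^2}{c^2} \;+\; 4\pi\omega^2 \int_0^\infty \frac{a\, n(a)\, \mathrm{d}a}{\omega_0^2(a) - \omega^2 + 2\, i\, b\, \omega}, $$

where $c$ is the sound speed in the bubble-free liquid, $n(a)\,\mathrm{d}a$ the number of bubbles per unit volume with radius between $a$ and $a + \mathrm{d}a$, $\omega_0(a)$ the resonance frequency of a bubble of radius $a$, and $b$ the damping coefficient. The measured phase speed and attenuation then correspond to $\omega/\mathrm{Re}(k_m)$ and a quantity proportional to $\mathrm{Im}(k_m)$, respectively.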

Relevance:

80.00%

Publisher:

Abstract:

Large probabilistic graphs arise in various domains spanning from social networks to biological and communication networks. An important query in these graphs is the k nearest-neighbor query, which involves finding and reporting the k closest nodes to a specific node. This query assumes the existence of a measure of the "proximity" or the "distance" between any two nodes in the graph. To that end, we propose various novel distance functions that extend well known notions of classical graph theory, such as shortest paths and random walks. We argue that many meaningful distance functions are computationally intractable to compute exactly. Thus, in order to process nearest-neighbor queries, we resort to Monte Carlo sampling and exploit novel graph-transformation ideas and pruning opportunities. In our extensive experimental analysis, we explore the trade-offs of our approximation algorithms and demonstrate that they scale well on real-world probabilistic graphs with tens of millions of edges.
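
One concrete member of the family of distance functions described above is the expected or median shortest-path distance over sampled possible worlds of the probabilistic graph. Below is a minimal Monte Carlo sketch, assuming edges are given as (u, v, probability, weight) tuples and treating the graph as undirected; the graph-transformation and pruning ideas of the paper are not reproduced.

```python
import heapq
import random
from collections import defaultdict

def sample_world(edges):
    """Keep each edge independently with its existence probability."""
    return [(u, v, w) for (u, v, p, w) in edges if random.random() < p]

def dijkstra(adj, source):
    """Standard Dijkstra; returns a dict of shortest distances from source."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def knn_monte_carlo(edges, source, k, samples=200):
    """Rank nodes by the median of their shortest-path distance to source
    over the sampled worlds and return the k closest."""
    nodes = {u for u, v, p, w in edges} | {v for u, v, p, w in edges}
    per_node = defaultdict(list)
    for _ in range(samples):
        adj = defaultdict(list)
        for u, v, w in sample_world(edges):
            adj[u].append((v, w))
            adj[v].append((u, w))
        dist = dijkstra(adj, source)
        for node in nodes - {source}:
            per_node[node].append(dist.get(node, float("inf")))
    medians = {n: sorted(ds)[len(ds) // 2] for n, ds in per_node.items()}
    return sorted(medians, key=medians.get)[:k]
```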

Relevance:

80.00%

Publisher:

Abstract:

A common problem faced by fire safety engineers in the field of evacuation analysis concerns the optimal design of an arbitrarily complex structure in order to minimise evacuation times. How does the engineer determine the best solution? In this study we introduce numerical optimisation techniques to address this problem. The study makes use of the buildingEXODUS evacuation model coupled with classical optimisation theory, including Design of Experiments (DoE) and Response Surface Models (RSM). We demonstrate the technique using the relatively simple problem of determining the optimal location for a single exit in a square room.
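
As a hedged illustration of the DoE/RSM step only (the evacuation times themselves come from buildingEXODUS runs, which are not reproduced here), a quadratic response surface can be fitted to a handful of design points and then minimised; the exit positions and times below are hypothetical.

```python
import numpy as np

# Hypothetical design points: exit position x along one wall (m) and the
# corresponding evacuation time (s) returned by the evacuation model.
x = np.array([1.0, 2.5, 5.0, 7.5, 9.0])
t = np.array([82.0, 71.0, 64.0, 70.0, 80.0])

# Fit a quadratic response surface t(x) ~ b0 + b1*x + b2*x^2 by least squares.
A = np.vstack([np.ones_like(x), x, x**2]).T
b0, b1, b2 = np.linalg.lstsq(A, t, rcond=None)[0]

# The stationary point of the fitted quadratic is the candidate optimal exit location.
x_opt = -b1 / (2.0 * b2)
print(f"candidate optimal exit position: {x_opt:.2f} m")
```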

Relevance:

80.00%

Publisher:

Abstract:

By molecular dynamics (MD) simulations we study the crystallization process in a model system whose particles interact by a spherical pair potential with a narrow and deep attractive well adjacent to a hard repulsive core. The phase diagram of the model displays a solid-fluid equilibrium, with a metastable fluid-fluid separation. Our computations are restricted to fairly small systems (from 2592 to 10368 particles) and cover long simulation times, with constant energy trajectories extending up to 76 × 10^6 MD steps. By progressively reducing the system temperature below the solid-fluid line, we first observe the metastable fluid-fluid separation, occurring readily and almost reversibly upon crossing the corresponding line in the phase diagram. The nucleation of the crystal phase takes place when the system is in the two-fluid metastable region. Analysis of the temperature dependence of the nucleation time allows us to estimate directly the nucleation free energy barrier. The results are compared with the predictions of classical nucleation theory. The critical nucleus is identified, and its structure is found to be predominantly fcc. Following nucleation, the solid phase grows steadily across the system, incorporating a large number of localized and extended defects. We discuss the relaxation processes taking place both during and after the crystallization stage. The relevance of our simulation for the kinetics of protein crystallization under normal experimental conditions is discussed. © 2002 American Institute of Physics.
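
The barrier estimate mentioned above rests on the classical nucleation theory picture, in which the nucleation time is inversely proportional to the nucleation rate. In hedged summary form (our notation, not necessarily the paper's):

$$ J = J_0 \exp\!\left(-\frac{\Delta G^*}{k_B T}\right), \qquad \Delta G^* = \frac{16\pi\,\gamma^3}{3\,\rho_s^2\,|\Delta\mu|^2}, $$

so that, for a slowly varying prefactor, the slope of $\ln t_{\mathrm{nuc}}$ versus $1/(k_B T)$ yields $\Delta G^*$ directly; here $\gamma$ is the solid-fluid interfacial free energy, $\rho_s$ the number density of the solid, and $\Delta\mu$ the chemical-potential difference between the metastable fluid and the solid.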

Relevance:

80.00%

Publisher:

Abstract:

This paper presents an analytical model for the prediction of the elastic behaviour of plain-weave fabric composites. The fabric is a hybrid plain-weave with different materials and undulations in the warp and weft directions. The derivation of the effective material properties is based on classical laminate theory (CLT).

The theoretical predictions have been compared with experimental results and predictions using alternative models available in the literature. Composite laminates were manufactured using the resin infusion under flexible tooling (RIFT) process and tested under tension and in-plane shear loading to validate the model. A good correlation between theoretical and experimental results for the prediction of in-plane properties was obtained. The limitations of the existing theoretical models based on classical laminate theory (CLT) for predicting the out-of-plane mechanical properties are presented and discussed. 
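
A minimal sketch of the CLT step, idealising each woven layer as a [0/90] pair of unidirectional laminae with hypothetical properties; the hybrid-weave undulation treatment and the out-of-plane discussion of the paper are not reproduced, and bending-extension coupling is ignored.

```python
import numpy as np

def lamina_Q(E1, E2, nu12, G12):
    """Reduced stiffness matrix of a unidirectional lamina (plane stress)."""
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d,        nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d,        0.0],
                     [0.0,           0.0,           G12]])

def Qbar(Q, theta):
    """Transform Q to laminate axes for a ply at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[c * c,  s * s,  2 * c * s],
                  [s * s,  c * c, -2 * c * s],
                  [-c * s, c * s,  c * c - s * s]])
    R = np.diag([1.0, 1.0, 2.0])  # engineering-strain correction
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

# Hypothetical glass/epoxy lamina data (GPa) and ply thickness (mm)
Q = lamina_Q(E1=40.0, E2=10.0, nu12=0.3, G12=4.0)
plies = [(0.0, 0.25), (np.pi / 2, 0.25)]      # idealised warp/weft pair

A = sum(Qbar(Q, th) * t for th, t in plies)   # in-plane stiffness matrix
h = sum(t for _, t in plies)
Ex = (A[0, 0] - A[0, 1]**2 / A[1, 1]) / h     # effective in-plane modulus
print(f"effective Ex ~ {Ex:.1f} GPa")
```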

Relevance:

80.00%

Publisher:

Abstract:

We investigate the dynamics of localized solutions of the relativistic cold-fluid plasma model in the small but finite amplitude limit, for slightly overcritical plasma density. Adopting a multiple scale analysis, we derive a perturbed nonlinear Schrödinger equation that describes the evolution of the envelope of the circularly polarized electromagnetic field. Retaining terms up to fifth order in the small perturbation parameter, we derive a self-consistent framework for the description of the plasma response in the presence of localized electromagnetic field. The formalism is applied to standing electromagnetic soliton interactions and the results are validated by simulations of the full cold-fluid model. To lowest order, a cubic nonlinear Schrödinger equation with a focusing nonlinearity is recovered. Classical quasiparticle theory is used to obtain analytical estimates for the collision time and minimum distance of approach between solitons. For larger soliton amplitudes the inclusion of the fifth-order terms is essential for a qualitatively correct description of soliton interactions. The defocusing quintic nonlinearity leads to inelastic soliton collisions, while bound states of solitons do not persist under perturbations in the initial phase or amplitude.
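
In hedged, schematic form (our notation, not necessarily the coefficients derived in the paper), the envelope equation referred to above is of the cubic-quintic nonlinear Schrödinger type:

$$ i\,\partial_\tau a \;+\; \tfrac{1}{2}\,\partial_\xi^2 a \;+\; |a|^2 a \;-\; \epsilon^2 g\,|a|^4 a \;=\; 0, \qquad g > 0, $$

where $a$ is the slowly varying envelope of the circularly polarized field and $\epsilon$ the small expansion parameter; the focusing cubic term supports solitons, while the defocusing quintic correction is responsible for the inelastic collisions described above.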

Relevance:

80.00%

Publisher:

Abstract:

The properties of the interface between solid and melt are key to solidification and melting, as the interfacial free energy introduces a kinetic barrier to phase transitions. This makes solidification happen below the melting temperature, in out-of-equilibrium conditions at which the interfacial free energy is ill defined. Here we draw a connection between the atomistic description of a diffuse solid-liquid interface and its thermodynamic characterization. This framework resolves the ambiguities in defining the solid-liquid interfacial free energy above and below the melting temperature. In addition, we introduce a simulation protocol that allows solid-liquid interfaces to be reversibly created and destroyed at conditions relevant for experiments. We directly evaluate the value of the interfacial free energy away from the melting point for a simple but realistic atomic potential, and find a more complex temperature dependence than the constant positive slope that has been generally assumed based on phenomenological considerations and that has been used to interpret experiments. This methodology could be easily extended to the study of other phase transitions, from condensation to precipitation. Our analysis can help reconcile the textbook picture of classical nucleation theory with the growing body of atomistic studies and mesoscale models of solidification.

Relevance:

80.00%

Publisher:

Abstract:

In this study, we introduce an original distance definition for graphs, called the Markov-inverse-F measure (MiF). This measure enables the integration of classical graph theory indices with new knowledge pertaining to structural feature extraction from semantic networks. MiF improves the conventional Jaccard and/or Simpson indices, and reconciles both the geodesic information (random walk) and co-occurrence adjustment (degree balance and distribution). We measure the effectiveness of graph-based coefficients through the application of linguistic graph information for a neural activity recorded during conceptual processing in the human brain. Specifically, the MiF distance is computed between each of the nouns used in a previous neural experiment and each of the in-between words in a subgraph derived from the Edinburgh Word Association Thesaurus of English. From the MiF-based information matrix, a machine learning model can accurately obtain a scalar parameter that specifies the degree to which each voxel in (the MRI image of) the brain is activated by each word or each principal component of the intermediate semantic features. Furthermore, correlating the voxel information with the MiF-based principal components, a new computational neurolinguistics model with a network connectivity paradigm is created. This allows two dimensions of context space to be incorporated with both semantic and neural distributional representations.
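
The MiF construction itself is not reproduced here, but as a hedged baseline the Jaccard and Simpson indices it refines can be written for neighbour sets in a word-association graph; the adjacency lists below are a hypothetical fragment.

```python
def jaccard(graph, u, v):
    """|N(u) & N(v)| / |N(u) | N(v)| over neighbour sets."""
    nu, nv = set(graph.get(u, ())), set(graph.get(v, ()))
    return len(nu & nv) / len(nu | nv) if nu | nv else 0.0

def simpson(graph, u, v):
    """|N(u) & N(v)| / min(|N(u)|, |N(v)|)."""
    nu, nv = set(graph.get(u, ())), set(graph.get(v, ()))
    m = min(len(nu), len(nv))
    return len(nu & nv) / m if m else 0.0

# Hypothetical adjacency lists from a word-association subgraph
graph = {
    "hammer": ["nail", "tool", "hit"],
    "screwdriver": ["tool", "screw", "hit"],
}
print(jaccard(graph, "hammer", "screwdriver"))   # 0.5
print(simpson(graph, "hammer", "screwdriver"))   # ~0.667
```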

Relevance:

80.00%

Publisher:

Abstract:

This paper deals with the geometrically nonlinear analysis of thin plate/shell laminated structures with embedded integrated piezoelectric actuator or sensor layers and/or patches. The model is based on the Kirchhoff classical laminated theory and can be applied to plate and shell adaptive structures with arbitrary shape and general mechanical and electrical loadings. The finite element model is a nonconforming single-layer triangular plate/shell element with 18 degrees of freedom for the generalized displacements and one electrical potential degree of freedom for each piezoelectric layer or patch. An updated Lagrangian formulation associated with the Newton-Raphson technique is used to solve the equilibrium equations incrementally and iteratively. The model is applied to the solution of four illustrative cases, and the results are compared and discussed with alternative solutions when available.
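
A minimal sketch of the incremental-iterative Newton-Raphson strategy described above, written for a generic residual; the element formulation, electromechanical coupling and updated Lagrangian kinematics are not reproduced, and internal_force and tangent_stiffness are hypothetical placeholders.

```python
import numpy as np

def newton_raphson_increments(internal_force, tangent_stiffness, f_ext,
                              n_increments=10, tol=1e-8, max_iter=25):
    """Apply the external load in increments; at each increment iterate
    K_T * du = f_step - f_int(u) until the out-of-balance force converges."""
    u = np.zeros_like(f_ext)
    for step in range(1, n_increments + 1):
        f_step = f_ext * step / n_increments        # proportional loading
        for _ in range(max_iter):
            r = f_step - internal_force(u)          # out-of-balance force
            if np.linalg.norm(r) < tol * (1.0 + np.linalg.norm(f_step)):
                break
            K = tangent_stiffness(u)                # consistent tangent
            u = u + np.linalg.solve(K, r)
    return u

# Example: single-DOF nonlinear spring with f_int(u) = k1*u + k3*u^3
k1, k3 = 100.0, 5.0
u = newton_raphson_increments(
    internal_force=lambda u: np.array([k1 * u[0] + k3 * u[0]**3]),
    tangent_stiffness=lambda u: np.array([[k1 + 3 * k3 * u[0]**2]]),
    f_ext=np.array([250.0]))
print(u)
```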

Relevance:

80.00%

Publisher:

Abstract:

A finite element formulation for active vibration control of thin laminated plate structures with integrated piezoelectric layers, acting as sensors and actuators, is presented. The finite element model is a nonconforming single-layer triangular plate/shell element with 18 degrees of freedom for the generalized displacements and one electrical potential degree of freedom for each piezoelectric element layer, and is based on the Kirchhoff classical laminated theory. To achieve active control of the structure's dynamic response, a feedback control algorithm coupling the sensor and actuator piezoelectric layers is used, and the Newmark method is used to calculate the dynamic response of the laminated structures. The model is applied to the solution of several illustrative cases, and the results are presented and discussed.
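
A hedged single-degree-of-freedom sketch of Newmark time stepping combined with negative velocity feedback; the real model is a multi-degree-of-freedom piezoelectric laminate, and the modal parameters and gain below are hypothetical. Because the feedback force is proportional to velocity, it enters the equations exactly like added damping.

```python
import numpy as np

# Hypothetical single-mode parameters of the laminated plate and feedback gain
m, c, k = 1.0, 0.02, 100.0     # modal mass, damping, stiffness
g_fb = 1.5                     # velocity-feedback gain (actuator force = -g_fb * velocity)
c_eff = c + g_fb               # linear velocity feedback acts as added damping

beta, gamma, dt, n_steps = 0.25, 0.5, 0.005, 2000
u, v = 1.0, 0.0                          # initial displacement and velocity
a = (-c_eff * v - k * u) / m             # consistent initial acceleration (free vibration)

k_hat = k + gamma / (beta * dt) * c_eff + m / (beta * dt**2)
for _ in range(n_steps):
    # Newmark (average acceleration): effective load built from the previous state
    f_hat = (m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
             + c_eff * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
    u_new = f_hat / k_hat
    a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    u, v, a = u_new, v_new, a_new

print(f"displacement after {n_steps * dt:.1f} s: {u:.4e}")
```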

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the new package entitled Simulator of Intelligent Transportation Systems (SITS) and a computationally oriented analysis of traffic dynamics. The SITS adopts a microscopic simulation approach to reproduce real traffic conditions, considering different types of vehicles, drivers and roads. A set of experiments with the SITS reveals the dynamic phenomena exhibited by this kind of system. For this purpose a modelling formalism is developed that combines statistics and the Laplace transform. The results make possible the adoption of classical system theory tools and point out that traffic systems can be studied by taking advantage of the knowledge gathered with automatic control algorithms. A complementary perspective on the analysis of traffic flow is also quantified through an entropy measure.
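
As a hedged illustration of the entropy measure mentioned above (the binned quantity and bin count are our own choices, not necessarily those used with SITS), the Shannon entropy of a sampled speed distribution can be computed as follows.

```python
import numpy as np

def shannon_entropy(samples, bins=20):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical vehicle speeds (m/s) collected from a microscopic simulation run
speeds = np.random.default_rng(0).normal(loc=25.0, scale=4.0, size=1000)
print(f"entropy of the speed distribution: {shannon_entropy(speeds):.2f} bits")
```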

Relevance:

80.00%

Publisher:

Abstract:

A French-language summary is also available.

Relevance:

80.00%

Publisher:

Abstract:

In this article, the author Ejan Mackaay presents the fundamental characteristics of cyberspace and analyses the economic and legal relationships between the actors of the Internet's virtual marketplace. The analysis is written as a companion to the work of Niva Elkin-Koren and Eli Salzberger, whose outline it follows. On the one hand, it is argued that the virtual marketplace of the Internet calls into question the classical analysis of interactions between economic actors. The new neo-institutional analysis offers an analytical framework that captures the complex relationships between the economic actors of the virtual marketplace more adequately than classical economic theories do. This new approach rests on the idea that economic actors use resources in order to be integrated into the most active and efficient institutions. On the other hand, it is noted that cyberspace exhibits several characteristics of an economic market. Being virtual, however, cyberspace does not have the same limits as a physical market, where certain physical constraints impose various rules of behaviour. The legislator must therefore take account of the absence of such limits, and of the norms they used to impose, in order to legislate adequately on exchanges in cyberspace. To illustrate the differences between physical and virtual markets, the main market failures are then analysed: the establishment of a monopoly, access to public goods, imperfect information and negative externalities. A monopoly is a market failure that considerably restricts competition, can be amplified by a snowball effect and, if left unchecked, can lead to lock-in or to the exclusion of certain actors. The second failure analysed is access to public goods. In cyberspace, the main public good is the information that can be exchanged between users; however, certain copyright and intellectual property rules can considerably limit access to this good. The incomplete information available to economic actors constitutes another market failure, but cyberspace offers several means of accessing the information relevant to informed transactions. Finally, negative externalities can generally be regarded as side effects of commercial exchanges, but it is pointed out that they have a very limited effect in cyberspace, given the greater number of exit options and the greater ease of exercising them. In closing, it is recalled that electronic commerce and cyberspace call into question all traditional economic and political theories and offer a new perspective on the phenomenon of norm formation.

Relevance:

80.00%

Publisher:

Abstract:

A deep understanding of charge separation at organic semiconductor heterojunctions is necessary for the development of more efficient organic photovoltaic diodes, which would be a major step toward meeting global sustainable-energy needs. The objective of this thesis is to describe the processes involved in charge separation at organic semiconductor heterojunctions, taking the particular case of PCDTBT:PCBM as an example. We probe the interfacial excitations with time-resolved spectroscopic methods covering time scales from 100 femtoseconds to 1 millisecond. The main spectroscopic methods are femtosecond stimulated Raman spectroscopy, time-resolved fluorescence and transient absorption. Our results clearly show that charge transfer from PCDTBT to PCBM takes place before the exciton has relaxed and localized, an experimental fact that is irreconcilable with semiclassical Marcus theory. The charge pair that is created falls into two categories: untrapped geminate polaron pairs and deeply trapped pairs. The former relax rapidly to the charge-transfer exciton, which recombines radiatively with a time constant of 1-2 nanoseconds, while the latter relax over longer time scales via tunnelling. Our quantitative photophysical model shows that 2% of the created excitations can never dissociate into free charge carriers, a figure consistent with the high yields reported for this type of system.
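
For reference, the semiclassical Marcus expression against which the pre-relaxation transfer is contrasted reads (standard textbook form, not taken from the thesis):

$$ k_{ET} \;=\; \frac{2\pi}{\hbar}\,|H_{DA}|^{2}\,\frac{1}{\sqrt{4\pi\lambda k_B T}}\,\exp\!\left[-\frac{(\Delta G^{0}+\lambda)^{2}}{4\lambda k_B T}\right], $$

where $H_{DA}$ is the donor-acceptor electronic coupling, $\lambda$ the reorganization energy and $\Delta G^{0}$ the driving force; the expression presupposes a thermally relaxed, localized donor state, which is precisely the assumption the ultrafast, pre-relaxation transfer reported here calls into question.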

Relevance:

80.00%

Publisher:

Abstract:

The philosophical implications of Prospect Theory (1979), notably those concerning the introduction of a value function over outcomes and a weighting coefficient over probabilities, have never been explored to date. The aim of this work is to construct a philosophical theory of the will from the results of Prospect Theory. To understand how this theory could be developed, one must study Expected Utility Theory, of which it is the major critical outcome, that is, the axiomatizations of decision by Ramsey (1926), von Neumann and Morgenstern (1947), and finally Savage (1954), which constitute the foundations of classical decision theory. It was, among other things, the critique by economics and cognitive psychology of the independence principle and of the ordering and transitivity axioms that allowed the subjective representational elements from which Prospect Theory was built to emerge. These critiques were carried out by Allais (1953), Edwards (1954), Ellsberg (1961), and finally Slovic and Lichtenstein (1968); studying these articles makes it possible to understand how the transition from Expected Utility Theory to Prospect Theory took place. Following these analyses and that of Prospect Theory itself, the notion of the Decisional Reference System is introduced, which is the natural generalization of the concepts of value function and weighting coefficient stemming from Prospect Theory. This system, whose operation is sometimes heuristic, serves to model decision-making within the element of representation and is organized around three phases: aiming, editing and evaluation. From this structure, a new typology of decisions is proposed, together with a novel explanation of the phenomena of akrasia and procrastination based on the concepts of risk aversion and overvaluation of the present, both drawn from Prospect Theory.
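
In hedged summary form, the two components the Decisional Reference System generalises are the value function and the probability-weighting function of Prospect Theory, which evaluate a prospect with outcomes $x_i$ and probabilities $p_i$ as

$$ V \;=\; \sum_i \pi(p_i)\, v(x_i), \qquad v(x) \;=\; \begin{cases} x^{\alpha}, & x \ge 0,\\ -\lambda\,(-x)^{\beta}, & x < 0, \end{cases} $$

with $\lambda > 1$ capturing loss aversion and $\pi(\cdot)$ overweighting small probabilities; the specific parametric form of $v$ is the one later popularised by Tversky and Kahneman and is given here only as an illustration, not as the formulation analysed in the thesis.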