869 results for Theoretical analysis


Relevance: 100.00%

Abstract:

Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, astronomical observations show that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic ray observations and by attempts to resolve puzzles of the SM such as the (g-2)_mu discrepancy, proposed U(1) extensions of the SM gauge group have attracted attention in recent years. In these U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM through kinetic mixing. This makes it possible to search for the particle in laboratory experiments probing the electromagnetic interaction. Various experimental programs have been launched to search for hidden photons, among them electron-scattering experiments, which are a versatile tool for exploring many physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. These experiments investigate the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, and search for a very narrow resonance in the invariant mass distribution of the lepton pair. This requires an accurate understanding of the theoretical basis of the underlying processes. To this end, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis is the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for bremsstrahlung emission of hidden photons in such experiments is studied.
Based on these results, the applicability of the Weizsäcker-Williams approximation, which is widely used to design such experimental setups, to the calculation of the signal cross section is investigated. In the next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as both signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. The derived methods are then used to obtain predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of these complementary experiments. In the last part, a feasibility study for probing the hidden photon model with rare kaon decays is performed. To this end, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
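The resonance search described above reduces to scanning the lepton-pair invariant mass spectrum for a narrow peak over a smooth background. A minimal sketch of the invariant-mass computation (the four-momenta below are hypothetical illustration values, not data from the experiments named above):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a lepton pair from four-momenta (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical l+ l- pair from a single event; a hidden photon would show up
# as an excess of events clustered at one value of m_ll across many events.
lep_plus = (0.5, 0.1, 0.2, 0.4)     # (E, px, py, pz) in GeV
lep_minus = (0.6, -0.1, 0.1, 0.55)
m_ll = invariant_mass(lep_plus, lep_minus)
```

Over many events, the background yields a smooth m_ll distribution, while a hidden photon of mass m_A' would add a narrow resonance peak at m_ll = m_A'.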

Relevance: 100.00%

Abstract:

We present a derivation and, based on it, an extension of a model originally proposed by V.G. Niziev to describe continuous-wave laser cutting of metals. Starting from a local energy balance and incorporating heat removal through heat conduction into the bulk material, we obtain a differential equation for the cutting profile. This equation is solved numerically and yields, besides the cutting profiles, the maximum cutting speed, the absorptivity profiles, and other relevant quantities. Our main goal is to demonstrate the model's capability to explain some of the experimentally observed differences between laser cutting at around 1 and 10 μm wavelengths. To compare our numerical results to experimental observations, we perform simulations for exactly the same material and laser beam parameters as those used in a recent comparative experimental study. Overall, we find good agreement between theoretical and experimental results and show that the main differences between laser cutting with 1- and 10-μm beams arise from the different absorptivity profiles and absorbed intensities. The latter, in particular, suggests that the energy transfer, and thus the laser cutting process, is more efficient for 1-μm beams.
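The role of the absorptivity profiles can be illustrated with the Fresnel absorptivity of p-polarized light on a metal at grazing incidence, as encountered on a steep cut front. This is only a sketch of the underlying optics, not the model itself, and the complex refractive indices are rough, hypothetical values for a steel-like metal:

```python
import cmath
import math

def absorptivity_p(theta_i, n):
    """Fresnel absorptivity A = 1 - |r_p|^2 for p-polarized light hitting a
    metal (complex index n = n' + i*k) from vacuum at incidence angle theta_i."""
    sin_t = math.sin(theta_i) / n           # Snell's law with a complex index
    cos_t = cmath.sqrt(1.0 - sin_t**2)
    cos_i = math.cos(theta_i)
    r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)
    return 1.0 - abs(r_p)**2

# Rough, hypothetical indices for a metal near 1 um and 10 um wavelengths.
n_1um, n_10um = 3.0 + 4.0j, 7.0 + 30.0j
grazing = math.radians(85.0)                # steep cut front -> grazing incidence
A1, A10 = absorptivity_p(grazing, n_1um), absorptivity_p(grazing, n_10um)
```

With these illustrative indices the 1-μm absorptivity at grazing incidence comes out substantially larger than the 10-μm one, in line with the qualitative conclusion above.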

Relevance: 100.00%

Abstract:

Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades, or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities create incentives to strategically improve one's bargaining power, which work against the formation of a global agreement. This thesis extends our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. Quite generally, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties.
In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. Chapter 1 shows that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be a pure time cost from delaying agreement or a cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time costs and show that communication can play a similar role. The simple fact that a worker claims to be of high quality may be informative. Chapter 3 focuses on a different source of inefficiency. Agents strive for bargaining power and thus may be driven by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g., the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good. Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly because both parties suffer a (small) time cost after any rejection. The difficulty arises because the good can be of low or high quality, and its quality is known only to the seller.
Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade in high-quality goods. When repeated offers are allowed, however, both types of goods trade with probability one in equilibrium. We provide an experimental test of these predictions. Buyers gather information about sellers through specific price offers, and rates of trade are high, in line with the model's qualitative predictions. We also observe persistent over-delay before trade occurs, which substantially reduces efficiency. Possible channels for the over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information, explaining these findings by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information.
In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on (i) the degree to which players can renegotiate and gradually build up agreements and (ii) the absence of a certain type of externality that can loosely be described as an incentive to free-ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
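The repeated-offer screening mechanism of Chapter 1 can be sketched as a toy two-round game; all prices, valuations, and the delay cost below are hypothetical numbers chosen for illustration, not the parameters of the actual model or experiment:

```python
# Toy screening via repeated costly offers: the buyer first makes a low offer
# that only a low-quality seller accepts; a rejection (costly for both sides)
# reveals the high type, and the buyer raises the offer in round two.
DELAY_COST = 1.0
VALUE = {"low": 30.0, "high": 80.0}        # buyer's valuation by quality
RESERVATION = {"low": 0.0, "high": 50.0}   # seller's reservation price

def bargain(quality, low_price=20.0, high_price=60.0):
    """Return (buyer_payoff, seller_payoff) for a seller of the given quality."""
    if low_price >= RESERVATION[quality]:          # low type accepts round 1
        return VALUE[quality] - low_price, low_price - RESERVATION[quality]
    # Rejection screened out the low type; trade at the high price in round 2.
    return (VALUE[quality] - high_price - DELAY_COST,
            high_price - RESERVATION[quality] - DELAY_COST)

outcomes = {q: bargain(q) for q in ("low", "high")}
```

In this toy parameterization both seller types trade with positive surplus, mirroring the trade-with-probability-one prediction; the delay cost after a rejection is what makes the high type's rejection a credible signal.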

Relevance: 100.00%

Abstract:

The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and the boundary integrals of Takeda et al. [Prog. Theor. Phys. 92 (1994), 939]. The analysis is carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and of the viscous/diffusive terms in the momentum and energy equations. It is proved that, close to the boundaries, some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation is derived starting from the Takeda et al. boundary integrals (see the above reference). This original formulation allows no-slip boundary conditions to be implemented consistently in many practical applications, such as viscous flows and diffusion problems.
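As a minimal illustration of the differential operators whose convergence is studied above, here is the standard SPH first-derivative operator on a 1D particle set with a cubic spline kernel; the setup (uniform spacing, linear test function, no boundary treatment) is my own simplifying assumption, not the formulations compared in the paper:

```python
def cubic_spline_dw(r, h):
    """dW/dr of the 1D cubic spline kernel (normalization sigma = 2/(3h))."""
    q, sigma = abs(r) / h, 2.0 / (3.0 * h)
    if q < 1.0:
        dfdq = -3.0 * q + 2.25 * q**2
    elif q < 2.0:
        dfdq = -0.75 * (2.0 - q)**2
    else:
        return 0.0
    return sigma / h * dfdq * (1.0 if r > 0.0 else -1.0)

# Uniform particle distribution on [0, 1]; estimate df/dx for f(x) = x with
# the difference form grad_i = sum_j V_j (f_j - f_i) dW/dx(x_i - x_j).
dx, h = 0.01, 0.02
xs = [i * dx for i in range(101)]
f = xs                                   # f(x) = x, so df/dx = 1 everywhere
i = 50                                   # interior particle, far from boundaries
grad = sum((f[j] - f[i]) * cubic_spline_dw(xs[i] - xs[j], h) * dx
           for j in range(101) if j != i)
```

For an interior particle this recovers the gradient of a linear field accurately; applying the same operator near x = 0 or x = 1 without a mirroring technique truncates the kernel support, which is precisely the inaccuracy the analysis above addresses.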

Relevance: 100.00%

Abstract:

The theoretical formulation of the smoothed particle hydrodynamics (SPH) method deserves great care because of some inconsistencies that occur when considering free-surface inviscid flows. In SPH formulations one usually assumes that (i) surface integral terms on the boundary of the interpolation kernel support can be neglected, and (ii) the free-surface conditions are implicitly verified. These assumptions are studied in detail in the present work for free-surface Newtonian viscous flows. The consistency of classical viscous weakly compressible SPH formulations is investigated. In particular, the principle of virtual work is used to study the verification of the free-surface boundary conditions in a weak sense. The latter can be related to the global energy dissipation induced by the viscous term formulations and to their consistency. Numerical verification of this theoretical analysis is provided for three free-surface test cases, including a standing wave, with the three viscous term formulations investigated.

Relevance: 100.00%

Abstract:

This paper describes the dielectrophoretic potential created by the evanescent electric field acting on a particle near a photovoltaic crystal surface, depending on the crystal cut. This electric field is obtained from the steady-state solution of the Kukhtarev equations for the photovoltaic effect, where the diffusion term has been disregarded. First, the space charge field generated by a small, square light spot with d ≪ l (d being the side of the square and l the crystal thickness) is studied. The surface charge density generated in both geometries is calculated and compared, as their ratio determines the different properties of the dielectrophoretic potential for the two cuts. The shape of the dielectrophoretic potential is obtained and compared at several distances from the sample. Afterwards, other light patterns are studied by superposition of square spots, and the resulting trapping profiles are analysed. Finally, the surface charge densities and trapping profiles for different d/l ratios are studied.

Relevance: 100.00%

Abstract:

In the cytoplasm of cells of different types, discrete clusters of inositol 1,4,5-trisphosphate-sensitive Ca2+ channels generate Ca2+ signals of graded size, ranging from blips, which involve the opening of only one channel, to moderately larger puffs, which result from the concerted opening of a few channels in the same cluster. The size and geometrical characteristics of these channel clusters are unknown. The aim of this study was to estimate the number of channels and the interchannel distance within such a cluster. Because these characteristics are not directly accessible experimentally, we performed stochastic computer simulations of Ca2+ release events. We conclude that, to ensure efficient interchannel communication, as experimentally observed, a typical cluster should contain twenty to thirty inositol 1,4,5-trisphosphate-sensitive Ca2+ channels in close contact.

Relevance: 100.00%

Abstract:

"Aeronautical Research Laboratory. Contract no. AF 33(616)-7064. Project no. 7064."

Relevance: 100.00%

Abstract:

We present a group-theoretical analysis of several classes of organic superconductors. We predict that highly frustrated organic superconductors, such as κ-(ET)2Cu2(CN)3 (where ET is BEDT-TTF, bis(ethylenedithio)tetrathiafulvalene) and β′-[Pd(dmit)2]2X, undergo two superconducting phase transitions, the first from the normal state to a d-wave superconductor and the second to a d + id state. We show that the monoclinic distortion of κ-(ET)2Cu(NCS)2 means that the symmetry of its superconducting order parameter is different from that of orthorhombic κ-(ET)2Cu[N(CN)2]Br. We propose that β″- and θ-phase organic superconductors have d_xy + s order parameters.

Relevance: 100.00%

Abstract:

Using a modified deprivation (or poverty) function, we theoretically study in this paper the changes in poverty with respect to the 'global' mean and variance of the income distribution, using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This encouraging picture for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. In that case, although a rising mean income still indicates a reduction in poverty, the presence of an inflexion point in the poverty function implies a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease as the variance grows further. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions comparing a developing economy with a developed one. © 2006 Elsevier B.V. All rights reserved.
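The direction of the two log-normal effects described above can be checked with the headcount ratio, the share of incomes below a poverty line z. This is a simpler index than the modified deprivation function used in the paper, and the parameter values below are arbitrary illustrative choices:

```python
import math

def headcount_lognormal(z, mu, sigma):
    """P(income < z) when income is log-normal with parameters mu and sigma."""
    x = (math.log(z) - mu) / sigma
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

z = 1.0                                       # hypothetical poverty line
base = headcount_lognormal(z, mu=0.5, sigma=0.6)
richer = headcount_lognormal(z, mu=0.7, sigma=0.6)        # higher mean income
more_unequal = headcount_lognormal(z, mu=0.5, sigma=0.9)  # more dispersion
```

With the poverty line below the median income, raising mu (and hence the mean) lowers the headcount while raising sigma increases it, matching the log-normal result stated above.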

Relevance: 100.00%

Abstract:

Nitric oxide (NO) is produced in the vascular endothelium, from where it diffuses to the adjacent smooth muscle cells (SMC), activating agents known to regulate vascular tone. The close proximity of the site of NO production to the red blood cells (RBC), together with the known fast consumption of NO by hemoglobin, suggests that the blood will scavenge most of the NO produced. It is therefore unclear how NO is able to accomplish vasodilation. Investigating NO production and consumption rates allows insight into this paradox. DAF-FM is a sensitive NO fluorescence probe widely used for qualitative assessment of cellular NO production. With the aid of a mathematical model of NO/DAF-FM reaction kinetics, experimental studies were conducted to calibrate the fluorescence signal, showing that the slope of the fluorescence intensity is proportional to [NO]² and exhibits a saturating dependence on [DAF-FM]. In addition, the experimental data exhibited a Km-type dependence on [NO]. This finding was incorporated into the model, pointing to NO₂ as the possible activating agent of DAF-FM. A calibration procedure was developed and applied to agonist-stimulated cells, providing an estimated NO release rate of 0.418 ± 0.18 pmol/(cm² s). To assess NO consumption by RBCs, the rate of NO consumption was measured in a gas stream flowing on top of an RBC solution of specified hematocrit (Hct). The consumption rate constant (k_bl) in porcine RBCs at 25°C and 45% Hct was estimated to be 3500 ± 700 s⁻¹. k_bl is highly dependent on Hct and can reach up to 9900 ± 4000 s⁻¹ at 60% Hct. The nonlinear dependence of k_bl on Hct suggests a predominant role for extracellular diffusion in limiting NO uptake. Further simulations, using the estimated NO consumption rate, showed a linear relationship between varying NO production rates and NO availability in the SMCs. The SMC [NO] level corresponding to the average estimated NO production rate was approximately 15.1 nM.
With the aid of these experimental and theoretical methods we were able to examine the NO paradox and show that endothelium-derived NO is able to escape scavenging by RBCs and diffuse to the SMCs.
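The calibration relation described above (fluorescence slope proportional to [NO]² with a saturating dependence on [DAF-FM]) can be sketched as a toy model function; the rate and saturation constants below are hypothetical, and the reported Km-type dependence on [NO] is omitted for simplicity:

```python
def fluorescence_slope(no, daf, k=1.0, k_daf=5.0):
    """Toy calibration form: slope ~ k * [NO]^2 * [DAF]/(k_daf + [DAF])."""
    return k * no**2 * daf / (k_daf + daf)

# Doubling [NO] quadruples the slope; raising [DAF-FM] saturates it.
s1 = fluorescence_slope(no=1.0, daf=5.0)
s2 = fluorescence_slope(no=2.0, daf=5.0)
s_sat = fluorescence_slope(no=1.0, daf=500.0)
```

A functional form like this is what lets a measured fluorescence slope be inverted to an estimate of the NO release rate, as in the calibration procedure described above.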