989 results for HIGGS PHYSICS


Relevance:

20.00%

Publisher:

Abstract:

Many findings from research, as well as reports from teachers, describe students' problem-solving strategies as rote manipulation of formulas. The resulting dissatisfaction with quantitative physics textbook problems seems to influence attitudes towards the role of mathematics in physics education in general. Mathematics is often seen as a mere calculational tool that hinders a conceptual understanding of physical principles. However, the role of mathematics cannot be reduced to this technical aspect. Hence, instead of setting mathematics aside, we delve into the nature of physical science to reveal the strong conceptual relationship between mathematics and physics. Moreover, we suggest that, for both prospective teaching and further research, a focus on deeply exploring this interdependency can significantly improve the understanding of physics. To provide a suitable basis, we develop a new model for analysing different levels of mathematical reasoning within physics. It also serves as a guideline for shifting attention from technical to structural mathematical skills in physics teaching. We demonstrate its applicability by analysing a physical-mathematical reasoning process in an example.


We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a nonequilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results [Gollo et al., PLoS Comput. Biol. 5, e1000402 (2009)] and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
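The "dynamic range" discussed above has a standard operational definition in this literature: the decibel width of the stimulus interval that is mapped onto 10%-90% of the maximal response. A minimal sketch of that computation, using a generic saturating response curve rather than the excitable-wave mean-field solution of the paper:

```python
import math

def response(h, h0=1.0):
    """Toy saturating stimulus-response curve (a generic illustrative choice,
    not the mean-field solution derived in the paper)."""
    return h / (h + h0)

def invert(target, lo=1e-9, hi=1e9):
    """Find the stimulus h with response(h) == target by bisection in log space."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if response(mid) < target:
            lo = mid
        else:
            hi = mid
    return mid

def dynamic_range_db():
    """Dynamic range: decibel width of the stimulus interval mapped onto
    10%-90% of the maximal response (the usual definition in this field)."""
    h10, h90 = invert(0.1), invert(0.9)
    return 10.0 * math.log10(h90 / h10)

delta = dynamic_range_db()  # for h/(h+1): h10 = 1/9, h90 = 9, so 10*log10(81) dB
```

Enhancing the dynamic range then means flattening the response curve so that a wider stimulus interval fits between the 10% and 90% levels.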


This is a research paper in which we discuss "active learning" in the light of Cultural-Historical Activity Theory (CHAT), a powerful framework for analysing human activity, including teaching and learning processes and the relations between education and wider human dimensions such as politics, development, and emancipation. This framework has its origin in Vygotsky's work in psychology, grounded in a Marxist perspective, but is nowadays an interdisciplinary field encompassing, for example, History, Anthropology, Psychology, and Education.


Interview with Javier Santaolalla, a researcher in the CERN experiment that recently described a new particle: the Higgs boson, which seeks to explain the origin of the mass of elementary particles. The existence of the Higgs boson and its associated field attempts to explain why elementary particles have mass. On 4 July 2012, CERN announced the observation of a new particle "consistent with the Higgs boson", but more time and data would be needed to confirm it. On 14 March 2013, with twice as much data as was available at the discovery announcement of July 2012, CERN found that the new particle looks more and more like the Higgs boson. Javier Santaolalla holds a degree in telecommunications engineering from the Universidad de las Palmas de Gran Canaria and a degree in physics from the Universidad Complutense de Madrid, from which he also obtained his doctorate in physics; he is a researcher at CIEMAT (Madrid). Dr. Santaolalla was spending a few days in Las Palmas de Gran Canaria on a stopover between Brazil and Geneva. We thank him for all the help he provided in carrying out this interview, as well as for his contributions.


We study some perturbative and nonperturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value (vev) given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. We then solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vev. As perturbative effects, we consider the leading logarithmic resummation in small Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar-limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
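As a loose illustration of the dissipative vev dynamics described in the first part, one can integrate a damped-oscillator toy model in which the backreaction of the produced particles is lumped into a single effective friction term. All parameters below are illustrative, not Standard Model values, and the friction ansatz is a stand-in for the full Hartree backreaction:

```python
import math

def evolve_vev(phi0=1.05, v=1.0, m=1.0, gamma=0.05, dt=1e-3, steps=200_000):
    """Toy model: small oscillations of a Higgs-like vev phi around its
    minimum v, with particle-production backreaction modeled as an effective
    friction gamma. Integrates phi'' = -m^2 (phi - v) - gamma * phi'."""
    phi, pi = phi0, 0.0
    for _ in range(steps):
        acc = -m * m * (phi - v) - gamma * pi
        pi_half = pi + 0.5 * dt * acc
        phi += dt * pi_half
        acc_new = -m * m * (phi - v) - gamma * pi_half
        pi = pi_half + 0.5 * dt * acc_new
    return phi, pi

phi_end, pi_end = evolve_vev()
# dissipation drives the vev back towards its minimum v = 1.0
amplitude_end = math.hypot(phi_end - 1.0, pi_end)
```

With gamma = 0.05 the oscillation amplitude decays roughly as exp(-gamma*t/2), so after t = 200 the initial displacement of 0.05 has shrunk by orders of magnitude.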


This thesis is mainly about the search for exotic heavy particles (Intermediate Mass Magnetic Monopoles, Nuclearites, and Q-balls) with the SLIM experiment at the Chacaltaya High Altitude Laboratory (5230 m, Bolivia), establishing upper limits (90% CL) in the absence of candidates; these limits are among the best, if not the only ones, for all three kinds of particles. A preliminary study of the background induced by cosmic neutrons in CR39 at the SLIM site was carried out using Monte Carlo simulations. The elemental abundance of primary cosmic rays was measured with the CAKE experiment on board a stratospheric balloon; the charge distribution obtained spans the range 5 ≤ Z ≤ 31. Both experiments were based on plastic Nuclear Track Detectors, which record the passage of ionizing particles; by means of chemical etching, this passage can be made visible under optical microscopes.


Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources to those of higher-luminosity ones and, thus, was also used to test emission-mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2-100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off, and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, an X-ray Baldwin effect in type I sources has been measured, for the first time using the 20-100 keV luminosity (EW proportional to L(20-100)^(-0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges if one supposes that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the other hand, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF flows without an accretion disk. However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered; thus the emission mechanism must be similar in luminous and weak systems. A possible scenario reconciling these somewhat opposite indications is that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation).
The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when analyzed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form, spiraling in, within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulations have been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter which is not confined to the accretion disk and moves along the line of sight with velocities as large as v ~ 0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes for the dynamics of the innermost regions of accretion flows, for tackling the formation of ejecta/jets, and for placing constraints on the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
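The X-ray Baldwin effect quoted above (EW proportional to L^(-0.22±0.05)) is, operationally, a straight-line fit in log-log space. A sketch of that fit with synthetic data, generated with the quoted slope; these are not the measured Seyfert data:

```python
import random

# Synthetic sample: log-luminosities and log-equivalent-widths obeying
# EW ∝ L^alpha with alpha = -0.22, plus Gaussian scatter. Ranges and the
# intercept are illustrative values only.
random.seed(0)
logL = [42.0 + 4.0 * random.random() for _ in range(200)]
logEW = [12.0 - 0.22 * L + random.gauss(0.0, 0.02) for L in logL]

# Ordinary least squares for the slope alpha and the intercept
n = len(logL)
mx = sum(logL) / n
my = sum(logEW) / n
alpha = (sum((x - mx) * (y - my) for x, y in zip(logL, logEW))
         / sum((x - mx) ** 2 for x in logL))
intercept = my - alpha * mx
```

The fitted `alpha` recovers the input slope of -0.22 to within the statistical scatter; with real data the quoted ±0.05 uncertainty would come from the fit covariance.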


Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application-Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the Roma University group and INFN, a full readout chain equivalent to that present in NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities basically concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinnings were tested during the test beam. Those thinned to 100 and 300 µm showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, providing good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual distribution, taking the multiple-scattering effect into account.
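The pitch/sqrt(12) benchmark mentioned at the end is simply the RMS of a uniform distribution over one pitch, i.e. the expected resolution of a position detector with binary readout. A minimal sketch (the 50 µm pitch is an illustrative value, not the APSEL-4D pitch):

```python
import math

def binary_readout_resolution(pitch_um: float) -> float:
    """Expected intrinsic resolution of a pixel/strip sensor with binary
    readout: the RMS of a uniform distribution over one pitch is
    pitch / sqrt(12), since Var(U[0, p]) = p^2 / 12."""
    return pitch_um / math.sqrt(12.0)

res = binary_readout_resolution(50.0)  # ~14.4 um for a 50 um pitch
```

A measured residual width significantly larger than this value would point to extra contributions such as multiple scattering or telescope pointing error, which is why the abstract subtracts the multiple-scattering term.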


Over the last five years, the notion of a spectral triple has become established as a way of describing the gravitational field coupled to spinors on (Euclidean) noncommutative spaces. The dynamics of this gravitational field is determined by the so-called spectral action, the trace of a suitable function of the Dirac operator. Remarkably, the complete Lagrangian of the Standard Model of elementary particle physics coupled to the gravitational field, and in particular its mass-generating Higgs sector, can be derived as the spectral action of a corresponding spectral triple. This spectral triple is given as the product of the spectral triple of (commutative) spacetime with a special discrete spectral triple. In this thesis such discrete spectral triples, which until recently were, apart from the noncommutative torus, the only known noncommutative examples, are classified. It can now also be investigated to what extent this property distinguishes the Standard Model from other Yang-Mills-Higgs theories. It turns out, however, that despite some restrictions there is a very large number of models that can be derived from spectral triples. It is conceivable, though, that the spectral triple of the Standard Model is distinguished by additional structures, for example by a Hopf algebra acting "isometrically" on it. In order to investigate this question, so-called H-symmetric spectral triples, which exhibit such Hopf isometries, are defined in this thesis. This also yields a method for constructing new (H-symmetric) spectral triples from their additional symmetries.
This algorithm is illustrated with the examples of the commutative sphere, whose spin geometry is described here for the first time completely in the global, algebraic language of noncommutative geometry, and of the noncommutative torus. As an application, several new examples are constructed. It is shown that, for Yang-Mills-Higgs theories derived from H-symmetric spectral triples, the additional isometries lead to constraints on the fermionic mass matrices. The last part of the thesis briefly addresses the quantization of the spectral action for discrete spectral triples. Moreover, with the notion of a spectral quadruple, a concept for the noncommutative generalization of Lorentzian spin manifolds is introduced.
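For reference, the spectral action mentioned above has the standard Chamseddine-Connes form (quoted here from the general literature, not from the thesis's own conventions):

```latex
S \;=\; \operatorname{Tr}\, f\!\left(\frac{D}{\Lambda}\right) \;+\; \langle \psi,\, D\,\psi \rangle ,
```

where $(\mathcal{A}, \mathcal{H}, D)$ is the spectral triple, $f$ a suitable cutoff function, $\Lambda$ an energy scale, and $\psi$ a spinor in $\mathcal{H}$; for the Standard Model triple the bosonic part reproduces the Einstein-Hilbert, Yang-Mills, and Higgs terms.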


Over many years, arguments have been put forward again and again that assign discrete spaces a more fundamental role than continuous spaces. Our approach to the discrete world is guided by recent ideas from Noncommutative Geometry (NCG). For about 15 years there have been efforts, and also progress, in using Noncommutative Geometry to better understand physics. One of many possibilities is the reformulation of the Standard Model of elementary particle physics; among other things, the Higgs mechanism can be described geometrically. In NCG, the Higgs field is described as a connection on a two-element set. Several goals are achieved in this thesis: the quantization of a zero-dimensional "spacetime"; a consistent discretization for models in the noncommutative framework; Yang-Mills theories on a point with a deformed Higgs potential; the extension to a "true" two-point spacetime; the counting of Feynman graphs in a zero-dimensional theory; and Feynman rules. Particular attention is devoted to notions that originate in quantum field theory. In this setting such notions can be discussed free of the complications that divergences or technical difficulties might otherwise cause: gauge fixing, ghost contributions, the Slavnov-Taylor identity, and renormalization. An iterative, computer-algebra-assisted procedure for solving the Dyson-Schwinger equation, taking the renormalization procedure into account, is presented.
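The description of the Higgs field as a connection on a two-element set can be sketched with the textbook two-point-space model of NCG (a standard construction whose conventions need not match the thesis exactly): the finite Dirac operator mixes the two points, and its fluctuation by the connection shifts the mass parameter,

```latex
D_F \;=\; \begin{pmatrix} 0 & m \\ \bar{m} & 0 \end{pmatrix},
\qquad
D_F \;\longrightarrow\; \begin{pmatrix} 0 & m + \phi \\ \overline{m + \phi} & 0 \end{pmatrix},
```

so that the fluctuation $\phi$ plays the role of the Higgs field; in this toy model the Connes distance between the two points is set by $1/|m|$.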


In this thesis, data from proton-antiproton collisions were analyzed that were recorded with the D0 detector at a center-of-mass energy of sqrt(s) = 1.96 TeV at the Tevatron collider at the Fermi National Accelerator Laboratory between April 2002 and March 2004. Depending on the analysis, the data sets used correspond to integrated luminosities of 158-252 pb^-1. The events were searched for Higgs bosons and for charginos and neutralinos. In addition, a measurement of the W-boson pair-production cross section was performed. For the Higgs-boson searches, leptonic final states with two electrons, or with one electron and one muon, were examined, as expected from decays of Higgs bosons via two W bosons. Because of the small data set currently available, it is only possible to search the data for Higgs bosons as predicted within alternative models. Owing to their larger production cross sections, such Higgs bosons are produced at an enhanced rate and thus become accessible already at low integrated luminosities. The analysis found good agreement between the observed events and the expectation from Standard Model processes. Since no evidence for the existence of Higgs bosons was observed, the results were used to set an upper limit, at 95% confidence level, on the production cross section times branching ratio. Combining the ee, emu, and mumu final states yields an upper limit between 5.7 and 40.3 pb in the Higgs mass range of 100-200 GeV. The results show that even the data set used here is still too small to discover or exclude Higgs bosons within the alternative models.
To search for the associated production of charginos and neutralinos, final states with one electron and one muon were likewise examined. To improve the sensitivity, a combination with analyses in the ee and mumu final states was performed. Since good consistency with the expected Standard Model background processes was again found, upper limits on the production cross section times branching ratio were also set here. The results are interpreted within the mSUGRA model, yielding upper limits between 0.46 and 0.63 pb for chargino masses in the range of 97 GeV to 114 GeV. This is a significant improvement over the exclusion limits obtained in earlier D0 measurements. However, owing to the small data set, it is again not possible to exclude points in the mSUGRA parameter space above the limits found at LEP. The results can also be used to constrain more general SUSY models that satisfy corresponding relations between the chargino and neutralino masses. The main background in these searches is W-boson pair production. For the first time within the D0 experiment, a measurement of the W-pair-production cross section with a significance of more than 3 sigma was performed. Good agreement with the next-to-leading-order theoretical prediction is found. Combining the ee, emu, and mumu final states gives a cross section of sigma(WW) = 13.35 +4.65-3.99 (stat) +0.77-1.05 (syst) +-0.87 (lum) pb.
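The limit-setting step described above can be illustrated with a minimal counting-experiment construction. This is a textbook Bayesian flat-prior limit, not the actual D0 limit-setting machinery, and all numbers are illustrative:

```python
import math

def poisson_upper_limit(n_obs: int, bkg: float, cl: float = 0.95) -> float:
    """Illustrative upper limit on a signal mean s for a counting experiment
    with n_obs observed events and known background bkg, using a flat prior
    in s: solves  integral_0^s_up L(n|s+b) ds = cl * integral_0^inf L(n|s+b) ds."""
    def likelihood(s: float) -> float:
        mu = s + bkg
        return math.exp(-mu) * mu ** n_obs / math.factorial(n_obs)

    # simple trapezoidal integration of the (unnormalized) posterior
    s_max, n_steps = 50.0 + 10.0 * n_obs, 200_000
    ds = s_max / n_steps
    grid = [likelihood(i * ds) for i in range(n_steps + 1)]
    total = sum(grid) - 0.5 * (grid[0] + grid[-1])
    target, acc = cl * total, 0.0
    for i in range(1, n_steps + 1):
        acc += 0.5 * (grid[i - 1] + grid[i])
        if acc >= target:
            return i * ds
    return s_max

limit = poisson_upper_limit(n_obs=0, bkg=0.0)  # classic zero-event result, ~3.0
```

Dividing such an event-count limit by luminosity, efficiency, and branching ratio converts it into a cross-section limit of the kind quoted in the abstract.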


A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and can deal well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radii distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and solid PSC particles on vertical H2O and HNO3 redistribution are investigated in a series of tests. The formation of solid PSC particles, especially of those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
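The donor-cell idea behind first-order sedimentation schemes can be sketched in a few lines. This is a generic upwind step with illustrative numbers, much simpler than the mixing-ratio-profile scheme developed in the work above:

```python
# Minimal sketch of a first-order (donor-cell/upwind) sedimentation step for
# a column of particle mixing ratios. Grid spacing, velocities, and time step
# are illustrative values, not taken from the ECHAM5/MESSy submodel.

def sediment_step(q, w, dz, dt):
    """Advance mixing ratios q (index 0 = top of column) one time step under
    downward sedimentation velocities w (> 0, one per layer), using
    donor-cell fluxes. Returns the new column and the mass lost at the bottom.
    Stable for w * dt / dz <= 1 (the CFL condition)."""
    n = len(q)
    flux = [w[i] * q[i] * dt / dz for i in range(n)]  # amount leaving layer i
    q_new = q[:]
    for i in range(n):
        q_new[i] -= flux[i]            # mass settling out of layer i
        if i > 0:
            q_new[i] += flux[i - 1]    # mass arriving from the layer above
    return q_new, flux[-1]

q = [1.0, 2.0, 3.0, 2.0, 1.0]   # initial column (arbitrary units)
w = [0.01] * 5                  # uniform settling velocity (illustrative)
q1, bottom_out = sediment_step(q, w, dz=1.0, dt=10.0)
# the column loses exactly the mass that leaves through the bottom layer
```

A scheme of this kind conserves mass by construction but is numerically diffusive; the first-order profile reconstruction described in the abstract is one way to reduce that diffusion.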


In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles, namely negatively charged electrons and so-called holes, which are quasi-particles of positive charge, as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the most important property of this discretization is the continuity of the discrete normal fluxes. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge-transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at that stage, a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual-weighted-residual method to mixed finite elements.
For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh-refinement process.
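The estimate-mark-refine loop underlying such adaptive algorithms can be sketched generically. The indicator below is a crude 1D gradient-jump heuristic, not the dual-weighted-residual estimator or the mixed finite-element discretization of this work:

```python
import math

def f(x):
    # model solution with a sharp interior layer at x = 0.5 (illustrative)
    return math.tanh(50.0 * (x - 0.5))

def refine(mesh, n_rounds=6):
    """Estimate-mark-refine loop: in each round, compute a per-node error
    indicator (jump of the piecewise-linear slope), mark the cells adjacent
    to large jumps, and bisect every marked cell."""
    for _ in range(n_rounds):
        vals = [f(x) for x in mesh]
        slopes = [(vals[i + 1] - vals[i]) / (mesh[i + 1] - mesh[i])
                  for i in range(len(mesh) - 1)]
        jumps = [abs(slopes[i + 1] - slopes[i]) for i in range(len(slopes) - 1)]
        threshold = 0.5 * max(jumps)
        marked = {i for i, j in enumerate(jumps) if j >= threshold}
        # cell c touches jumps c-1 and c; bisect it if either is marked
        new_mesh = [mesh[0]]
        for i in range(len(mesh) - 1):
            if i in marked or (i - 1) in marked:
                new_mesh.append(0.5 * (mesh[i] + mesh[i + 1]))
            new_mesh.append(mesh[i + 1])
        mesh = new_mesh
    return mesh

mesh = refine([i / 10.0 for i in range(11)])
# the refined mesh clusters its points around the layer at x = 0.5
```

Goal-oriented estimators such as the dual-weighted-residual method replace this crude indicator with one weighted by a dual (adjoint) solution, so that refinement targets the error in a chosen functional output rather than a global norm.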