952 results for Vectorial diagram and phasorial diagram
Abstract:
Digital stochastic magnetic-field sensor array. Stefan Rohrer. As part of a multi-year research project funded by the German Research Foundation (DFG), digital magnetic-field sensors with widths down to 1 µm were developed at the Institute of Microelectronics (IPM) of the University of Kassel. This dissertation presents a magnetic-field sensor array that emerged from this research project and was designed specifically to detect digital magnetic fields quickly, on minimal area, and with good spatial and temporal resolution. The test chip, still fabricated in a 1.0 µm CMOS process, operates at clock frequencies up to 27 MHz with a sensor pitch of 6.75 µm, making it currently the smallest and fastest digital magnetic-field sensor array in a standard CMOS process. Converted to a 0.09 µm technology, frequencies up to 1 GHz can be reached at a sensor pitch below 1 µm. The dissertation describes the most important results of the project in detail. The sensor is based on a cross-coupled inverter arrangement. The magnetic-field-sensitive element is a double-drain MAGFET based on the Hall effect, which influences the switching behaviour of the bistable circuit. The strength and polarity of the magnetic field can be determined from the digital output data. The overall arrangement forms a stochastic magnetic-field sensor. The thesis presents a model for the switching behaviour of the cross-coupled inverters. The noise contributions of the sensor are analysed and modelled in a system of stochastic differential equations. The solution of the stochastic differential equation shows the evolution of the probability distribution of the output signal over time, and which factors influence the error probability of the sensor. It indicates which parameters lead to an optimal result in the design and layout of a stochastic sensor.
The circuits and layout components of a digital stochastic sensor based on these theoretical calculations are presented in the thesis. Because of technology-dependent process tolerances, each detector requires its own compensating calibration; different implementations of this calibration are presented and evaluated. For more accurate modelling, a SPICE model is set up and used to derive a stochastic differential equation for the switching behaviour of the sensor with SPICE-determined coefficients. Compared with standard magnetic-field sensors, the stochastic digital readout offers the advantage of flexible measurement: one can choose between fast measurements at reduced accuracy and high local resolution, or high accuracy in the evaluation of slowly varying magnetic fields below 1 mT. The thesis presents the measurement results of the test chip: the measured sensitivity and error probability as well as the optimal operating points and the characteristic curves. The relative sensitivity of the MAGFETs is 0.0075/T; the achievable error probabilities are listed in the thesis. The measured switching behaviour of the stochastic sensors agrees well with the theoretical model. Various measurements of analogue and digital magnetic fields confirm the applicability of the sensor for fast magnetic-field measurements up to 27 MHz, even for small magnetic fields below 1 mT. Measurements of the sensor characteristics as a function of temperature show that the sensitivity increases significantly at very low temperatures owing to the decrease in noise. A summary and an extensive bibliography give an overview of the state of the art.
Abstract:
Using a relativistic self-consistent correlation diagram, a first interpretation of the shape and position of L MO X-rays is given within a quasi-adiabatic model.
Abstract:
The first calculation of a self-consistent relativistic many-electron correlation diagram ever performed (for the system Au - I) yields good agreement with the spectral shape and position of the observed noncharacteristic X-rays within the quasi-adiabatic model.
Abstract:
In the collision system Xe - Ag, the thresholds for excitation of quasimolecular L radiation and characteristic Ag L radiation have been found to lie at about 5 MeV and 1 MeV, respectively. These results are discussed on the basis of ab initio calculations of the screened interaction potential and the electron-correlation diagram.
Abstract:
The quasimolecular M radiation emitted in collisions between Xe ions of up to 6 MeV energy and solid targets of Ta, Au, Pb and Bi, as well as a gaseous target of Pb(CH_3)_4, has been studied. Using a realistic theoretical correlation diagram, a semiquantitative explanation of the observed peak structure is given.
Abstract:
Femtosecond time-resolved techniques with KETOF (kinetic-energy time-of-flight) detection in a molecular beam are developed for studies of the vectorial dynamics of transition states. Application to the dissociation reaction of IHgI is presented. For this system, the complex [I···Hg···I]^‡* is unstable and, through the symmetric and asymmetric stretch motions, yields different product fragments: [I···Hg···I]^‡* → HgI(X^2Σ^+) + I(^2P_3/2) [or I*(^2P_1/2)] (1a); [I···Hg···I]^‡* → Hg(^1S_0) + I(^2P_3/2) + I(^2P_3/2) [or I*(^2P_1/2)] (1b). These two channels, (1a) and (1b), lead to different kinetic-energy distributions in the products. It is shown that the motion of the wave packet in the transition-state region can be observed by MPI mass detection; the transient time ranges from 120 to 300 fs depending on the available energy. With polarized pulses, the vectorial properties (alignment of transition moments relative to the recoil direction) are studied for fragment separations on the femtosecond time scale. The results indicate the nature of the structure (symmetry properties) and the correlation to final products. For 311-nm excitation, no evidence of crossing between the I and I* potentials is found at the internuclear separations studied. (Results for 287-nm excitation are also presented.) Molecular dynamics simulations and studies by laser-induced fluorescence support these findings.
Abstract:
Concept lattices are used in formal concept analysis to represent data conceptually so that the original data are still recognizable. Their line diagrams should reflect the semantic relationships within the data. Up to now, no satisfactory automatic drawing programs for this task exist. The geometrical heuristic is the most successful tool for drawing concept lattices manually. It uses a geometric representation as an intermediate step between the list of upper covers and the line diagram of the lattice.
Abstract:
This thesis aims at empowering software customers with a tool to build software tests themselves, based on a gradual refinement of natural language scenarios into executable visual test models. The process is divided into five steps: 1. First, a natural language parser is used to extract a graph of grammatical relations from the textual scenario descriptions. 2. The resulting graph is transformed into an informal story pattern by interpreting structurization rules based on Fujaba Story Diagrams. 3. While the informal story pattern can already be used by humans, the diagram still lacks technical details, especially type information. To add them, a recommender-based framework uses websites and other resources to generate formalization rules. 4. In preparation for code generation, the classes derived for formal story patterns are aligned across all story steps, substituting a class diagram. 5. Finally, a headless version of Fujaba is used to generate an executable JUnit test. The graph transformations used in the browser application are specified in a textual domain-specific language and visualized as story patterns. Last but not least, only the heavyweight parsing (step 1) and code generation (step 5) are executed on the server side. All graph transformation steps (2, 3 and 4) are executed in the browser by an interpreter written in JavaScript/GWT. This result paves the way for online collaboration between global teams of software customers, IT business analysts and software developers.
Abstract:
The use of orthonormal coordinates in the simplex and, particularly, balance coordinates, has suggested the use of a dendrogram for the exploratory analysis of compositional data. The dendrogram is based on a sequential binary partition of a compositional vector into groups of parts. At each step of a partition, one group of parts is divided into two new groups, and a balancing axis in the simplex between both groups is defined. The set of balancing axes constitutes an orthonormal basis, and the projections of the sample on them are orthogonal coordinates. They can be represented in a dendrogram-like graph showing: (a) the way of grouping parts of the compositional vector; (b) the explanatory role of each subcomposition generated in the partition process; (c) the decomposition of the total variance into balance components associated with each binary partition; (d) a box-plot of each balance. This representation helps interpret balance coordinates, identify the most explanatory coordinates, and describe the whole sample in a single diagram, independently of the number of parts of the sample.
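The balance coordinates described above can be sketched numerically. The following is a minimal illustration, not taken from the paper: the 4-part composition and the sequential binary partition are hypothetical, and `balance` implements the standard ilr-balance formula sqrt(rs/(r+s)) * ln(g(numerator parts)/g(denominator parts)) with g the geometric mean.

```python
import numpy as np

def balance(x, num, den):
    """Balance between the parts indexed by `num` and `den`:
    sqrt(r*s/(r+s)) * ln(g(x[num]) / g(x[den])), g = geometric mean."""
    r, s = len(num), len(den)
    g_num = np.exp(np.mean(np.log(x[num])))
    g_den = np.exp(np.mean(np.log(x[den])))
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

# Hypothetical 4-part composition and sequential binary partition:
# step 1: {0,1} vs {2,3};  step 2: {0} vs {1};  step 3: {2} vs {3}
x = np.array([0.4, 0.3, 0.2, 0.1])
coords = [balance(x, [0, 1], [2, 3]),
          balance(x, [0], [1]),
          balance(x, [2], [3])]
```

A D-part composition yields D - 1 balances (here three), one per binary split, which is what makes the dendrogram an exhaustive orthonormal-coordinate display.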
Abstract:
Hydrogeological research usually includes some statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved to be effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional bi-plot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components, which can be plotted in a ternary diagram. These hidden components might be interpreted as end-members. We have analysed 14 molarities in 31 sampling stations along the Llobregat River and its tributaries, with a monthly measure during two years. We have obtained a bi-plot with 57% of explained total variance, from which we have extracted two factors: factor G, reflecting geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, analyses of diffuse and point sources identified in the area are available: springs, potash mining lixiviates, sewage, and fertilisers.
Each of these sources shows a clear link with one of the extreme samples, except fertilisers, owing to the heterogeneity of their composition. This approach is a useful tool to distinguish end-members and characterise them, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
Abstract:
Evolution of compositions in time, space, temperature or other covariates is frequent in practice. For instance, the radioactive decay of a sample changes its composition with time. Some of the involved isotopes decay into other isotopes of the sample, thus producing a transfer of mass from some components to others while preserving the total mass present in the system. This evolution is traditionally modelled as a system of ordinary differential equations for the mass of each component. However, this kind of evolution can be decomposed into a compositional change, expressed in terms of simplicial derivatives, and a mass evolution (constant in this example). A first result is that the simplicial system of differential equations is non-linear, even though some subcompositions behave linearly. The goal is to study the characteristics of such simplicial systems of differential equations, such as linearity and stability. This is performed by extracting the compositional differential equations from the mass equations. Then, simplicial derivatives are expressed in coordinates of the simplex, thus reducing the problem to the standard theory of systems of differential equations, including stability. The characterisation of stability of these non-linear systems relies on the linearisation of the system of differential equations at the stationary point, if any. The eigenvalues of the linearised matrix and the associated behaviour of the orbits are the main tools. For a three-component system, these orbits can be plotted either in coordinates of the simplex or in a ternary diagram. A characterisation of processes with transfer of mass in closed systems in terms of stability is thus concluded. Two examples are presented for illustration; one of them is a radioactive decay.
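The mass-transfer setting described above can be sketched for the simplest case. This is a hypothetical two-component closed system A → B (e.g. one radioactive decay step) with rate constant k chosen arbitrarily; the mass equations are linear and conserve total mass, while the single ilr coordinate of the composition, z = ln(m_A/m_B)/√2, evolves non-linearly, as the abstract states.

```python
import numpy as np

# Hypothetical closed two-component system with transfer A -> B.
# Mass ODEs: dmA/dt = -k*mA, dmB/dt = +k*mA (total mass conserved).
k, dt, steps = 0.5, 0.01, 1000
m = np.array([0.9, 0.1])          # initial masses, total = 1
zs = []                           # ilr coordinate of the composition
for _ in range(steps):
    zs.append(np.log(m[0] / m[1]) / np.sqrt(2))
    dm = np.array([-k * m[0], k * m[0]])   # transfer of mass
    m = m + dt * dm                        # forward Euler step
# The compositional equation dz/dt = -(k/sqrt(2)) * (1 + mA/mB)
# depends on the composition itself, i.e. it is non-linear in z.
```

The total mass stays constant throughout, while z drifts monotonically toward the pure-B vertex of the simplex, which is the stationary point of this transfer process.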
Abstract:
Exam questions and solutions in LaTeX
Abstract:
Exam questions and solutions in PDF
Abstract:
This paper analyses the impact on the gross loan portfolio of the Colombian financial sector of a shock to the macroeconomic variables, and vice versa. The analysis was carried out with a vector error-correction model (VECM). The results first indicate that there is a causal, long-run relationship between the real net loan portfolio of the financial sector, Gross Domestic Product, the real DTF interest rate and the Real Exchange Rate Index. Finally, the impulse responses of these variables were found to be consistent with economic theory and the stylised facts.
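The error-correction idea behind a VECM can be sketched on synthetic data. This is not the paper's model: the two series, the seed and the Engle-Granger two-step estimation below are illustrative assumptions, showing only how a long-run (cointegrating) relation and a negative adjustment speed are recovered.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# x is a random walk (common stochastic trend); y is cointegrated with x
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)          # long-run relation y ~ 2x

# Step 1: estimate the cointegrating relation by OLS
beta = np.polyfit(x, y, 1)[0]             # slope of y on x
ect = y - beta * x                        # error-correction term

# Step 2: regress the change in y on the lagged error-correction term
dy = np.diff(y)
alpha = np.polyfit(ect[:-1], dy, 1)[0]    # adjustment speed
# alpha < 0: y is pulled back toward the long-run equilibrium
```

A full VECM additionally includes lagged differences of all variables and is estimated jointly (e.g. by Johansen's method), but the sign of the adjustment coefficient carries the same interpretation as the impulse-response logic mentioned in the abstract.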
Abstract:
Systems thinking is a way of interpreting and understanding phenomena that differs from the conventional, so-called reductionist approach: understanding is achieved not by decomposing the system into its parts, but by emphasising the comprehension of the system as a whole and the interrelationships that arise within it; interpretation therefore does not proceed from a cause-and-effect analysis, but from an understanding of the system within the context of a larger whole. This research applies systems thinking to a practical case of an organisation, the Hospital Engativá: the organisation is interpreted from a systemic point of view, building a causal-loop diagram that allows the organisation to be read from this perspective. The model was developed in a system-dynamics tool, and the design and simulation are limited to the Portfolio Process (Collection and Recovery Management), with a reading and interpretation of the results and findings produced by the model. Finally, it is concluded that it is feasible to manage an organisation from a systems-thinking perspective and that this improves decision-making.