10 results for Optimal tests

in Universitätsbibliothek Kassel, Universität Kassel, Germany


Relevance:

20.00%

Publisher:

Abstract:

Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically with organic light-emitting diodes (OLEDs), since the light-emitting layer is very thin, under 100 nm. The main question is what pixel size is technologically achievable. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolution; in 3D or holographic images there is another dimension, depth. The major requirement is high resolution in the horizontal dimension in order to sustain the third dimension: special lenticular lenses or barrier masks present separate views to each eye. A high-resolution 3D display offers hundreds of different views of an object or landscape. OLEDs have the potential to be a key technology for future information displays. The display technology presented in this work promises to bring bright-colour 3D flat-panel displays into use in a unique way. Unlike a conventional TFT matrix, OLED displays have constant brightness and colour independent of the viewing angle, i.e. of the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors forms an OLED device. These materials are known as electroluminescent organic semiconductors (or organic photoconductors, OPCs). When an electrical current is applied, bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as the transparent electrode of an OLED. Such displays were the first to reach volume manufacture, and only a few products are available on the market at present. The key challenges that OLED technology faces in its application areas are producing high-quality white light, achieving low manufacturing costs, and increasing efficiency and lifetime at high brightness. Looking towards the future, by combining OLEDs with specially constructed surface lenses and proper image-management software it will be possible to achieve 3D images.
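The trade-off between panel resolution and the number of views in such multi-view displays can be made concrete with a little arithmetic. The following is a minimal sketch, assuming a hypothetical panel width in pixels and a chosen number of lenticular or barrier-mask views; the numbers are illustrative and not taken from the thesis:

```python
# Illustrative only: per-view horizontal resolution of a multi-view
# (lenticular or barrier-mask) 3D display. Each of the N views is
# interleaved across the panel's horizontal pixels, so the resolution
# seen by one eye shrinks roughly by a factor of N.

def per_view_horizontal_resolution(panel_width_px: int, num_views: int) -> float:
    """Approximate horizontal pixels available to a single view."""
    return panel_width_px / num_views

if __name__ == "__main__":
    panel_width_px = 3840   # hypothetical high-resolution OLED panel
    for num_views in (2, 8, 64):
        res = per_view_horizontal_resolution(panel_width_px, num_views)
        print(f"{num_views:3d} views -> ~{res:.0f} px per view")
```

This is why the abstract stresses very high horizontal resolution: the more views a display offers, the fewer horizontal pixels remain for each individual view.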

Relevance:

20.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is provided by land-use models. By applying land-use models it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python), extended with language features specific to land-use modeling, these data structures can be used and manipulated by modeling applications; the scripting-language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and support for the analysis of simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period from 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace-gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
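Before turning to the calibration component, the scripted access to grids, cells and attributes described above can be illustrated with a small sketch. The class and attribute names below are invented for illustration and are not the actual SITE API; the point is only the pattern of a land-use rule written in Python against attributed grid cells:

```python
# Hypothetical sketch of a scripted land-use rule operating on a cell grid,
# in the spirit of a framework that exposes grids, cells and attributes to
# an embedded Python interpreter. Class and attribute names are invented.

from dataclasses import dataclass, field

@dataclass
class Cell:
    land_use: str                      # e.g. "forest", "agriculture"
    attributes: dict = field(default_factory=dict)

@dataclass
class Grid:
    cells: list                        # flat list of Cell objects

def expand_agriculture(grid: Grid, suitability_threshold: float) -> None:
    """Convert forest cells to agriculture where suitability is high enough."""
    for cell in grid.cells:
        if (cell.land_use == "forest"
                and cell.attributes.get("suitability", 0.0) >= suitability_threshold):
            cell.land_use = "agriculture"

# Usage: build a toy grid and apply one step of the rule.
grid = Grid(cells=[Cell("forest", {"suitability": s}) for s in (0.2, 0.7, 0.9)])
expand_agriculture(grid, suitability_threshold=0.6)
print([c.land_use for c in grid.cells])   # ['forest', 'agriculture', 'agriculture']
```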
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, for which the respective reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and hence coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
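The figure-of-merit measure used as the objective function in the STORMA calibration above compares observed and simulated land-use change. A minimal sketch of one common definition of this measure (correctly simulated change divided by all cells where either the reference or the simulation shows change) is given below; it is a generic illustration, not the SITE implementation:

```python
# Generic sketch of the "figure of merit" map comparison measure used for
# calibrating land-use change models (one common definition, not the SITE code).
# Maps are sequences of categorical cell values of equal length.

def figure_of_merit(initial, reference, simulated):
    hits = misses = false_alarms = wrong_hits = 0
    for ini, ref, sim in zip(initial, reference, simulated):
        ref_changed = ref != ini
        sim_changed = sim != ini
        if ref_changed and sim_changed:
            if ref == sim:
                hits += 1          # change simulated correctly
            else:
                wrong_hits += 1    # change simulated, but wrong category
        elif ref_changed and not sim_changed:
            misses += 1            # observed change missed by the model
        elif sim_changed and not ref_changed:
            false_alarms += 1      # change simulated where none occurred
    denom = hits + misses + false_alarms + wrong_hits
    return hits / denom if denom else float("nan")

# Example: initial map, reference final map, simulated final map (toy data).
print(figure_of_merit("FFFA", "FAAA", "FAFA"))  # 0.5
```

A genetic algorithm can then vary the selected model parameters and keep the parameter sets that maximize this score against the reference maps.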

Relevance:

20.00%

Publisher:

Abstract:

Background and aim: Starting from a research deficit in the field of performance tests, identified by the working group of Bührlen et al. (2002), the aim of this work was to validate a performance test of lower limb activities (Polla). Methods: In a longitudinal study, the outcomes of a six-week physiotherapeutic treatment of an orthopaedic-traumatological patient collective aged 19 to 75 years (n=81) were recorded with the Polla and the SF-36 questionnaire. Results: The results show that the test statistics are well supported. With excellent intra-rater (n=29) and good inter-rater reliability (n=32), the consistency analysis shows satisfactory reliability. Criterion validity shows moderate correlations between the Polla and the SF-36 dimensions pain, physical role functioning and physical functioning. Based on the standardized response mean, both instruments show high responsiveness, which holds at follow-up (n=26) only for the Polla. Conclusion: The Polla is a free test procedure of high practical relevance; for reasons of time economy, a modified form of the Polla with only ten items and one measured test is recommended.
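Responsiveness is reported here via the standardized response mean (SRM), i.e. the mean of the pre/post change scores divided by their standard deviation. A minimal sketch of that computation, with purely illustrative scores:

```python
# Standardized response mean (SRM): mean of the pre/post change scores
# divided by their standard deviation. By the usual convention, values
# around 0.8 or above are read as large responsiveness. Data are illustrative.

from statistics import mean, stdev

def standardized_response_mean(pre, post):
    changes = [b - a for a, b in zip(pre, post)]
    return mean(changes) / stdev(changes)

pre_scores  = [12, 15, 10, 14, 11]   # hypothetical Polla scores before treatment
post_scores = [18, 19, 15, 17, 16]   # hypothetical scores after six weeks
print(f"SRM = {standardized_response_mean(pre_scores, post_scores):.2f}")
```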

Relevance:

20.00%

Publisher:

Abstract:

As part of empirical studies on the teaching and learning of mathematics, we developed a test (the "potential test") intended to measure the "mathematical capability" of 13- to 14-year-old students in England and Germany for comparison purposes. In this paper we describe the development of the test as well as the results of administering it to 1036 English and German learners. The results are interpreted with reference to characteristics of mathematics teaching in both countries known from our earlier case studies.

Relevance:

20.00%

Publisher:

Abstract:

In this work, (outlier-)robust estimators and tests for the unknown parameter of a continuous density function are developed using the likelihood depth introduced by Mizera and Müller (2004). The resulting procedures are then applied to three different distributions. For one-dimensional parameters, the likelihood depth of a parameter in a dataset is computed as the minimum of the proportion of observations for which the derivative of the log-likelihood function with respect to the parameter is non-negative and the proportion for which this derivative is non-positive. The parameter for which both proportions are equal therefore has the greatest depth. This parameter is initially chosen as the estimator, since the likelihood depth is intended to measure how well a parameter fits the dataset. Asymptotically, the parameter with the greatest depth is the one for which the probability that the derivative of the log-likelihood with respect to the parameter is non-negative for an observation equals one half. If this does not hold for the true underlying parameter, the estimator based on the likelihood depth is biased. This work shows how the bias can be corrected so that the corrected estimators are consistent. To develop tests for the parameter, the simplex likelihood depth introduced by Müller (2005), which is a U-statistic, is used. It turns out that for the same distributions for which the likelihood depth yields biased estimators, the simplex likelihood depth is an unbiased U-statistic. In particular, its asymptotic distribution is known and tests for various hypotheses can be formulated. For some hypotheses, however, the shift in the depth leads to poor power of the corresponding test. Corrected tests are therefore introduced, together with conditions under which they are consistent. The thesis consists of two parts. The first part presents the general theory of the estimators and tests and proves their consistency. In the second part, the theory is applied to three different distributions: the Weibull distribution and the Gaussian and Gumbel copulas. This demonstrates how the procedures of the first part can be used to derive (robust) consistent estimators and tests for the unknown parameter of a distribution. Overall, it is shown that robust estimators and tests based on likelihood depths can be found for all three distributions. On uncontaminated data, existing standard methods are partly superior, but the advantage of the new methods becomes apparent on contaminated data and data with outliers.
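The definition of the likelihood depth for a one-dimensional parameter translates directly into a small computation: for each candidate parameter, take the per-observation score (the derivative of the log-likelihood with respect to the parameter) and form the minimum of the two sign proportions. The sketch below illustrates this for an exponential model as a simple stand-in; the thesis itself treats the Weibull and copula cases, and this is not the author's code:

```python
# Illustrative sketch of likelihood depth for a one-dimensional parameter,
# using an exponential model as a stand-in. The per-observation score is
# d/dlam log f(x; lam) = 1/lam - x, and
# depth(lam) = min(share of non-negative scores, share of non-positive scores).

import numpy as np

def likelihood_depth(lam: float, x: np.ndarray) -> float:
    score = 1.0 / lam - x                      # per-observation score
    return min(np.mean(score >= 0), np.mean(score <= 0))

def depth_estimator(x: np.ndarray, grid: np.ndarray) -> float:
    depths = [likelihood_depth(lam, x) for lam in grid]
    return grid[int(np.argmax(depths))]        # parameter of maximal depth

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=500)       # true rate lambda = 1
grid = np.linspace(0.1, 3.0, 300)
print(depth_estimator(x, grid))                # close to 1/median(x), i.e. biased
```

For this model the depth-maximizing parameter settles near 1/median(x) rather than 1/mean(x), which is exactly the kind of bias the correction developed in the thesis is meant to remove.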

Relevance:

20.00%

Publisher:

Abstract:

The main task of this work has been to investigate the effects of anisotropy on the propagation of seismic waves in the upper mantle below Germany and adjacent areas. Refraction and reflection seismic experiments have proved the existence of upper-mantle anisotropy and its influence on the propagation of Pn waves. The 3D tomographic investigations carried out here for the crust and the upper mantle, taking the influence of anisotropy into account, close a gap in the investigations for Europe. They were performed with the SSH inversion program of Prof. Dr. M. Koch, which computes the seismic structure and the hypocenters simultaneously. A dataset with recordings from 1975 to 2003 was available, comprising a total of 60249 P-phase and 54212 S-phase records from 10028 seismic events. At the beginning, a precise analysis of the residuals (RES, the difference between calculated and observed arrival time) was carried out, which confirmed the existence of anisotropy for Pn phases. The sinusoidal distribution found there was compensated by extending the SSH program with an elliptical correction of the Pn velocities, defined by a slow axis and a perpendicular fast axis with a given azimuth. The azimuth of the fast axis was fixed by the simultaneous inversion at 25°-27°, with a velocity variation of ±2.5 about an average value of 8 km/s. This new value differs from the earlier one of 35° found in the initial residual analysis, a consequence of the hypocenters being recomputed together with the structure. Applying the elliptical correction resulted in a better fit of the vertically layered 1D model compared to the results of preceding seismological experiments and 1D and 2D investigations. The optimal result of the 1D inversion was used as the starting model for the 3D inversions to compute the three-dimensional picture of the seismic structure of the crust and upper mantle. The simultaneous inversion improved the relocation of the hypocenters and the reconstruction of the seismic structure, in agreement with the geology and tectonics described by other investigations. The results for the seismic structure and the relocation were confirmed by several tests. First, synthetic travel-time data were computed with an anisotropic variation and inverted with and without the anisotropic correction. Further, tests with randomly perturbed hypocenters and travel-time data were carried out to verify the influence of the initial values on the relocation accuracy and the seismic structure, and to test for further improvement through the anisotropic correction. Finally, the results of the work were applied to the 2004 Waldkirch earthquake to compare the isotropic and anisotropic relocations with the initial optimal one and to verify whether there is an improvement.
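Azimuthal Pn anisotropy of this kind is commonly parameterized as an elliptical (cos 2θ) variation of velocity with azimuth, with the fast axis at a given azimuth and the slow axis perpendicular to it. A minimal sketch of such a correction follows; the mean velocity, anisotropy amplitude and fast-axis azimuth are illustrative placeholders, not the calibrated parameters of the SSH inversion:

```python
# Illustrative elliptical (cos 2*theta) azimuthal correction of Pn velocities:
# fast axis at azimuth phi_fast, slow axis perpendicular to it.
# Parameter values are placeholders.

import math

def pn_velocity(azimuth_deg: float,
                v_mean: float = 8.0,      # km/s, average Pn velocity
                dv: float = 0.2,          # km/s, anisotropy amplitude (assumed)
                phi_fast_deg: float = 26.0) -> float:
    """Pn velocity as a function of propagation azimuth."""
    theta = math.radians(azimuth_deg - phi_fast_deg)
    return v_mean + dv * math.cos(2.0 * theta)

for az in (26, 71, 116):   # fast axis, 45 deg off, slow axis
    print(f"azimuth {az:3d} deg -> {pn_velocity(az):.2f} km/s")
```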

Relevance:

20.00%

Publisher:

Abstract:

We show that optimizing a quantum gate for an open quantum system requires the time evolution of only three states, irrespective of the dimension of the Hilbert space. This represents a significant reduction in computational resources compared to the complete basis of Liouville space that is commonly believed necessary for this task. The reduction is based on two observations: the target is not a general dynamical map but a unitary operation, and the time evolution of two properly chosen states is sufficient to distinguish any two unitaries. We illustrate gate optimization employing a reduced set of states for a controlled phase gate with trapped atoms as qubit carriers and an iSWAP gate with superconducting qubits.
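The general idea of judging a gate from a reduced set of propagated states can be sketched numerically: given the target unitary and the density matrices obtained by propagating a few chosen input states under the (possibly dissipative) dynamics, one can form an average overlap with the ideally transformed inputs. The probe states and the simple overlap functional below are illustrative choices, not the specific functional of the paper:

```python
# Illustrative reduced-set gate fidelity: compare a few propagated states
# rho_i(T) with the ideal targets U rho_i U^dagger via normalized
# Hilbert-Schmidt overlaps. Probe states and functional are illustrative.

import numpy as np

def hs_overlap(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Real part of the Hilbert-Schmidt overlap Tr[rho^dagger sigma]."""
    return float(np.real(np.trace(rho.conj().T @ sigma)))

def reduced_set_fidelity(U: np.ndarray, probes, propagated) -> float:
    """Average normalized overlap of propagated probes with their ideal images."""
    vals = []
    for rho, rho_T in zip(probes, propagated):
        ideal = U @ rho @ U.conj().T
        vals.append(hs_overlap(ideal, rho_T) / hs_overlap(ideal, ideal))
    return sum(vals) / len(vals)

# Toy example for one qubit: target is a phase gate; here the "dynamics" are
# ideal, so the fidelity evaluates to ~1.0.
U = np.diag([1.0, np.exp(1j * np.pi / 2)])
probes = [np.diag([1.0, 0.0]),        # |0><0|
          np.full((2, 2), 0.5),       # |+><+|
          np.eye(2) / 2]              # maximally mixed state
propagated = [U @ rho @ U.conj().T for rho in probes]
print(reduced_set_fidelity(U, probes, propagated))
```

In an actual optimization the propagated states would come from solving the open-system equations of motion under the current control field, and the point of the paper is that a handful of well-chosen probes suffices regardless of the Hilbert-space dimension.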

Relevance:

20.00%

Publisher:

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems to be determined, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states to a quantum channel is sufficient to judge whether a desired unitary gate is realised. This allows the minimal input for such a task to be determined, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits of certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit; it also extends to systems whose fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interference in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to reach a given fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.