958 results for semi-classical analysis


Relevance: 80.00%

Abstract:

Rockfall is a widespread and hazardous process in mountain environments, but data on past events are only rarely available. Growth-ring series from trees impacted by rockfall have been used successfully in the past to overcome the lack of archival records. Dendrogeomorphic techniques have been demonstrated to allow very accurate dating and reconstruction of spatial and temporal rockfall activity, but the approach has been cited as labor intensive and time consuming. In this study, we present a simplified method to quantify rockfall processes on forested slopes that requires less time and effort. The approach is based on counting visible scars on the stem surface of Common beech (Fagus sylvatica L.). Data are presented from a site in the Inn valley (Austria), where rocks are frequently detached from an ~200-m-high, south-facing limestone cliff. We compare results obtained from (i) the “classical” analysis of growth disturbances in the tree-ring series of 33 Norway spruces (Picea abies (L.) Karst.) and (ii) data obtained with a scar count on the stem surface of 50 F. sylvatica trees. A total of 277 rockfall events since A.D. 1819 could be reconstructed from the tree-ring records of P. abies, whereas 1140 scars were observed on the stem surface of F. sylvatica. Absolute numbers of rockfalls (and hence return intervals) vary significantly between the approaches, and the mean number of rockfalls observed on the stem surface of F. sylvatica exceeds that of P. abies by a factor of 2.7. On the other hand, both methods yield comparable data on the spatial distribution of relative rockfall activity. The differences may be explained by a large proportion of masked scars in P. abies and the conservation of signs of impacts on the stems of F. sylvatica. In addition, the data indicate that several scars on the bark of F. sylvatica may stem from the same impact and thus lead to an overestimation of rockfall activity.

Relevance: 80.00%

Abstract:

In the epidemiology literature, it is often necessary to investigate relationships between means where the levels of the experiment are monotone sets forming a partition of the range of sampling values. In this setting, the analysis of the group means is generally performed using classical analysis of variance (ANOVA), yet this practice has never been challenged. In this dissertation, we formulate and present an examination of its validity. First, the classical assumptions of normality and constant variance are not always true. Second, under the null hypothesis of equal means, the test statistic of the classical ANOVA technique is still valid. Third, when the hypothesis of equal means is rejected, the classical analysis techniques for hypotheses of contrasts are not valid. Fourth, under the alternative hypothesis, we can show that the monotone property of the levels leads to the conclusion that the means are monotone. Fifth, we propose an appropriate method for handling the data in this situation.
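A minimal sketch in Python of the classical one-way ANOVA comparison that the dissertation scrutinises; the three exposure groups, their sizes, and the use of scipy.stats.f_oneway are illustrative assumptions, not taken from the dissertation.

# Minimal sketch of the classical one-way ANOVA of group means discussed
# above.  Group labels and measurements are hypothetical.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Three exposure levels forming a partition of the sampling range.
group_low = rng.normal(loc=10.0, scale=2.0, size=30)
group_mid = rng.normal(loc=11.0, scale=2.0, size=30)
group_high = rng.normal(loc=12.5, scale=2.0, size=30)

f_stat, p_value = f_oneway(group_low, group_mid, group_high)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# The classical test assumes normality and equal within-group variances,
# exactly the assumptions the dissertation argues need not hold here.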

Relevance: 80.00%

Abstract:

In the linear world, classical microwave circuit design relies on s-parameters because of their ability to successfully characterize the behavior of any linear circuit. The direct use of s-parameters in measurement systems and in linear simulation tools has facilitated their extensive use and success in the design and characterization of microwave circuits and subsystems. Nevertheless, despite the great success of s-parameters in the microwave community, the main drawback of this formulation is its limited ability to predict the behavior of real non-linear systems. Nowadays, a major challenge for microwave designers is the development of an analogous framework that integrates non-linear modeling, large-signal measurement hardware and non-linear simulation environments, in order to extend the capabilities of s-parameters to non-linear regimes and thus provide an infrastructure for non-linear design and test in a reliable and efficient way. Recently, different attempts to provide this common platform have been introduced, such as the Cardiff approach and the Agilent X-parameters. Hence, this Thesis aims to demonstrate the capability of X-parameters to provide this non-linear design and test framework in a CAD-based oscillator context. Furthermore, the classical analysis and design of linear microwave transistor-based circuits rests on simple analytical approaches, involving the transistor s-parameters, that quickly provide an analytical solution for the input/output transistor loading conditions as well as analytically determine fundamental parameters such as the stability factor, the power gain contours or the input/output match. Hence, the development of similar analytical design tools that extend the capabilities of s-parameters in small-signal design to non-linear applications is a new challenge faced in the present work. Therefore, the development of an analytical design framework, based on load-independent X-parameters, constitutes the core of this Thesis. These analytical non-linear design approaches would make it possible to significantly improve current large-signal design procedures and to dramatically decrease the required design time, thus yielding more efficient approaches.
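As an example of the classical s-parameter design quantities mentioned above (a standard textbook result, not a contribution of the thesis), the Rollet stability factor of a two-port is

K = \frac{1 - |S_{11}|^2 - |S_{22}|^2 + |\Delta|^2}{2\,|S_{12} S_{21}|},
\qquad \Delta = S_{11} S_{22} - S_{12} S_{21},

and the two-port is unconditionally stable when K > 1 and |\Delta| < 1. The thesis seeks analogous analytical design expressions formulated in terms of X-parameters.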

Relevance: 80.00%

Abstract:

This paper explores the limits and potentials of European citizenship as a transnational form of social integration, taking as a comparison Marshall's classical analysis of the historical development of social rights in the context of the national Welfare State. It is submitted that this potential is currently frustrated by the prevailing negative-integration dimension in which the interplay between Union citizenship and national Welfare State systems takes place. This negative dimension pervades the entire case law of the Court of Justice on Union citizenship, even becoming dominant – after the famous Viking and Laval judgements – in the ways in which the judges in Luxembourg have built, and limited, what in Marshall’s terms might be called the European collective dimension of “industrial citizenship”. The new architecture of the economic and monetary governance of the Union, based as it is on an unprecedented effort towards a creeping constitutionalisation of a neo-liberal politics of austerity and welfare retrenchment, is destined to strengthen the de-structuring pressures on the industrial-relations and social protection systems of the Member States. The conclusions sum up the main critical arguments and make some suggestions for an alternative path for re-politicising the social question in Europe.

Relevance: 80.00%

Abstract:

Many studies on birds focus on the collection of data through an experimental design suitable for investigation in a classical analysis of variance (ANOVA) framework. Although many findings are confirmed by one or more experts, expert information is rarely used in conjunction with the survey data to enhance the explanatory and predictive power of the model. We explore this neglected aspect of ecological modelling through a study on Australian woodland birds, focusing on the potential impact of different intensities of commercial cattle grazing on bird density in woodland habitat. We examine a number of Bayesian hierarchical random-effects models, which cater for overdispersion and a high frequency of zeros in the data, using WinBUGS, and explore the variation between and within different grazing regimes and species. The impact and value of expert information are investigated through the inclusion of priors that reflect the experience of 20 experts in the field of bird responses to disturbance. Results indicate that expert information moderates the survey data, especially in situations where there are little or no data. When experts agreed, credible intervals for predictions were tightened considerably. When experts failed to agree, results were similar to those obtained in the absence of expert information. Overall, we found that without expert opinion our knowledge was quite weak. The fact that the survey data are, in general, quite consistent with expert opinion shows that we do know something about birds and grazing, and that we could learn much faster if this approach were used more widely in ecology, where data are scarce. Copyright (c) 2005 John Wiley & Sons, Ltd.
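A minimal sketch in Python (not the paper's WinBUGS hierarchical model) of the moderating effect described above: an informative expert prior combined with very sparse count data via a conjugate Gamma-Poisson update. All numbers are hypothetical.

import numpy as np

# Hypothetical expert prior on mean bird density (birds per transect),
# encoded as a Gamma(alpha, beta) prior with mean alpha / beta = 4.0.
alpha_prior, beta_prior = 8.0, 2.0

# Very few survey counts from a heavily grazed site.
counts = np.array([2, 0, 3])

# Conjugate Gamma-Poisson posterior update.
alpha_post = alpha_prior + counts.sum()
beta_post = beta_prior + len(counts)

print(f"prior mean     {alpha_prior / beta_prior:.2f}")
print(f"data mean      {counts.mean():.2f}")
print(f"posterior mean {alpha_post / beta_post:.2f}")
# With only three observations the posterior sits between the expert prior
# and the raw data; as more surveys accumulate, the data dominate.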

Relevance: 80.00%

Abstract:

Adsorption of argon at its boiling point in finite cylindrical pores is considered by means of non-local density functional theory (NLDFT) with reference to MCM-41 silica. The NLDFT was adjusted to amorphous solids, which allowed us to quantitatively describe the argon adsorption isotherm on a nonporous reference silica over the entire bulk pressure range. In contrast to the conventional NLDFT technique, application of the model to cylindrical pores does not show any layering before the phase transition, in conformity with experimental data. The finite pore is modeled as a cylindrical cavity bounded at its mouth by an infinite flat surface perpendicular to the pore axis. The adsorption of argon in pores of 4 and 5 nm diameter is analyzed in the canonical and grand canonical ensembles using a two-dimensional version of NLDFT, which accounts for the radial and longitudinal fluid density distributions. The simulation results did not show any unusual features associated with accounting for the outer surface and support the conclusions obtained from the classical analysis of capillary condensation and evaporation. That is, spontaneous condensation occurs at the vapor-like spinodal point, which is the upper limit of mechanical stability of the liquid-like film wetting the pore wall, while evaporation occurs via recession of the semispherical meniscus from the pore mouth, with complete evaporation of the core occurring at the equilibrium transition pressure. Visualization of the pore filling and emptying in the form of contour lines is presented.
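The classical analysis of capillary condensation and evaporation referred to above is commonly summarised by the Kelvin equation (standard textbook form, not part of the abstract); for a meniscus of mean radius of curvature r_m and contact angle \theta at temperature T,

\ln\frac{p}{p_0} = -\frac{2\gamma V_L \cos\theta}{r_m R T},

where \gamma is the liquid surface tension and V_L its molar volume. Condensation and evaporation correspond to different meniscus geometries and hence to different effective values of r_m.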

Relevance: 80.00%

Abstract:

In the absence of an external frame of reference, i.e., in background-independent theories such as general relativity, physical degrees of freedom must describe relations between systems. Using a simple model, we investigate how such a relational quantum theory naturally arises by promoting reference systems to the status of dynamical entities. Our goal is twofold. First, we demonstrate using elementary quantum theory how any quantum mechanical experiment admits a purely relational description at a fundamental level. Second, we describe how the original non-relational theory approximately emerges from the fully relational theory when reference systems become semi-classical. Our technique is motivated by a Bayesian approach to quantum mechanics, and relies on the noiseless subsystem method of quantum information science used to protect quantum states against undesired noise. The relational theory naturally predicts a fundamental decoherence mechanism, so an arrow of time emerges from a time-symmetric theory. Moreover, our model circumvents the problem of the collapse of the wave packet, as the probability interpretation is only ever applied to diagonal density operators. Finally, the physical states of the relational theory can be described in terms of spin networks, introduced by Penrose as a combinatorial description of geometry and widely studied in the loop formulation of quantum gravity. Thus, our simple bottom-up approach (starting from the semiclassical limit to derive the fully relational quantum theory) may offer interesting insights into the low-energy limit of quantum gravity.

Relevance: 80.00%

Abstract:

The focus of our work is the verification of tight functional properties of numerical programs, such as showing that a floating-point implementation of Riemann integration computes a close approximation of the exact integral. Programmers and engineers writing such programs will benefit from verification tools that support an expressive specification language and that are highly automated. Our work provides a new method for the verification of numerical software, supporting a substantially more expressive specification language than other publicly available automated tools. The additional expressivity in the specification language is provided by two constructs. First, the specification can feature inclusions between interval arithmetic expressions. Second, the integral operator from classical analysis can be used in the specifications, where the integration bounds can be arbitrary expressions over real variables. To support our claim of expressivity, we outline the verification of four example programs, including the integration example mentioned earlier. A key component of our method is an algorithm for proving numerical theorems. This algorithm is based on automatic polynomial approximation of non-linear real and real-interval functions defined by expressions. The PolyPaver tool is our implementation of the algorithm and its source code is publicly available. In this paper we report on experiments using PolyPaver which indicate that the additional expressivity does not come at a performance cost when compared with other publicly available state-of-the-art provers. We also include a scalability study that explores the limits of PolyPaver in proving tight functional specifications of progressively larger randomly generated programs. © 2014 Springer International Publishing Switzerland.
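A minimal sketch in Python (not PolyPaver and not its specification language) of the kind of property expressible with the two constructs above: an interval-arithmetic enclosure of a Riemann sum that is checked for inclusion in a tolerance band around the exact integral. The function, bounds and tolerance are hypothetical.

# Minimal sketch of an interval-arithmetic inclusion check for a
# Riemann-sum approximation of an integral.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

def enclose_integral(f_enclosure, a, b, n):
    """Enclose the integral of f over [a, b] by summing n interval boxes."""
    h = (b - a) / n
    total = Interval(0.0, 0.0)
    for i in range(n):
        lo, hi = a + i * h, a + (i + 1) * h
        total = total + f_enclosure(lo, hi) * Interval(h, h)
    return total

# f(x) = x^2 is monotone on [0, 1], so [lo^2, hi^2] encloses f on [lo, hi].
enclosure = enclose_integral(lambda lo, hi: Interval(lo * lo, hi * hi),
                             0.0, 1.0, 1000)
exact = 1.0 / 3.0
eps = 1e-3
# Specification-style inclusion: the enclosure must lie inside [exact - eps, exact + eps].
assert exact - eps <= enclosure.lo and enclosure.hi <= exact + eps
print(enclosure)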

Relevance: 80.00%

Abstract:

∗ Partially supported by Grant MM-428/94 of MESC.

Relevance: 80.00%

Abstract:

Purpose: Recent studies have documented a link between axial myopia and ciliary muscle morphology; yet the variation in biometric characteristics of the emmetropic ciliary muscle is not fully known. Ciliary muscle morphology, including symmetry, was investigated between both eyes of emmetropic participants and correlated with ocular biometric parameters. Methods: Anterior segment optical coherence tomography (Zeiss, Visante) was utilised to image both eyes of 49 emmetropic participants (mean spherical equivalent refractive error (MSE) ≥ -0.55; < +0.75 D), aged 19 to 26 years. High-resolution images were obtained of nasal and temporal aspects of the ciliary muscle in the relaxed state. MSE of both eyes was recorded using the Grand Seiko WAM 5500; axial length (AXL), anterior chamber depth (ACD) and lens thickness (LT) of the right eye were obtained using the Haag-Streit Lenstar LS 900 biometer. A bespoke semi-objective analysis programme was used to measure a range of ciliary muscle parameters. Results: Temporal ciliary muscle overall length (CML) was greater than nasal CML in both eyes (right: 3.58 ± 0.40 mm and 3.85 ± 0.39 mm for nasal and temporal aspects, respectively, P < 0.001; left: 3.65 ± 0.35 mm and 3.88 ± 0.41 mm for nasal and temporal aspects, respectively, P < 0.001). Temporal ciliary muscle thickness (CMT) was greater than nasal CMT at 2 mm and 3 mm from the scleral spur (CM2 and CM3, respectively) in each eye (right CM2: 0.29 ± 0.05 mm and 0.32 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001; left CM2: 0.30 ± 0.05 mm and 0.32 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001; right CM3: 0.13 ± 0.05 mm and 0.16 ± 0.04 mm for nasal and temporal aspects, respectively, P < 0.001; left CM3: 0.14 ± 0.04 mm and 0.17 ± 0.05 mm for nasal and temporal aspects, respectively, P < 0.001). AXL was positively correlated with ciliary muscle anterior length (AL) (e.g. P < 0.001, r² = 0.262 for left temporal aspect), CML (P = 0.003, r² = 0.175 for right nasal aspect) and ACD (P = 0.01, r² = 0.181). Conclusions: Morphological characteristics of the ciliary muscle in emmetropic eyes display high levels of symmetry between the eyes. Greater CML and AL are linked to greater AXL and ACD, indicating ciliary muscle growth with normal ocular development.

Relevance: 80.00%

Abstract:

Bulk gallium nitride (GaN) power semiconductor devices are gaining significant interest in recent years, creating the need for technology computer-aided design (TCAD) simulation to accurately model and optimize these devices. This paper comprehensively reviews and compares the different GaN physical models and model parameters in the literature, and discusses the appropriate selection of these models and parameters for TCAD simulation. 2-D drift-diffusion semi-classical simulation is carried out for 2.6 kV and 3.7 kV bulk GaN vertical PN diodes. The simulated forward current-voltage and reverse breakdown characteristics are in good agreement with the measured data, even over a wide temperature range.
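For reference, the drift-diffusion model mentioned above is, in its standard textbook form (the specific GaN models and parameters reviewed in the paper refine this), the coupled Poisson and carrier continuity equations

\nabla \cdot (\varepsilon \nabla \psi) = -q\,(p - n + N_D^+ - N_A^-),
\qquad
\frac{\partial n}{\partial t} = \frac{1}{q}\nabla \cdot \mathbf{J}_n + G - R,
\qquad
\frac{\partial p}{\partial t} = -\frac{1}{q}\nabla \cdot \mathbf{J}_p + G - R,

with drift-diffusion current densities \mathbf{J}_n = q n \mu_n \mathbf{E} + q D_n \nabla n and \mathbf{J}_p = q p \mu_p \mathbf{E} - q D_p \nabla p.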

Relevance: 80.00%

Abstract:

We investigate the implications of the nonlinear and non-local multi-particle Schrödinger-Newton equation for the motion of the mass centre of an extended multi-particle object, giving self-contained and comprehensible derivations. In particular, we discuss two opposite limiting cases. In the first case, the width of the centre-of-mass wave packet is assumed to be much larger than the actual extent of the object; in the second case it is assumed to be much smaller. Both cases result in nonlinear deviations from ordinary free Schrödinger evolution for the centre of mass. On a general conceptual level we include some discussion in order to clarify the physical basis of, and the motivation for, studying the Schrödinger-Newton equation.
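For reference, the widely quoted one-particle form of the Schrödinger-Newton equation is (the paper itself treats the multi-particle generalization and its centre-of-mass motion)

i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2 \psi(\mathbf{r},t)
    - G m^2 \left( \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\, d^3 r' \right) \psi(\mathbf{r},t),

i.e. the free Schrödinger equation supplemented by the Newtonian potential sourced by the wave function's own probability density, which is the origin of both the nonlinearity and the non-locality mentioned above.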

Relevance: 80.00%

Abstract:

The square root velocity framework is a method in shape analysis for defining a distance between curves and functional data. Identifying two curves if they differ by a reparametrization leads to the quotient space of unparametrized curves. In this paper we study analytical and topological aspects of this construction for the class of absolutely continuous curves. We show that the square root velocity transform is a homeomorphism and that the action of the reparametrization semigroup is continuous. We also show that, given two $C^1$-curves, there exist optimal reparametrizations realising the minimal distance between the unparametrized curves represented by them.
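For reference (standard definitions of the framework, consistent with but not quoted from the abstract), the square root velocity transform of an absolutely continuous curve $c:[0,1]\to\mathbb{R}^n$ is

q(t) = \begin{cases} \dot{c}(t)/\sqrt{|\dot{c}(t)|} & \text{if } \dot{c}(t) \neq 0, \\ 0 & \text{otherwise,} \end{cases}

and the distance between the unparametrized curves represented by $c_1$ and $c_2$ is

d([c_1],[c_2]) = \inf_{\varphi} \big\| q_1 - (q_2 \circ \varphi)\,\sqrt{\dot{\varphi}} \big\|_{L^2([0,1])},

with the infimum taken over reparametrizations $\varphi$; the paper's result on optimal reparametrizations concerns when this infimum is attained.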

Relevance: 80.00%

Abstract:

Part 10: Sustainability and Trust

Relevance: 50.00%

Abstract:

The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to the oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. The development of new, efficient methods is therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we mainly present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio-)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. One of the methods, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but worked very well for the attempted high-dimensional models involving more than 20 chemical species. The instability of reaction networks may lead to oscillatory behaviour; we therefore investigate some criteria for their stability using convex coordinates and quantifier-elimination techniques. We also study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore bio-chemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
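A minimal sketch in Python/SymPy (not the HoCoQ or HoCaT algorithms developed in the thesis) of the Hopf-bifurcation condition for a small reaction system with symbolic rate constants, using the classical Brusselator as a stand-in example:

# Symbolic Hopf-bifurcation condition for the Brusselator with symbolic
# rate parameters a and b (illustrative example, not from the thesis).
import sympy as sp

x, y, a, b = sp.symbols("x y a b", positive=True)

# Brusselator rate equations.
f = sp.Matrix([a - (b + 1) * x + x**2 * y,
               b * x - x**2 * y])

# Unique positive fixed point.
fixed_point = {x: a, y: b / a}

J = f.jacobian(sp.Matrix([x, y])).subs(fixed_point)
trace_J = sp.simplify(J.trace())
det_J = sp.simplify(J.det())

# In two dimensions a Hopf bifurcation occurs where trace(J) = 0 with
# det(J) > 0, i.e. the eigenvalues form a purely imaginary pair.
hopf_b = sp.solve(sp.Eq(trace_J, 0), b)
print("trace(J) =", trace_J)   # b - 1 - a**2
print("det(J)   =", det_J)     # a**2  (> 0 for a > 0)
print("Hopf at b =", hopf_b)   # [a**2 + 1]

The thesis targets much larger networks with constraints, where such direct eigenvalue-based conditions become impractical and the convex-coordinate and tropical-geometry methods described above are needed.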