962 results for Schwinger Variational Principle


Relevance: 20.00%

Abstract:

Numerous works have been conducted on modelling basic compliant elements such as wire beams, and closed-form analytical models of most basic compliant elements have been well developed. However, the modelling of complex compliant mechanisms is still challenging. This paper proposes a constraint-force-based (CFB) modelling approach for compliant mechanisms, with a particular emphasis on modelling complex compliant mechanisms. The proposed CFB modelling approach can be regarded as an improved free-body-diagram (FBD) based modelling approach, as well as a development of the screw-theory-based design approach. A compliant mechanism can be decomposed into rigid stages and compliant modules. A compliant module offers elastic forces due to its deformation; such elastic forces are regarded as variable constraint forces in the CFB modelling approach. Additionally, the CFB modelling approach defines external forces applied on a compliant mechanism as constant constraint forces. If a compliant mechanism is at static equilibrium, all the rigid stages are also at static equilibrium under the influence of the variable and constant constraint forces. Therefore, the constraint-force equilibrium equations for all the rigid stages can be obtained, and the analytical model of the compliant mechanism can be derived from these equilibrium equations. The CFB modelling approach can model a compliant mechanism both linearly and nonlinearly, can obtain the displacements of any points on the rigid stages, and allows external forces to be exerted at any positions on the rigid stages. Compared with the FBD-based modelling approach, the CFB modelling approach does not need to identify the possible deformed configuration of a complex compliant mechanism in order to obtain the geometric compatibility conditions and the force equilibrium equations. Additionally, the mathematical expressions in the CFB approach have an easily understood physical meaning. Using the CFB modelling approach, the variable constraint forces of three compliant modules (a wire beam, a four-beam compliant module and an eight-beam compliant module) are derived in this paper. Based on these variable constraint forces, the linear and non-linear models of a decoupled XYZ compliant parallel mechanism are derived and verified by FEA simulations and experimental tests.
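To make the equilibrium idea concrete, the following schematic equation states the static equilibrium of one rigid stage i under both kinds of constraint forces; the wrench notation here is an illustrative assumption, not the paper's own:

$$ \sum_{j \in C(i)} \mathbf{w}^{\mathrm{var}}_{ij}(\boldsymbol{\delta}_j) \;+\; \sum_{k} \mathbf{w}^{\mathrm{const}}_{ik} \;=\; \mathbf{0}, $$

where w^var_ij is the variable constraint force exerted on stage i by compliant module j, a function of that module's deformation δ_j, and w^const_ik are the constant constraint forces (external loads) acting on the stage. Writing one such equation per rigid stage and solving the coupled system yields the analytical model described above.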

Relevance: 20.00%

Abstract:

This work examines independence in the Canadian justice system using an approach adapted from new legal realist scholarship called 'dynamic realism'. This approach proposes that issues in law must be considered in relation to their recursive and simultaneous development with historic, social and political events. Such events describe 'law in action' and more holistically demonstrate principles like independence, rule of law and access to justice. My dynamic realist analysis of independence in the justice system employs a range of methodological tools and approaches from the social sciences, including historical and historiographical study; public administration, policy and institutional analysis; an empirical component; and constitutional, statutory-interpretation and jurisprudential analysis. In my view, principles like independence represent aspirational ideals in law which can be better understood by examining how they manifest in legal culture and in the legal system. This examination focuses on the principle and practice of independence for both lawyers and judges in the justice system, but highlights the independence of the Bar. It considers the interrelation between lawyer independence and the ongoing refinement of judicial independence in Canadian law. It also considers the independence of both the Bar and the Judiciary in the context of the administration of justice, and practically illustrates the interaction between these principles through a case study of a specific aspect of the court system. This work also focuses on recent developments in the principle of Bar independence and its relation to an emerging school of professionalism scholarship in Canada. The work concludes by describing the principle of independence as both conditional and dynamic, but rooted in a unitary concept for both lawyers and judges. In short, independence can be defined as the impartiality, neutrality and autonomy of legal decision-makers in the justice system to apply, protect and improve the law for what has become its primary normative purpose: facilitating access to justice. While both the independence of the Bar and of the Judiciary are required to support access to independent courts, some recent developments suggest that the practical interactions between independence and access need to be the subject of further research, to better account for both the principles and the practicalities of the Canadian justice system.

Relevance: 20.00%

Abstract:

Quantitative methods can help us understand how underlying attributes contribute to movement patterns. Applying principal components analysis (PCA) to whole-body motion data may provide an objective, data-driven method to identify unique and statistically important movement patterns. Therefore, the primary purpose of this study was to determine if athletes' movement patterns can be differentiated based on skill level or sport played using PCA. Motion capture data from 542 athletes performing three sport-screening movements (i.e. bird-dog, drop jump, T-balance) were analyzed. A PCA-based pattern recognition technique was used to analyze the data. Prior to analyzing the effects of skill level or sport on movement patterns, methodological considerations related to the motion-analysis reference coordinate system were assessed. All analyses were addressed as case studies. In the first case study, referencing motion data to a global (lab-based) coordinate system, compared to a local (segment-based) coordinate system, affected the ability to interpret important movement features. In the second case study, where the interpretability of PCs was assessed when data were referenced to a stationary versus a moving segment-based coordinate system, PCs were more interpretable when data were referenced to a stationary coordinate system for both the bird-dog and T-balance tasks. As a result of the findings from case studies 1 and 2, only stationary segment-based coordinate systems were used in case studies 3 and 4. During the bird-dog task, elite athletes had significantly lower scores than recreational athletes for principal component (PC) 1. For the T-balance movement, elite athletes had significantly lower scores than recreational athletes for PC 2. In both analyses the lower scores in elite athletes represented a greater range of motion. Finally, case study 4 examined differences in the movement patterns of athletes who competed in different sports, and significant differences in technique were detected during the bird-dog task. Through these case studies, this thesis highlights the feasibility of applying PCA as a movement pattern recognition technique in athletes. Future research can build on this proof-of-principle work to develop robust quantitative methods to help us better understand how underlying attributes (e.g. height, sex, ability, injury history, training type) contribute to performance.
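As a small illustration of the PCA-based pattern recognition technique referred to above, the sketch below computes principal component scores from flattened motion waveforms and compares two groups on PC 1. The data layout, preprocessing and group split are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: PC scores from flattened whole-body motion data.
# Shapes and the elite/recreational split are hypothetical placeholders.
import numpy as np

def pca_scores(X, n_components=3):
    """Project athletes (rows of X) onto the first principal components."""
    Xc = X - X.mean(axis=0)                    # centre each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # per-athlete PC scores
    explained = s[:n_components] ** 2 / np.sum(s ** 2)
    return scores, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(542, 300))                # 542 athletes, toy feature vectors
scores, explained = pca_scores(X)
elite, recreational = scores[:200, 0], scores[200:, 0]   # hypothetical split
print(explained, elite.mean(), recreational.mean())
```

Group differences in the scores on a given component can then be tested statistically; in the study, lower PC 1 or PC 2 scores in elite athletes were interpreted as a greater range of motion.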

Relevance: 20.00%

Abstract:

Legal certainty, a feature of the rule of law, constitutes a requirement for the operational necessities of market interactions. But the compatibility of the principle of legal certainty with ideals such as liberalism and the free market economy must not lead to the hasty conclusion that the principle of legal certainty is therefore compatible with, and tantamount to, the principle of economic efficiency. We intend to analyse the efficiency rationale of an important general principle of EU law: the principle of legal certainty. In this paper, we assert that not only does the EU legal certainty principle encapsulate an efficiency rationale, but, most importantly, it has been interpreted by the ECJ as such. The economic perspective on the principle of legal certainty in the European context has, so far, never been adopted. Hence, we intend to fill this gap and propose the representation of the principle of legal certainty as a principle of economic efficiency. After having deciphered the principle of legal certainty from a law and economics perspective (1), we delve into the jurisprudence of the ECJ in order to examine the judicial reasoning of the Court, as this reasoning proves the relevance of the proposed representation (2). Finally, we conclude in light of the findings of the paper (3).

Relevance: 20.00%

Abstract:

The process of constituency boundary revision in Ireland, designed to satisfy what is perceived as a rigid requirement that a uniform deputy-population ratio be maintained across constituencies, has traditionally consumed a great deal of the time of politicians and officials. For almost two decades after a High Court ruling in 1961, the process was a political one, was highly contentious, and was marked by serious allegations of ministerial gerrymandering. The introduction in 1979 of constituency commissions made up of officials neutralised, for the most part, charges that the system had become too politicised, but it continued the process of micro-management of constituency boundaries. This article suggests that the continuing problems caused by this system – notably, the permanently changing nature of constituency boundaries and resulting difficulties of geographical identification – could be resolved by reversion to the procedure that is normal in proportional representation systems: periodic post-census allocation of seats to constituencies whose boundaries are based on those of recognised local government units and which are stable over time. This reform, replacing the principle of redistricting by the principle of reapportionment, would result in more recognisable constituencies, more predictable boundary trajectories over time, and a more efficient, fairer, and speedier process of revision.
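To illustrate what periodic post-census reapportionment involves computationally, the sketch below allocates seats to fixed constituencies in proportion to population using the largest-remainder (Hare quota) rule. The choice of rule is an illustrative assumption; the article argues for reapportionment as a procedure but does not prescribe a particular allocation formula.

```python
# Toy post-census reapportionment: seats follow population, boundaries stay fixed.
# The largest-remainder (Hare quota) rule here is an illustrative assumption.
def largest_remainder(populations, total_seats):
    quota = sum(populations) / total_seats
    base = [int(p // quota) for p in populations]
    remainders = [p / quota - b for p, b in zip(populations, base)]
    # hand the leftover seats to the largest fractional remainders
    leftover = total_seats - sum(base)
    for i in sorted(range(len(populations)),
                    key=lambda i: remainders[i], reverse=True)[:leftover]:
        base[i] += 1
    return base

# e.g. three stable constituencies sharing 10 seats after a census
print(largest_remainder([120_000, 90_000, 45_000], 10))  # -> [5, 3, 2]
```

Because only the seat counts change between censuses, constituency boundaries remain recognisable and boundary trajectories predictable, as the article argues.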

Relevance: 20.00%

Abstract:

We consider a parametric semilinear Dirichlet problem driven by the Laplacian plus an indefinite, unbounded potential, and with a reaction of superdiffusive type. Using variational and truncation techniques, we show that there exists a critical parameter value λ∗ > 0 such that for all λ > λ∗ the problem has at least two positive solutions, for λ = λ∗ the problem has at least one positive solution, and no positive solutions exist when λ ∈ (0, λ∗). Also, we show that for λ ≥ λ∗ the problem has a smallest positive solution.
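For orientation, problems of this type are usually written in the following schematic form; the precise hypotheses on the potential and the reaction are those of the paper, and the notation here is an illustrative assumption:

$$ -\Delta u(z) + \xi(z)\,u(z) = \lambda f(z, u(z)) \ \ \text{in } \Omega, \qquad u\big|_{\partial\Omega} = 0, \quad u > 0,\ \lambda > 0, $$

with ξ an indefinite, unbounded potential and f(z, ·) a superdiffusive reaction. The theorem then gives a complete bifurcation-type description of the set of positive solutions as λ crosses the critical value λ∗.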

Relevance: 20.00%

Abstract:

We consider a (p,q)-equation (1 < q < p, p ≥ 2) with a parametric concave term and a (p−1)-linear perturbation. We show that the problem has five nontrivial smooth solutions: four of constant sign and the fifth nodal. When q = 2 (i.e., a (p,2)-equation), we show that the problem has six nontrivial smooth solutions, but we do not specify the sign of the sixth solution. Our approach uses variational methods together with truncation and comparison techniques and Morse theory.
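For orientation, a (p,q)-equation is driven by the sum of a p-Laplacian and a q-Laplacian. One common formulation of such a problem (the concrete right-hand side below is an illustrative assumption, not necessarily the paper's) is

$$ -\Delta_p u - \Delta_q u = \lambda |u|^{\tau-2}u + f(z,u) \ \ \text{in } \Omega, \qquad \Delta_r u = \operatorname{div}\big(|\nabla u|^{r-2}\nabla u\big), $$

where λ|u|^{τ−2}u with 1 < τ < q is the parametric concave term and the perturbation f(z, ·) is (p−1)-linear near ±∞.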

Relevance: 20.00%

Abstract:

The main goal of this paper is to extend the generalized variational problem of Herglotz type to the more general context of the Euclidean sphere S^n. Motivated by classical results on Euclidean spaces, we derive the generalized Euler-Lagrange equation for the corresponding variational problem defined on the Riemannian manifold S^n. Moreover, the problem is formulated from an optimal control point of view, and it is proved that the Euler-Lagrange equation can be obtained from the Hamiltonian equations. The geodesic problem on spheres is also highlighted as a particular case of the generalized Herglotz problem.
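For reference, in the classical Euclidean setting which the paper generalizes, the Herglotz variational problem seeks trajectories x(·) extremizing z(b), where z is defined by the differential equation

$$ \dot z(t) = L\big(t, x(t), \dot x(t), z(t)\big), \qquad z(a) = z_a, $$

and its generalized Euler-Lagrange equation reads

$$ \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot x} + \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot x} = 0, $$

which reduces to the classical Euler-Lagrange equation when L does not depend on z.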

Relevance: 20.00%

Abstract:

From an ethical perspective, clinical research involving humans is only acceptable if it involves the potential for benefit. Various characteristics can be applied to differentiate research benefit. Benefit is often categorized as direct or indirect, whereby indirect benefit may be further differentiated into collective benefit or benefit for society, excluding or including the trial patient in the long term. Ethical guidelines, such as the Declaration of Helsinki in its latest version, do not clearly favor a particular type of benefit.

Relevance: 20.00%

Abstract:

The second generation of large-scale interferometric gravitational wave (GW) detectors will be limited by quantum noise over a wide frequency range in their detection band. Further sensitivity improvements for future upgrades or new detectors beyond the second generation motivate the development of measurement schemes to mitigate the impact of quantum noise in these instruments. Two strands of development are being pursued to reach this goal, focusing both on modifications of the well-established Michelson detector configuration and on the development of different detector topologies. In this paper, we present the design of the world's first Sagnac speed meter (SSM) interferometer, which is currently being constructed at the University of Glasgow. With this proof-of-principle experiment we aim to demonstrate the theoretically predicted lower quantum noise of a Sagnac interferometer compared to an equivalent Michelson interferometer, in order to qualify the SSM for further research towards implementation in a future-generation large-scale GW detector, such as the planned Einstein Telescope observatory.

Relevance: 20.00%

Abstract:

The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, namely Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the aim of the thesis is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into three strands: the investigation of the principle itself, the investigation of its application to articulatory effort, and the investigation of its application to phonological processes. The structure of the thesis reflects this division of its broad aims: the first part presents the theoretical background (Chapters One and Two), the second part deals with the application of least effort to articulatory effort (Chapters Three and Four), and the third part discusses the principle of least effort in phonological processes (Chapters Five and Six).

Chapter One serves as an introduction, examining various aspects of the principle of least effort, such as its history, literature, operation and motivation. It overviews the various names which denote least effort, explains the origins of the principle and reviews the literature devoted to it in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle with evidence from the linguistic field (low-level phonetic processes and language universals) and from non-linguistic ones (physics, biology, psychology and the cognitive sciences), arguing that the principle governs human behavior and choices.

Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis' title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories.

Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature and its various understandings in chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology.

Because the aim of the thesis is to investigate the principle of least effort on the basis of external evidence, Chapters Four and Six provide such evidence: three experiments and text-message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of the experiments. It describes the experiments in terms of their predictions and methodology. In particular, it discusses the adopted measure of effort, established by means of the effort parameters, as well as their status. The statistical methods of the experiments are also clarified. The chapter reports the results of the experiments, presents them graphically, and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers' preferences with reference to articulatory effort (Figures 30, 31).

Since the thesis investigates the principle of least effort in phonological processes, Chapter Five is devoted to the discussion of phonological processes in Natural Phonology (NP). The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology and the order in which they apply. The chapter characterizes the semantic properties of processes and overviews Luschützky's (1997) contribution to NP with respect to the typology of processes and the incorporation of articulatory gestures into the concept of a process.

Chapter Six investigates phonological processes. In particular, it addresses the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, an extended version of the principle of least effort.

Chapter Seven concludes the thesis with a list of the concepts discussed, enumerates the proposals made in discussing them, and presents some questions for future research which emerged in the course of the investigation. The chapter also specifies the extent to which the investigation of the principle of least effort constitutes a meaningful contribution to phonology.

Relevance: 20.00%

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model itself. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements; numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code.

In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30 171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation.

Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and the DA scheme: an external control program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines which facilitate input and output. Besides being simple, the approach can be employed even if the two were written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply having the control program wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, since at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal basis for a benchmark platform for testing DA methods.
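The file-based, non-intrusive coupling described above can be sketched as a simple control loop. This is a minimal illustration under assumed conventions: the executable names, file naming and restart protocol are hypothetical placeholders, not the thesis' actual interface.

```python
# Minimal sketch of non-intrusive, file-based coupling of a model and a DA
# procedure. All file and executable names are hypothetical placeholders.
import subprocess

N_CYCLES = 10          # assimilation cycles (one per measurement batch)
ENSEMBLE_SIZE = 50     # illustrative ensemble size

for cycle in range(N_CYCLES):
    # 1. Propagate every ensemble member with the unmodified model executable.
    #    Members run in parallel; the control program waits for all of them
    #    before assimilation, as described in the text.
    members = [subprocess.Popen(["./model", f"state_{m:03d}.txt"])
               for m in range(ENSEMBLE_SIZE)]
    for p in members:
        p.wait()

    # 2. Run the DA step: it reads the propagated state files plus the
    #    observation file, and overwrites the state files with the analysis
    #    (resampling the ensemble, as VEnKF does at every measurement time).
    subprocess.run(["./venkf_analysis", f"obs_{cycle:03d}.txt"], check=True)
```

Because the model and the DA procedure communicate only through files, either side can be replaced by a program written in a different language without touching the other.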
The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for seven days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, they could not be well matched.

The use of multiple automatic stations with real-time data is important to avoid the time-sparsity problem; combined with DA, this will help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, and the existence of an ensemble size limit for performance, lead to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is applied together with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.

Relevance: 20.00%

Abstract:

The aim of this paper is to exhibit a necessary and sufficient condition of optimality for functionals depending on fractional integrals and derivatives, on indefinite integrals, and on the presence of time delay. We illustrate this with one example, in which we find the minimizer analytically.
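Schematically, a functional of the kind considered can be written as follows; the concrete operators and hypotheses are those of the paper, and the notation here (Riemann-Liouville fractional operators, fixed delay τ > 0) is an illustrative assumption:

$$ J[x] = \int_a^b L\Big(t,\; x(t),\; {}_aD_t^{\alpha} x(t),\; {}_aI_t^{\beta} x(t),\; z(t),\; x(t-\tau)\Big)\, dt, \qquad z(t) = \int_a^t x(s)\, ds, $$

where ${}_aD_t^{\alpha}$ and ${}_aI_t^{\beta}$ denote the fractional derivative and integral of orders α and β, z(t) is the indefinite integral of the state, and x(t − τ) is the delayed state.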