912 results for Lagrange multiplier principle
Abstract:
Numerous works have been conducted on modelling basic compliant elements such as wire beams, and closed-form analytical models of most basic compliant elements have been well developed. However, the modelling of complex compliant mechanisms remains challenging. This paper proposes a constraint-force-based (CFB) modelling approach for compliant mechanisms, with a particular emphasis on complex compliant mechanisms. The proposed CFB modelling approach can be regarded as an improved free-body-diagram (FBD) based modelling approach, and as an extension of the screw-theory-based design approach. A compliant mechanism can be decomposed into rigid stages and compliant modules. A compliant module offers elastic forces due to its deformation; such elastic forces are treated as variable constraint forces in the CFB modelling approach. Additionally, the CFB modelling approach treats external forces applied to a compliant mechanism as constant constraint forces. If a compliant mechanism is in static equilibrium, all of its rigid stages are also in static equilibrium under the combined action of the variable and constant constraint forces. Therefore, constraint-force equilibrium equations can be written for every rigid stage, and the analytical model of the compliant mechanism can be derived from these equations. The CFB modelling approach can model a compliant mechanism both linearly and nonlinearly, can obtain the displacement of any point on the rigid stages, and allows external forces to be exerted at any position on the rigid stages. Compared with the FBD-based modelling approach, the CFB modelling approach does not need to identify the possible deformed configuration of a complex compliant mechanism in order to obtain the geometric compatibility conditions and the force equilibrium equations. Additionally, the mathematical expressions in the CFB approach have an easily understood physical meaning. Using the CFB modelling approach, the variable constraint forces of three compliant modules, a wire beam, a four-beam compliant module and an eight-beam compliant module, have been derived in this paper. Based on these variable constraint forces, the linear and nonlinear models of a decoupled XYZ compliant parallel mechanism are derived, and verified by FEA simulations and experimental tests.
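For orientation, the equilibrium statement at the core of the approach can be sketched as follows (the notation is chosen here for illustration and is not taken from the paper): for each rigid stage i, the variable constraint forces contributed by its attached compliant modules and the constant constraint forces (external loads) acting on it must balance.

```latex
% Minimal sketch of the constraint-force equilibrium idea; symbols are illustrative.
% For rigid stage i, with compliant modules j attached to it and external loads k:
\sum_{j} \mathbf{F}^{\mathrm{var}}_{ij}(\mathbf{x}) \;+\; \sum_{k} \mathbf{F}^{\mathrm{const}}_{ik} \;=\; \mathbf{0},
\qquad
\mathbf{F}^{\mathrm{var}}_{ij}(\mathbf{x}) \;\approx\; -\,\mathbf{K}_{ij}\,\mathbf{x}_{i}
\quad \text{(linearised case)}
```

Here x_i collects the displacements of stage i and K_ij is the stiffness of module j as seen by that stage; solving the coupled set of such equations for all stages gives a linear model, while retaining the full nonlinear dependence of the variable constraint forces on the displacements gives a nonlinear model.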
Abstract:
This work examines independence in the Canadian justice system using an approach adapted from new legal realist scholarship called ‘dynamic realism’. This approach proposes that issues in law must be considered in relation to their recursive and simultaneous development with historic, social and political events. Such events describe ‘law in action’ and more holistically demonstrate principles like independence, rule of law and access to justice. My dynamic realist analysis of independence in the justice system employs a range of methodological tools and approaches from the social sciences, including historical and historiographical study; public administration, policy and institutional analysis; an empirical component; and constitutional, statutory interpretation and jurisprudential analysis. In my view, principles like independence represent aspirational ideals in law which can be better understood by examining how they manifest in legal culture and in the legal system. This examination focuses on the principle and practice of independence for both lawyers and judges in the justice system, but highlights the independence of the Bar. It considers the inter-relation between lawyer independence and the ongoing refinement of judicial independence in Canadian law. It also considers both the independence of the Bar and of the Judiciary in the context of the administration of justice, and practically illustrates the interaction between these principles through a case study of a specific aspect of the court system. This work also focuses on recent developments in the principle of Bar independence and its relation to an emerging school of professionalism scholarship in Canada. The work concludes by describing the principle of independence as both conditional and dynamic, but rooted in a unitary concept for both lawyers and judges. In short, independence can be defined as the impartiality, neutrality and autonomy of legal decision-makers in the justice system to apply, protect and improve the law for what has become its primary normative purpose: facilitating access to justice. While both the independence of the Bar and of the Judiciary are required to support access to independent courts, some recent developments suggest that the practical interactions between independence and access need to be the subject of further research, to better account for both the principles and the practicalities of the Canadian justice system.
Abstract:
Quantitative methods can help us understand how underlying attributes contribute to movement patterns. Applying principal components analysis (PCA) to whole-body motion data may provide an objective, data-driven method to identify unique and statistically important movement patterns. Therefore, the primary purpose of this study was to determine if athletes’ movement patterns can be differentiated based on skill level or sport played using PCA. Motion capture data from 542 athletes performing three sport-screening movements (i.e. bird-dog, drop jump, T-balance) were analyzed. A PCA-based pattern recognition technique was used to analyze the data. Prior to analyzing the effects of skill level or sport on movement patterns, methodological considerations related to the motion analysis reference coordinate system were assessed. All analyses were addressed as case studies. In the first case study, referencing motion data to a global (lab-based) coordinate system rather than a local (segment-based) coordinate system affected the ability to interpret important movement features. In the second case study, where the interpretability of PCs was assessed when data were referenced to a stationary versus a moving segment-based coordinate system, PCs were more interpretable when data were referenced to a stationary coordinate system for both the bird-dog and T-balance tasks. As a result of the findings from case studies 1 and 2, only stationary segment-based coordinate systems were used in case studies 3 and 4. During the bird-dog task, elite athletes had significantly lower scores than recreational athletes for principal component (PC) 1. For the T-balance movement, elite athletes had significantly lower scores than recreational athletes for PC 2. In both analyses, the lower scores in elite athletes represented a greater range of motion. Finally, case study 4 examined differences in the movement patterns of athletes who competed in different sports, and significant differences in technique were detected during the bird-dog task. Through these case studies, this thesis highlights the feasibility of applying PCA as a movement pattern recognition technique in athletes. Future research can build on this proof-of-principle work to develop robust quantitative methods to help us better understand how underlying attributes (e.g. height, sex, ability, injury history, training type) contribute to performance.
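As a rough illustration of the PCA-based pattern comparison described above (this is not the thesis code; the data layout, group labels and the use of scikit-learn/SciPy are assumptions made here), a minimal sketch might look like this:

```python
# Minimal sketch: compare PC scores between two athlete groups.
# `trials` is assumed to be an (n_athletes, n_features) array in which each row holds one
# athlete's time-normalized, concatenated trajectories for a task, expressed in the chosen
# (e.g. stationary segment-based) coordinate system; `is_elite` is a boolean group label array.
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

def compare_pc_scores(trials: np.ndarray, is_elite: np.ndarray, n_components: int = 5):
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(trials)          # PC scores, one row per athlete
    results = []
    for pc in range(n_components):
        t, p = stats.ttest_ind(scores[is_elite, pc], scores[~is_elite, pc])
        results.append((pc + 1, pca.explained_variance_ratio_[pc], t, p))
    return results  # (PC index, variance explained, t-statistic, p-value)
```

Group differences on individual PC scores could then be tested in the spirit of case studies 3 and 4, with the sign and shape of each PC loading vector used to interpret what movement feature a score difference represents.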
Abstract:
Legal certainty, a feature of the rule of law, constitutes a requirement for the operational necessities of market interactions. But the compatibility of the principle of legal certainty with ideals such as liberalism and the free market economy must not lead to the hasty conclusion that the principle of legal certainty is therefore compatible with, and tantamount to, the principle of economic efficiency. We intend to analyse the efficiency rationale of an important general principle of EU law: the principle of legal certainty. In this paper, we shall assert that not only does the EU legal certainty principle encapsulate an efficiency rationale but, most importantly, it has been interpreted by the ECJ as such. An economic perspective on the principle of legal certainty in the European context has, so far, never been adopted. Hence, we intend to fill this gap and propose a representation of the principle of legal certainty as a principle of economic efficiency. After having deciphered the principle of legal certainty from a law and economics perspective (1), we shall delve into the jurisprudence of the ECJ in order to examine the judicial reasoning of the Court, as this reasoning proves the relevance of the proposed representation (2). Finally, we conclude in light of the findings of this paper (3).
Abstract:
The process of constituency boundary revision in Ireland, designed to satisfy what is perceived as a rigid requirement that a uniform deputy-population ratio be maintained across constituencies, has traditionally consumed a great deal of the time of politicians and officials. For almost two decades after a High Court ruling in 1961, the process was a political one, was highly contentious, and was marked by serious allegations of ministerial gerrymandering. The introduction in 1979 of constituency commissions made up of officials neutralised, for the most part, charges that the system had become too politicised, but it continued the process of micro-management of constituency boundaries. This article suggests that the continuing problems caused by this system – notably, the permanently changing nature of constituency boundaries and resulting difficulties of geographical identification – could be resolved by reversion to the procedure that is normal in proportional representation systems: periodic post-census allocation of seats to constituencies whose boundaries are based on those of recognised local government units and which are stable over time. This reform, replacing the principle of redistricting by the principle of reapportionment, would result in more recognisable constituencies, more predictable boundary trajectories over time, and a more efficient, fairer, and speedier process of revision.
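The article itself does not prescribe an allocation formula, but the reapportionment idea can be made concrete with a toy sketch; the largest-remainder (Hare quota) method and the unit names and populations below are illustrative assumptions, not taken from the text.

```python
# Illustrative only: periodic post-census seat allocation to fixed local-government units,
# using a largest-remainder (Hare quota) rule. Units and populations are made up.
def largest_remainder(populations: dict[str, int], total_seats: int) -> dict[str, int]:
    quota = sum(populations.values()) / total_seats
    seats = {unit: int(pop // quota) for unit, pop in populations.items()}
    remainders = {unit: pop / quota - seats[unit] for unit, pop in populations.items()}
    leftover = total_seats - sum(seats.values())
    for unit in sorted(remainders, key=remainders.get, reverse=True)[:leftover]:
        seats[unit] += 1
    return seats

# Example: allocate 20 seats among three hypothetical units after a census.
print(largest_remainder({"CountyA": 250_000, "CountyB": 180_000, "CountyC": 95_000}, 20))
```

Under such a scheme only the seat counts change after each census; the unit boundaries themselves stay fixed, which is the stability and recognisability the article argues for.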
Abstract:
An Euler-Lagrange particle tracking model, developed for simulating fire atmosphere/sprinkler spray interactions, is described. Full details of the model along with the approximations made and restrictions applying are presented. Errors commonly found in previous formulations of the source terms used in this two-phase approach are described and corrected. In order to demonstrate the capabilities of the model it is applied to the simulation of a fire in a long corridor containing a sprinkler. The simulation presented is three-dimensional and transient and considers mass, momentum and energy transfer between the gaseous atmosphere and injected liquid droplets.
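For orientation, a standard textbook form of the Lagrangian droplet momentum equation is sketched below; this shows the generic Euler-Lagrange coupling, not necessarily the exact formulation used in the paper.

```latex
% Generic momentum equation for a tracked droplet of mass m_d, velocity u_d,
% moving through gas of density \rho_g and velocity u_g (drag + gravity):
m_d \frac{d\mathbf{u}_d}{dt}
  = \tfrac{1}{2}\,\rho_g\, C_D A_d\,
    \lvert \mathbf{u}_g - \mathbf{u}_d \rvert\,(\mathbf{u}_g - \mathbf{u}_d)
  + m_d\,\mathbf{g}
```

By Newton's third law, the drag force (together with the mass, momentum and energy carried by evaporated liquid) must appear with opposite sign as a source term in the gas-phase conservation equations for the cell containing the droplet; it is this source-term bookkeeping that the paper reports as having been formulated incorrectly in earlier work.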
Abstract:
From an ethical perspective, clinical research involving humans is only acceptable if it involves the potential for benefit. Various characteristics can be applied to differentiate research benefit. Benefit is often categorized as direct or indirect, whereby indirect benefit might be further differentiated into collective benefit or benefit for society, excluding or including the trial patient in the long term. Ethical guidelines, such as the latest version of the Declaration of Helsinki, do not clearly favor a particular type of benefit.
Abstract:
The Herglotz problem is a generalization of the fundamental problem of the calculus of variations. In this paper, we consider a class of non-differentiable functions, where the dynamics is described by a scale derivative. Necessary conditions are derived to determine the optimal solution of the problem. Related problems are also considered, such as transversality conditions, the multi-dimensional case, higher-order derivatives, and the case of several independent variables.
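For context, the classical (integer-order) Herglotz problem and its generalized Euler-Lagrange condition are recalled below; the paper's scale-derivative setting generalizes this, and the notation here is the standard one rather than the paper's.

```latex
% Classical Herglotz variational problem: extremize z(b), where z solves
\dot z(t) = L\bigl(t,\, x(t),\, \dot x(t),\, z(t)\bigr), \qquad z(a) = z_a,
% subject to boundary conditions on x. The generalized Euler-Lagrange equation is
\frac{\partial L}{\partial x}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot x}
  + \frac{\partial L}{\partial z}\,\frac{\partial L}{\partial \dot x} = 0 .
% When L does not depend on z, this reduces to the usual Euler-Lagrange equation.
```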
Abstract:
The second generation of large-scale interferometric gravitational wave (GW) detectors will be limited by quantum noise over a wide frequency range in their detection band. Further sensitivity improvements for future upgrades or new detectors beyond the second generation motivate the development of measurement schemes to mitigate the impact of quantum noise in these instruments. Two strands of development are being pursued to reach this goal, focusing both on modifications of the well-established Michelson detector configuration and on the development of different detector topologies. In this paper, we present the design of the world's first Sagnac speed meter (SSM) interferometer, which is currently being constructed at the University of Glasgow. With this proof-of-principle experiment we aim to demonstrate the theoretically predicted lower quantum noise of a Sagnac interferometer compared to an equivalent Michelson interferometer, and to qualify the SSM for further research towards an implementation in a future-generation large-scale GW detector, such as the planned Einstein Telescope observatory.
Abstract:
The thesis is an investigation of the principle of least effort (Zipf 1949 [1972]). The principle is simple (all effort should be least) and universal (it governs the totality of human behavior). Since the principle is also functional, the thesis adopts a functional theory of language as its theoretical framework, i.e. Natural Linguistics. The explanatory system of Natural Linguistics posits that higher principles govern preferences, which, in turn, manifest themselves as concrete, specific processes in a given language. Therefore, the thesis’ aim is to investigate the principle of least effort on the basis of external evidence from English. The investigation falls into the following three strands: the investigation of the principle itself, the investigation of its application in articulatory effort, and the investigation of its application in phonological processes. The structure of the thesis reflects the division of its broad aims. The first part of the thesis presents its theoretical background (Chapter One and Chapter Two), the second part deals with the application of least effort in articulatory effort (Chapter Three and Chapter Four), whereas the third part discusses the principle of least effort in phonological processes (Chapter Five and Chapter Six). Chapter One serves as an introduction, examining various aspects of the principle of least effort such as its history, literature, operation and motivation. It overviews the various names which denote least effort, explains the origins of the principle, and reviews the literature devoted to the principle of least effort in chronological order. The chapter also discusses the nature and operation of the principle, providing numerous examples of the principle at work. It emphasizes the universal character of the principle, drawing examples from the linguistic field (low-level phonetic processes and language universals) and from non-linguistic fields (physics, biology, psychology and cognitive sciences), showing that the principle governs human behavior and choices. Chapter Two provides the theoretical background of the thesis in terms of its theoretical framework and discusses the terms used in the thesis’ title, i.e. hierarchy and preference. It justifies the selection of Natural Linguistics as the thesis’ theoretical framework by outlining its major assumptions and demonstrating its explanatory power. As far as the concepts of hierarchy and preference are concerned, the chapter provides their definitions and reviews their various understandings via decision theories and linguistic preference-based theories. Since the thesis investigates the principle of least effort in language and speech, Chapter Three considers the articulatory aspect of effort. It reviews the notion of easy and difficult sounds and discusses the concept of articulatory effort, overviewing its literature as well as its various understandings in chronological fashion. The chapter also presents the concept of articulatory gestures within the framework of Articulatory Phonology. The thesis’ aim is to investigate the principle of least effort on the basis of external evidence; therefore, Chapters Four and Six provide evidence in the form of three experiments and text message studies (Chapter Four) and phonological processes in English (Chapter Six). Chapter Four contains evidence for the principle of least effort in articulation on the basis of the experiments. It describes the experiments in terms of their predictions and methodology. In particular, it discusses the adopted measure of effort, established by means of the effort parameters, as well as their status. The statistical methods of the experiments are also clarified. The chapter reports the results of the experiments, presenting them graphically, and discusses their relation to the tested predictions. Chapter Four establishes a hierarchy of speakers’ preferences with reference to articulatory effort (Figures 30, 31). The thesis investigates the principle of least effort in phonological processes; thus, Chapter Five is devoted to the discussion of phonological processes in Natural Phonology. The chapter explains the general nature and motivation of processes as well as the development of processes in child language. It also discusses the organization of processes in terms of their typology as well as the order in which processes apply. The chapter characterizes the semantic properties of processes and overviews Luschützky’s (1997) contribution to NP with respect to processes, in terms of their typology and the incorporation of articulatory gestures in the concept of a process. Chapter Six investigates phonological processes. In particular, it identifies the issues of lenition/fortition definition and process typology by presenting the current approaches to process definitions and their typology. Since the chapter concludes that no coherent definition of lenition/fortition exists, it develops alternative lenition/fortition definitions. The chapter also revises the typology of phonological processes under effort management, which is an extended version of the principle of least effort. Chapter Seven concludes the thesis with a list of the concepts discussed in the thesis, enumerates the proposals made by the thesis in discussing these concepts, and presents some questions for future research which have emerged in the course of the investigation. The chapter also specifies the extent to which the investigation of the principle of least effort is a meaningful contribution to phonology.
Abstract:
In this work, we generalize the Principle of Least Action proposed by Riewe for non-conservative systems containing linear dissipative forces that depend on time derivatives of any order. The generalized Action is constructed from Lagrangian functions depending on derivatives of integer and fractional order. Unlike other formulations, the use of fractional derivatives allows the construction of physical Lagrangians for non-conservative systems. A Lagrangian is said to be physical if it provides physically consistent relations for the momentum and the Hamiltonian of the system. In this generalized Principle of Least Action, the equations of motion are obtained from the Euler-Lagrange equation by taking the limit in which the time interval defining the Action goes to zero. Finally, as an example of application, we formulate for the first time a physical Lagrangian for the problem of the accelerated point charge.
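For orientation, a fractional action and its Euler-Lagrange condition can be sketched in the generic Riewe/Agrawal-type form below; the single fractional order α and the notation are chosen here for illustration and are not the paper's specific construction.

```latex
% Action built from a Lagrangian with integer- and fractional-order derivatives:
S[q] = \int_a^b L\bigl(t,\, q(t),\, \dot q(t),\, {}_aD_t^{\alpha} q(t)\bigr)\, dt
% Stationarity of S yields a fractional Euler-Lagrange equation of the form
\frac{\partial L}{\partial q}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot q}
  + {}_tD_b^{\alpha}\,\frac{\partial L}{\partial\, {}_aD_t^{\alpha} q} = 0 ,
% where {}_aD_t^{\alpha} and {}_tD_b^{\alpha} denote left and right fractional derivatives.
```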
Abstract:
Simplifying the Einstein field equation by assuming the cosmological principle yields a set of differential equations which governs the dynamics of the universe as described in the cosmological standard model. The cosmological principle assumes that space appears the same everywhere and in every direction, and the principle has earned its position as a fundamental assumption in cosmology by being compatible with the observations of the 20th century. It was not until the current century that observations on cosmological scales showed significant deviations from isotropy and homogeneity, implying a violation of the principle. Among these observations are the inconsistency between local and non-local Hubble parameter evaluations, the baryon acoustic features of the Lyman-α forest, and the anomalies of the cosmic microwave background radiation. As a consequence, cosmological models beyond the cosmological principle have been studied extensively; after all, the principle is a hypothesis and, as such, should be tested regularly, like any other assumption in physics. In this thesis, the effects of inhomogeneity and anisotropy, arising as a consequence of discarding the cosmological principle, are investigated. The geometry and matter content of the universe become more cumbersome, and the resulting effects on the Einstein field equation are introduced. The cosmological standard model and its issues, both fundamental and observational, are presented. Particular attention is given to the local Hubble parameter, supernova explosion, baryon acoustic oscillation and cosmic microwave background observations, as well as to the cosmological constant problems. Resolutions that have been explored and proposed by relaxing the cosmological principle are reviewed. The thesis concludes with a summary and an outlook on the included research papers.
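For reference (standard textbook material, not specific to this thesis), the set of differential equations obtained by imposing the cosmological principle on the Einstein field equation is the Friedmann system for the FLRW scale factor a(t):

```latex
% Friedmann equations for a homogeneous, isotropic (FLRW) universe:
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot a}{a}
  = -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
% Dropping the cosmological principle replaces this FLRW system with inhomogeneous
% and/or anisotropic metrics, and hence with more involved field equations.
```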
Abstract:
As an immune-inspired algorithm, the Dendritic Cell Algorithm (DCA) produces promising performance in the field of anomaly detection. This paper presents the application of the DCA to a standard data set, the KDD 99 data set. The results of different implementation versions of the DCA, including an antigen multiplier and moving time windows, are reported. The real-valued Negative Selection Algorithm (NSA) using constant-sized detectors and the C4.5 decision tree algorithm are used to conduct a baseline comparison. The results suggest that the DCA is applicable to the KDD 99 data set, and that the antigen multiplier and moving time windows have the same effect on the DCA for this particular data set. The real-valued NSA with constant-sized detectors is not applicable to the data set, while the C4.5 decision tree algorithm provides a benchmark of classification performance for this data set.
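The abstract does not reproduce the DCA's internals; purely for intuition, the sketch below shows the kind of weighted signal accumulation and migration-threshold behaviour a dendritic cell performs. The weights, threshold and class layout are invented here and are not the configuration used in the paper.

```python
# Intuition-only sketch of a dendritic cell's signal processing in a DCA-style scheme.
SIGNAL_WEIGHTS = {                 # (PAMP, danger, safe) weights -- assumed values
    "csm": (2.0, 1.0, 1.5),        # co-stimulatory signal: drives migration
    "context": (2.0, 1.0, -3.0),   # >0 leans "mature" (anomalous), <0 "semi-mature"
}

class DendriticCell:
    def __init__(self, migration_threshold: float) -> None:
        self.migration_threshold = migration_threshold
        self.csm = 0.0
        self.context = 0.0
        self.antigens: list[str] = []

    def sample(self, antigen: str, pamp: float, danger: float, safe: float) -> bool:
        """Accumulate weighted input signals for one antigen; True once the cell migrates."""
        self.antigens.append(antigen)
        for name, (wp, wd, ws) in SIGNAL_WEIGHTS.items():
            value = (wp * pamp + wd * danger + ws * safe) / (abs(wp) + abs(wd) + abs(ws))
            setattr(self, name, getattr(self, name) + value)
        return self.csm >= self.migration_threshold

# On migration, the cell presents its sampled antigens in a "mature" (context > 0) or
# "semi-mature" context; an antigen type whose fraction of mature presentations exceeds
# a chosen threshold is flagged as anomalous.
```

An antigen multiplier, as mentioned above, simply presents each antigen to several cells, and moving time windows bound how long signal evidence is accumulated before a decision is made.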