22 results for Degrees of Freedom
in Aston University Research Archive
Abstract:
New media platforms have changed the media landscape forever, as they have altered our perceptions of the limits of communication and the reception of information. Platforms such as Facebook, Twitter and WhatsApp enable individuals to circumvent the traditional mass media, converging audience and producer to create millions of ‘citizen journalists’. This new breed of journalist uses these platforms not only as a way of receiving news, but of instantaneously, and often spontaneously, expressing opinions and venting and sharing emotions, thoughts and feelings. They are liberated from cultural and physical restraints, such as time, space and location, and they are not constrained by factors that impact upon the traditional media, such as editorial control, ownership or political bias, or the pressures of generating commercial revenue. A consequence of the way in which these platforms have become ingrained within our social culture is that habits, conventions and social norms that were once informal and transitory manifestations of social life are now infused within their use. What were casual and ephemeral actions and/or acts of expression, such as conversing with friends or colleagues, swapping or displaying pictures, or exchanging thoughts that were once kept private, or perhaps shared with a select few, have now become formalised and potentially permanent, on view for the world to see. Meanwhile, ‘traditional’ journalists and media outlets are also utilising new media, as it allows them to react, and disseminate news, instantaneously, within a hyper-competitive marketplace. However, in a world where we are saturated not only by citizen journalists but by traditional media outlets, offering access to news and opinion twenty-four hours a day via multiple new media platforms, there is increased pressure to ‘break’ news fast and first.
This paper will argue that new media, and the culture and environment it has created, for citizen journalists, traditional journalists and the media generally, has altered our perceptions of the limits and boundaries of freedom of expression dramatically, and that the corollary to this seismic shift is the impact on the notion of privacy and private life. Consequently, this paper will examine what a reasonable expectation of privacy may now mean, in a new media world.
Abstract:
This paper proposes a novel dc-dc converter topology to achieve an ultrahigh step-up ratio while maintaining a high conversion efficiency. It adopts a three-degree-of-freedom approach in the circuit design. The proposed converter also combines the features of modularity, electrical isolation, soft switching and low voltage stress on the switching devices, and is thus considered an improved topology over traditional dc-dc converters. New control strategies, including two-section output voltage control and cell idle control, are also developed to improve the converter performance. With the cell idle control, the secondary winding inductance of the idle module is bypassed to decrease its power loss. A 400-W dc-dc converter is prototyped and tested to verify the proposed techniques, in addition to a simulation study. The step-up conversion ratio can reach 1:14 with a peak efficiency of 94%, and the proposed techniques can be applied to a wide range of high-voltage, high-power distributed generation and dc power transmission applications.
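As a back-of-the-envelope check of the figures quoted above (the 28.6 V input is an assumed value chosen so that a 1:14 ratio yields roughly 400 V; it is not taken from the paper):

```python
# Illustrative arithmetic for the prototype figures quoted above.
# Assumption: a nominal 28.6 V input; at a 1:14 step-up ratio this gives
# ~400 V out, and at 94% peak efficiency delivering 400 W requires the
# input power computed below.

def boost_output_voltage(v_in: float, ratio: float) -> float:
    """Ideal output voltage for a given step-up ratio (losses ignored)."""
    return v_in * ratio

def input_power(p_out: float, efficiency: float) -> float:
    """Input power required to deliver p_out at a given efficiency."""
    return p_out / efficiency

v_out = boost_output_voltage(28.6, 14)   # ~400.4 V
p_in = input_power(400.0, 0.94)          # ~425.5 W drawn at peak efficiency
```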
Abstract:
The kinematic mapping of a rigid open-link manipulator is a homomorphism between Lie groups. The homomorphism has solution groups that act on an inverse kinematic solution element. A canonical representation of solution group operators that act on a solution element of three and seven degree-of-freedom (dof) dextrous manipulators is determined by geometric analysis. Seven canonical solution groups are determined for the seven-dof Robotics Research K-1207 and Hollerbach arms. The solution element of a dextrous manipulator is a collection of trivial fibre bundles with solution fibres homotopic to the torus. If fibre solutions are parameterised by a scalar, a direct inverse function that maps the scalar and Cartesian base space coordinates to solution element fibre coordinates may be defined. A direct inverse parameterisation of a solution element may be approximated by a local linear map generated by an inverse augmented Jacobian correction of a linear interpolation. The action of canonical solution group operators on a local linear approximation of the solution element of inverse kinematics of dextrous manipulators generates cyclical solutions. The solution representation is proposed as a model of inverse kinematic transformations in primate nervous systems. Simultaneous calibration of a composition of stereo-camera and manipulator kinematic models is under-determined by equi-output parameter groups in the composition of stereo-camera and Denavit-Hartenberg (DH) models. An error measure for simultaneous calibration of a composition of models is derived and parameter subsets with no equi-output groups are determined by numerical experiments to simultaneously calibrate the composition of homogeneous or pan-tilt stereo-camera with DH models.
For acceleration of exact Newton second-order re-calibration of DH parameters after a sequential calibration of stereo-camera and DH parameters, an optimal numerical evaluation of DH matrix first order and second order error derivatives with respect to a re-calibration error function is derived, implemented and tested. A distributed object environment for point and click image-based tele-command of manipulators and stereo-cameras is specified and implemented that supports rapid prototyping of numerical experiments in distributed system control. The environment is validated by a hierarchical k-fold cross validated calibration to Cartesian space of a radial basis function regression correction of an affine stereo model. Basic design and performance requirements are defined for scalable virtual micro-kernels that broker inter-Java-virtual-machine remote method invocations between components of secure manageable fault-tolerant open distributed agile Total Quality Managed ISO 9000+ conformant Just in Time manufacturing systems.
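The "inverse Jacobian correction" idea above can be illustrated on a much smaller system. The sketch below is a hypothetical two-link planar arm, not the seven-dof manipulators analysed in the thesis: an approximate joint solution is refined by Newton steps using the analytic inverse Jacobian.

```python
import math

# Toy analogue of Jacobian-corrected inverse kinematics for a planar
# two-link arm with unit link lengths (illustrative values). An
# approximate joint solution is corrected toward an exact one by
# inverse-Jacobian Newton steps.

L1, L2 = 1.0, 1.0

def forward(q1, q2):
    """End-effector position of the two-link arm."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def jacobian(q1, q2):
    """2x2 analytic Jacobian d(x, y)/d(q1, q2)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def newton_refine(q, target, iters=20):
    """Correct an approximate joint solution q by inverse-Jacobian steps."""
    q1, q2 = q
    for _ in range(iters):
        x, y = forward(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        (a, b), (c, d) = jacobian(q1, q2)
        det = a * d - b * c
        if abs(det) < 1e-12:        # near a singular configuration
            break
        # Solve J * dq = e by 2x2 Cramer's rule.
        q1 += (d * ex - b * ey) / det
        q2 += (a * ey - c * ex) / det
    return q1, q2

# Refine a crude initial guess toward the reachable target (1.2, 0.8).
q1, q2 = newton_refine((0.5, 0.5), (1.2, 0.8))
```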
Abstract:
Experiments combining different groups or factors are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions; information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA is a more important indicator of the ‘power’ of the experiment than simply the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, in a factorial experiment, it is important to define the design of the experiment in detail because this determines the appropriate type of ANOVA. We will discuss some of the common variations of factorial ANOVA in future statnotes. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
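The replication guideline above can be made concrete. For a fully randomised two-factor factorial with a and b levels and n replicates per cell, the error term has ab(n - 1) DF; the helper below (illustrative, not part of the statnote itself) finds the smallest n meeting the 15-DF target.

```python
# Error degrees of freedom for a fully randomised a x b factorial ANOVA
# with n replicates per cell: DF_error = a * b * (n - 1).

def error_df(a: int, b: int, n: int) -> int:
    """Error DF of an a x b factorial with n replicates per cell."""
    return a * b * (n - 1)

def replicates_for_df(a: int, b: int, target_df: int = 15) -> int:
    """Smallest n per cell giving at least target_df error DF."""
    n = 2
    while error_df(a, b, n) < target_df:
        n += 1
    return n

# A 2 x 3 factorial with 3 replicates per cell has only 12 error DF;
# 4 replicates per cell are needed to meet the 15-DF guideline.
print(error_df(2, 3, 3), replicates_for_df(2, 3))
```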
Abstract:
Experiments combining different groups or factors and which use ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions; information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the sample size required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA is a more important indicator of the ‘power’ of the experiment than the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for the error term of the ANOVA testing effects of particular interest. Finally, it is important to always consider the design of the experiment because this determines the appropriate ANOVA to use. Hence, it is necessary to be able to identify the different forms of ANOVA appropriate to different experimental designs and to recognise when a design is a split-plot or incorporates a repeated measure. If there is any doubt about which ANOVA to use in a specific circumstance, the researcher should seek advice from a statistician with experience of research in applied microbiology.
Abstract:
In recent years, the European Union has come to view cyber security, and in particular, cyber crime as one of the most relevant challenges to the completion of its Area of Freedom, Security and Justice. Given European societies’ increased reliance on borderless and decentralized information technologies, this sector of activity has been identified as an easy target for actors such as organised criminals, hacktivists or terrorist networks. Such analysis has been accompanied by EU calls to step up the fight against unlawful online activities, namely through increased cooperation among law enforcement authorities (both national and extra-communitarian), the approximation of legislations, and public-private partnerships. Although EU initiatives in this field have, so far, been characterized by a lack of interconnection and an integrated strategy, there has been, since the mid-2000s, an attempt to develop a more cohesive and coordinated policy. An important part of this policy is connected to the activities of Europol, which have come to assume a central role in the coordination of intelligence gathering and analysis of cyber crime. The European Cybercrime Center (EC3), which will become operational within Europol in January 2013, is regarded, in particular, as a focal point of the EU’s fight against this phenomenon. Bearing this background in mind, the present article wishes to understand the role of Europol in the development of a European policy to counter the illegal use of the internet. The article proposes to reach this objective by analyzing, through the theoretical lenses of experimental governance, the evolution of this agency’s activities in the area of cyber crime and cyber security, its positioning as an expert in the field, and the consequences for the way this policy is currently developing and is expected to develop in the near future.
Abstract:
The main aim of this thesis is to investigate the application of methods of differential geometry to the constraint analysis of relativistic high spin field theories. As a starting point the coordinate dependent descriptions of the Lagrangian and Dirac-Bergmann constraint algorithms are reviewed for general second order systems. These two algorithms are then respectively employed to analyse the constraint structure of the massive spin-1 Proca field from the Lagrangian and Hamiltonian viewpoints. As an example of a coupled field theoretic system the constraint analysis of the massive Rarita-Schwinger spin-3/2 field coupled to an external electromagnetic field is then reviewed in terms of the coordinate dependent Dirac-Bergmann algorithm for first order systems. The standard Velo-Zwanziger and Johnson-Sudarshan inconsistencies that this coupled system seemingly suffers from are then discussed in light of this full constraint analysis and it is found that both these pathologies degenerate to a field-induced loss of degrees of freedom. A description of the geometrical version of the Dirac-Bergmann algorithm developed by Gotay, Nester and Hinds begins the geometrical examination of high spin field theories. This geometric constraint algorithm is then applied to the free Proca field and to two Proca field couplings; the first of which is the minimal coupling to an external electromagnetic field whilst the second is the coupling to an external symmetric tensor field. The onset of acausality in this latter coupled case is then considered in relation to the geometric constraint algorithm.
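For the free Proca field discussed above, the Dirac-Bergmann chain is short and standard; as a sketch (conventions and signs may differ from those used in the thesis):

```latex
% Standard Dirac-Bergmann chain for the free massive spin-1 Proca field.
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
              + \tfrac{1}{2} m^{2} A_{\mu} A^{\mu},
\qquad
\pi^{\mu} = \frac{\partial \mathcal{L}}{\partial \dot{A}_{\mu}} = F^{\mu 0}.
% Primary constraint (no velocity \dot{A}_0 appears in \mathcal{L}):
\phi_{1} = \pi^{0} \approx 0.
% Consistency \dot{\phi}_{1} \approx 0 generates the secondary constraint:
\phi_{2} = \partial_{i} \pi^{i} + m^{2} A^{0} \approx 0.
% Since \{\phi_{1}(x), \phi_{2}(y)\} \propto m^{2} \neq 0, both constraints
% are second class and the chain terminates. The phase-space count is then
% (8 - 2)/2 = 3 physical degrees of freedom, as expected for a massive
% spin-1 field; it is this counting that field couplings can disturb.
```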
Abstract:
This thesis explores the processes of team innovation. It utilises two studies, an organisationally based pilot and an experimental study, to examine and identify aspects of teams' behaviours that are important for a successful innovative outcome. The pilot study, based in two automotive manufacturers, involved the collection of team members' experiences through semi-structured interviews, and identified a number of factors that affected teams' innovative performance. These included: the application of ideative and dissemination processes; the importance of good team relationships, especially those of a more informal nature, in facilitating information and ideative processes; the role of external linkages in enhancing the quality and radicality of innovations; and the potential attenuation of innovative ideas by time deadlines. This study revealed a number of key team behaviours that may be important in successful innovation outcomes. These included: goal setting, idea generation and development, external contact, task and personal information exchange, leadership, positive feedback and resource deployment. These behaviours formed the basis of a coding system used in the second part of the research. Building on the results from the field-based research, an experimental study was undertaken to examine the behavioural differences between three groups of sixteen teams undertaking an innovative task to produce an anti-drugs poster. They were randomly assigned to one of three innovation category conditions suggested by King and Anderson (1990): emergent, imported and imposed. These conditions determined the teams' level of access to additional information on previously successful campaigns and the degree of freedom they had with regard to the design of the poster. In addition, a further experimental condition was imposed on half of the teams per category, which involved a formal time deadline for task completion.
The teams were videotaped for the duration of their innovation and their behaviours analysed and coded in five main aspects: ideation, external focus, goal setting, interpersonal, directive and resource-related activities. A panel of experts, utilising five scales developed from West and Anderson's (1996) innovation outcome measures, assessed the teams' outputs. ANOVAs and repeated-measures ANOVAs were deployed to identify whether there were significant differences between the different conditions. The results indicated that there were some behavioural differences between the categories and that behavioural changes were identified over the duration of the task. The results, however, revealed a complex picture and suggested limited support for three distinctive innovation categories. There were many differences in behaviours, but rarely between more than two of the categories. A main finding was the impact that different levels of constraint had in changing teams' focus of attention. For example, emergent teams were found to use both their own team and external resources, whilst those who could import information about other successful campaigns were likely to concentrate outside the team and pay limited attention to the internal resources available within the team. In contrast, those operating under task constraints, with aspects of the task imposed onto them, were more likely to attend to internal team resources and pay limited attention to the external world. As indicated by the earlier field study, time deadlines did significantly change teams' behaviour, reducing ideative and information exchange behaviours. The model shows an important behavioural progression related to innovative teams. This progression involved the teams' openness initially to external sources, and then to the intra-team environment. Premature closure on the final idea before the mid-point was found to have a detrimental impact on teams' innovation.
Ideative behaviour per se was not significant for innovation outcome; instead, the development of intra-team support and trust emerged as crucial. Analysis of variance revealed some limited differentiation between the behaviours of teams operating under the aforementioned three innovation categories. There were also distinct detrimental differences in the behaviour of those operating under a time deadline. Overall, the study identified the complex interrelationships of team behaviours and outcomes, and between teams and their context.
Abstract:
A Jeffcott rotor consists of a disc at the centre of an axle supported at its ends by bearings. A bolted Jeffcott rotor is formed by two discs, each with a shaft on one side. The discs are held together by spring-loaded bolts near the outer edge. When the rotor turns there is a tendency for the discs to separate on one side. This effect is more marked if the rotor is unbalanced, especially at resonance speeds. The equations of motion of the system have been developed with four degrees of freedom to include the rotor and bearing movements in the respective axes. These equations, which include non-linear terms caused by the rotor opening, are subjected to external forces such as those from rotor imbalance. A simulation model based on these equations was created using SIMULINK. An experimental test rig was used to characterise the dynamic features. The rotor discs open at a lateral displacement of the rotor of 0.8 mm; this is the threshold value used to model the change from high to low stiffness. The experimental results, which measure the vibration amplitude of the rotor, show the dynamic behaviour of the bolted rotor due to imbalance. Close agreement of the experimental and theoretical results from time histories, waterfall plots, pseudo-phase plots and rotor orbit plots indicated the validity of the model and the existence of the non-linear jump phenomenon.
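A single-degree-of-freedom caricature of the opening behaviour described above can be written down directly: the restoring stiffness switches from a high to a low value once the displacement passes the 0.8 mm threshold. All parameter values below are illustrative, not those of the rig or the four-dof SIMULINK model.

```python
# Bilinear-stiffness single-dof sketch of the bolted-rotor opening:
# high stiffness while the discs stay closed (|x| <= 0.8 mm), low
# stiffness once they open. Mass and stiffness values are illustrative.

M = 10.0          # kg, assumed rotor mass
K_HIGH = 4.0e6    # N/m, closed (high-stiffness) state, assumed
K_LOW = 1.0e6     # N/m, open (low-stiffness) state, assumed
X_OPEN = 0.8e-3   # m, opening threshold quoted in the abstract

def stiffness(x: float) -> float:
    """Piecewise stiffness: high until the discs open, then low."""
    return K_HIGH if abs(x) <= X_OPEN else K_LOW

def simulate(x0: float, steps: int = 20000, dt: float = 1e-5) -> float:
    """Free vibration from displacement x0 via semi-implicit Euler;
    returns the peak displacement reached."""
    x, v = x0, 0.0
    peak = abs(x)
    for _ in range(steps):
        a = -stiffness(x) * x / M   # restoring acceleration
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Released inside the threshold the response stays in the linear,
# high-stiffness regime; released beyond it the restoring force softens,
# the ingredient behind the jump phenomenon observed on the rig.
peak = simulate(0.5e-3)
```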
Abstract:
The analysis and prediction of the dynamic behaviour of structural components plays an important role in modern engineering design. In this work, the so-called "mixed" finite element models based on Reissner's variational principle are applied to the solution of free and forced vibration problems, for beam and plate structures. The mixed beam models are obtained by using elements of various shape functions ranging from simple linear to complex cubic and quadratic functions. The elements were in general capable of predicting the natural frequencies and dynamic responses with good accuracy. An isoparametric quadrilateral element with 8 nodes was developed for application to thin plate problems. The element has 32 degrees of freedom (one deflection, two bending and one twisting moment per node), which is suitable for discretization of plates with arbitrary geometry. A linear isoparametric element and two non-conforming displacement elements (4-node and 8-node quadrilateral) were extended to the solution of dynamic problems. An auto-mesh generation program was used to facilitate the preparation of the input data required by the 8-node quadrilateral elements of mixed and displacement type. Numerical examples were solved using both the mixed beam and plate elements for predicting a structure's natural frequencies and dynamic response to a variety of forcing functions. The solutions were compared with the available analytical and displacement model solutions. The mixed elements developed have been found to have significant advantages over the conventional displacement elements in the solution of plate-type problems. A dramatic saving in computational time is possible without any loss in solution accuracy. With beam-type problems, there appears to be no significant advantage in using mixed models.
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment in the main is philosophical and theoretical and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself. This is because as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the minute provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations which probably results in reducing the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the expertise of design of the data processing practitioners. In view of this understanding an alternative methodological design framework which is both consistent with systems approach and the role of a package in its likely context is proposed. The proposition is based upon an extension of the identified concept of the hierarchy of holons* which facilitates the examination of the complex relationships of a package with its two principal environments. 
First, the user characteristics and his decision making practice and procedures; implying an examination of the user's M.I.S. network. Second, the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as discussion advances around the central theme of a compatible M.I.S., software and model design. This leads to the formation of the alternative package architecture that is based upon the design of a number of independent, self-contained small parts. Such is believed to constitute the nucleus around which not only packages can be more effectively designed, but is also applicable to many man-machine systems design.
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone, but that they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered to be a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time, and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and also when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
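One of the "conventional methods" that temporal finite element solutions are typically compared against is step-by-step Newmark integration. Below is a minimal average-acceleration Newmark sketch for a single-degree-of-freedom oscillator; it is illustrative only, not code from the thesis Appendix.

```python
import math

# Average-acceleration Newmark integration (beta = 1/4, gamma = 1/2) of
# m*u'' + c*u' + k*u = f(t), a standard conventional time-stepping scheme.

def newmark(m, c, k, f, u0, v0, dt, steps, beta=0.25, gamma=0.5):
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m   # initial acceleration from the ODE
    out = [u]
    for i in range(1, steps + 1):
        t = i * dt
        # Effective stiffness and effective load for the current step.
        k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        p_eff = (f(t)
                 + m * (u / (beta * dt ** 2) + v / (beta * dt)
                        + (1 / (2 * beta) - 1) * a)
                 + c * (gamma * u / (beta * dt)
                        + (gamma / beta - 1) * v
                        + dt * (gamma / (2 * beta) - 1) * a))
        u_new = p_eff / k_eff
        a_new = ((u_new - u) / (beta * dt ** 2)
                 - v / (beta * dt) - (1 / (2 * beta) - 1) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        out.append(u)
    return out

# Free vibration of an undamped oscillator with period 1 s: u(t) = cos(2*pi*t).
resp = newmark(m=1.0, c=0.0, k=4 * math.pi ** 2, f=lambda t: 0.0,
               u0=1.0, v0=0.0, dt=0.01, steps=100)
```

The average-acceleration variant is unconditionally stable for linear problems, which is why it is a common baseline in such comparisons.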
Abstract:
Prior to the development of a production standard control system for ML Aviation's plan-symmetric remotely piloted helicopter system, SPRITE, optimum solutions to technical requirements had yet to be found for some aspects of the work. This thesis describes an industrial project where solutions to real problems have been provided within strict timescale constraints. Use has been made of published material wherever appropriate; new solutions have been contributed where none existed previously. A lack of clearly defined user requirements from potential Remotely Piloted Air Vehicle (RPAV) system users is identified. A simulation package is defined to enable the RPAV designer to progress with air vehicle and control system design, development and evaluation studies and to assist the user to investigate his applications. The theoretical basis of this simulation package is developed, including Co-axial Contra-rotating Twin Rotor (CCTR), six degrees of freedom motion, fuselage aerodynamics and sensor and control system models. A compatible system of equations is derived for modelling a miniature plan-symmetric helicopter. Rigorous searches revealed a lack of CCTR models, based on closed form expressions to obviate integration along the rotor blade, for stabilisation and navigation studies through simulation. An economic CCTR simulation model is developed and validated by comparison with published work and practical tests. Confusion in published work between attitude and Euler angles is clarified. The implementation of the theory into a high integrity software package is discussed. Use is made of a novel technique basing the dynamic adjustment of the integration time step size on error assessment. Simulation outputs for control system stability verification, cross-coupling of motion between control channels, and air vehicle response to demands and horizontal wind gusts are presented.
Keywords: Contra-Rotating Twin Rotor; Flight Control System; Remotely Piloted Plan-Symmetric Helicopter; Simulation; Six Degrees of Freedom Motion
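The error-based step-size adjustment mentioned in the abstract can be illustrated generically by step doubling: compare one full step with two half steps, use the difference as a local error estimate, and grow or shrink the step accordingly. The sketch below uses RK4 on a scalar ODE; it is a generic technique in the same spirit, not the thesis's actual scheme.

```python
# Adaptive time-stepping by step doubling: one full RK4 step is compared
# with two half steps, and their difference drives the step-size update.

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t0, y0, t_end, h=0.1, tol=1e-8):
    """Integrate y' = f(t, y) with error-based step-size adjustment."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_half - y_full)   # local error estimate
        if err <= tol:
            t, y = t + h, y_half     # accept the more accurate result
            if err < tol / 32:
                h *= 2.0             # error comfortably small: grow step
        else:
            h *= 0.5                 # error too large: retry smaller step
    return y

# Exponential decay y' = -y from y(0) = 1 over [0, 1]: y(1) = e^{-1}.
y1 = integrate(lambda t, y: -y, 0.0, 1.0, 1.0)
```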
Abstract:
The finite element process is now used almost routinely as a tool of engineering analysis. From early days, a significant effort has been devoted to developing simple, cost-effective elements which adequately fulfill accuracy requirements. In this thesis we describe the development and application of one of the simplest elements available for the statics and dynamics of axisymmetric shells. A semi-analytic truncated cone stiffness element has been formulated and implemented in a computer code: it has two nodes with five degrees of freedom at each node, circumferential variations in the displacement field are described in terms of trigonometric series, transverse shear is accommodated by means of a penalty function and rotary inertia is allowed for. The element has been tested in a variety of applications in the statics and dynamics of axisymmetric shells subjected to a variety of boundary conditions. Good results have been obtained for thin and thick shell cases.
Abstract:
The purpose of the present study is to make a comparative evaluation of the legislative controls on unfairness in the context of B2B, B2C and small business contracts in England and Brazil. This work will focus on the examination of statutes and relevant case law which regulate exemption clauses and terms on the basis of their ‘unfairness’. The approach adopted by legislation and courts towards the above controls may vary according to the type of contract. Business contracts are more in line with the classical model of contract law, according to which parties are presumably equal and able to negotiate terms. As a consequence, interventions should be avoided for the sake of freedom of contract, even if harmful terms were included. Such an assumption of equality, however, is not applicable to small business contracts because SMEs are often in a disadvantageous position in relation to their larger counterparties. Consumer contracts, in their turn, are more closely regulated by the English and Brazilian legal systems, which recognise that vulnerable parties are more exposed to unfair terms imposed by the stronger party as a result of the inequality of bargaining power. For this reason those jurisdictions adopted a more interventionist approach to provide special protection to consumers, which is in line with the modern law of contract. The contribution of this work therefore consists of comparing how the law of England and Brazil tackles the problem of ‘unfairness’ in the above types of contracts. This study will examine the differences and similarities between the rules and concepts of both jurisdictions, with reference to the law of their respective regional trade agreements (the EU and Mercosul). Moreover, it will identify existing issues in the English and Brazilian legislation and recommend lessons that one system can learn from the other.