936 results for Two degrees of freedom


Relevance:

100.00%

Publisher:

Abstract:

Pertinent domestic and international developments involving tensions affecting religious or belief communities have increasingly occupied the international law agenda. Those who generate and thus shape international law jurisprudence are in the process of seeking answers to these questions. The need to reconceptualize the right to freedom of religion or belief therefore continues, as demands on that right challenge the boundaries of religious freedom in national and international law. This thesis aims to contribute to this process of “re-conceptualization” by exploring the notion of the collective dimension of freedom of religion or belief, with a view to advancing the protection of the right. Turkey provides a useful test case: its domestic legislation can be assessed against international standards, while at the same time lessons can be drawn for improving the standard of international review of the protection of the collective dimension of freedom of religion or belief. The right to freedom of religion or belief, as enshrined in international human rights documents, is unique in its formulation in that it provides protection for the enjoyment of the rights “in community with others”.1 It cannot be realized in isolation; it crosses categories of human rights, with aspects that are individual, aspects that can be effectively realized only in an organized community of individuals, and aspects that belong to the field of economic, social and cultural rights, such as those related to religious or moral education. 
This study centers on two primary questions: first, what is the scope and nature of the protection afforded to the collective dimension of freedom of religion or belief in international law, and, second, how does the protection of the collective dimension of freedom of religion or belief in Turkey compare and contrast with international standards? Section I explores the notion of the collective dimension of freedom of religion or belief and the scope of its protection in international law, with particular reference to the right to acquire legal personality and the autonomy of religious/belief communities. In Section II, the case study on Turkey constitutes the applied part of the thesis; here, the protection of the collective dimension is assessed with a view to evaluating the compliance of Turkish legislation and practice with international norms, and to identifying how the standard of international review of the collective dimension of freedom of religion or belief can be improved.

Relevance:

100.00%

Publisher:

Abstract:

In experimental studies, several parameters, such as body weight, body mass index, adiposity index, and dual-energy X-ray absorptiometry, have commonly been used to demonstrate increased adiposity and to investigate the mechanisms underlying obesity and sedentary lifestyles. However, these investigations have neither classified the degree of adiposity nor defined adiposity categories for rats, such as normal, overweight, and obese. The aim of this study was to characterize the degree of adiposity in rats fed a high-fat diet using cluster analysis and to create adiposity intervals in an experimental model of obesity. Thirty-day-old male Wistar rats were fed a normal (n=41) or a high-fat (n=43) diet for 15 weeks. Obesity was defined based on the adiposity index, and the degree of adiposity was evaluated using cluster analysis. Cluster analysis allowed the rats to be classified into two groups (overweight and obese). The obese group displayed significantly higher total body fat and a higher adiposity index than the overweight group. No differences in systolic blood pressure or in nonesterified fatty acid, glucose, total cholesterol, or triglyceride levels were observed between the obese and overweight groups. The adiposity index of the obese group was positively correlated with final body weight, total body fat, and leptin levels. Despite the classification of sedentary rats into overweight and obese groups, it was not possible to identify differences in comorbidities between the two groups.
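As a minimal sketch of the classification step, a one-dimensional two-means clustering (a simple stand-in for the study's cluster analysis; a statistical package would normally be used) applied to hypothetical adiposity-index values:

```python
def two_means_1d(values, iters=50):
    """Simple 1-D k-means with k=2: split values into low/high clusters."""
    c = [min(values), max(values)]  # initialize centroids at the extremes
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # assign each value to the nearest centroid
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        # recompute centroids as group means (keep old centroid if group empty)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return groups  # (lower-adiposity group, higher-adiposity group)

# Hypothetical adiposity-index values (%) for high-fat-diet rats
adiposity = [6.1, 6.5, 7.0, 7.2, 9.8, 10.4, 11.0, 11.5]
overweight, obese = two_means_1d(adiposity)
```

The boundary between the two resulting clusters gives the kind of "adiposity interval" the study defines.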

Relevance:

100.00%

Publisher:

Abstract:

The Finnish IT service market can be described as being at a turning point. Clients are ever more interested in services delivered from offshore, but certain issues keep them cautious. There is a lack of knowledge about the implications that different degrees of offshoring have for service quality. Although there has been a significant amount of research related to both service quality and offshoring, several questions remain unanswered, terminology remains ambivalent and research findings are inconsistent. This study focuses on the intersection of these two fields. The purpose of the study is to learn more about service quality under different degrees of offshoring, while also contributing to narrowing the research gaps. The degree of offshoring can be divided into three delivery modes: onshore, collaboration and offshore. The study takes a mixed-method approach in which the quantitative and qualitative phases are executed sequentially. First, data were gathered from an incident management system. Resolution times under different degrees of offshoring were analyzed with Kruskal-Wallis and Jonckheere-Terpstra tests. In addition, compliance with the Service Level Agreement (SLA) under different degrees of offshoring was examined with cross tabulation. The findings from the quantitative analysis suggested that services with the offshore delivery mode perform best in terms of promptness and SLA compliance. However, several issues were found with the data, and for that reason the findings should be interpreted with caution. After the quantitative analysis, the study moved on to qualitative data collection and analysis. Four semi-structured interviews were held. The interviewees represented different organizational roles and had experience with different delivery modes. 
Several themes were covered in the interviews, including the concept of quality, the subjectivity or objectivity of service quality, expectations and prejudices towards offshore deliveries, quality produced in India, proactiveness of offshore resources, quality indicators and the scarcity of collaborative deliveries. Several conclusions can be drawn from the empirical research. Firstly, quality in the different delivery modes was found to be a contentious topic. Secondly, in the collaborative delivery covered in the study, the way tasks and resources are allocated seems to cause issues: on the one hand, inexperienced offshore resources are assigned to the delivery, and on the other hand, only routine tasks are assigned to those resources. This creates a self-reinforcing loop that results in low motivation, low ownership and high employee turnover offshore. Nevertheless, this issue is not characteristic of collaborative deliveries alone, but of the allocation of tasks and resources in general. Moreover, prejudices were identified as affecting perceived service quality in an unpredictable way. The research also demonstrated a need in the focal company for further data gathering and analysis.
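The SLA cross-tabulation step can be sketched as follows, with entirely hypothetical incident records (the real analysis used the company's incident management system data):

```python
from collections import Counter

# Hypothetical incident records: (delivery_mode, met_sla)
incidents = [
    ("onshore", True), ("onshore", False), ("onshore", True),
    ("collaboration", True), ("collaboration", False),
    ("offshore", True), ("offshore", True), ("offshore", True),
]

# Cross tabulation: counts of SLA outcomes per delivery mode
table = Counter((mode, met) for mode, met in incidents)
modes = ["onshore", "collaboration", "offshore"]
compliance = {m: table[(m, True)] / (table[(m, True)] + table[(m, False)])
              for m in modes}
```

With these made-up records the offshore mode shows the highest compliance rate, mirroring the direction of the study's quantitative finding.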

Relevance:

100.00%

Publisher:

Abstract:

This dissertation describes an approach for developing a real-time simulation of working mobile vehicles based on multibody modeling. The use of multibody modeling allows a comprehensive description of the constrained motion of the mechanical systems involved and permits solving the equations of motion in real time. By carefully selecting the multibody formulation method, it is possible to increase the accuracy of the multibody model while still solving the equations of motion in real time. In this study, a multibody procedure based on semi-recursive and augmented Lagrangian methods for real-time dynamic simulation is studied in detail. In the semi-recursive approach, a velocity transformation matrix is introduced to express the dependent coordinates in terms of relative (joint) coordinates, which reduces the number of generalized coordinates. The augmented Lagrangian method is based on the use of global coordinates, and constraints are accounted for using an iterative process. A multibody system can be modelled using either rigid or flexible bodies. When using flexible bodies, the system can be described using a floating frame of reference formulation. In this method, the deformation modes needed can be obtained from a finite element model. As the finite element model typically involves a large number of degrees of freedom, a reduced set of deformation modes can be obtained by employing model order reduction methods such as Guyan reduction, the Craig-Bampton method, and Krylov subspaces, as shown in this study. The constrained motion of working mobile vehicles is actuated by forces from hydraulic actuators. In this study, the hydraulic system is modeled using lumped fluid theory, in which the hydraulic circuit is divided into volumes. In this approach, the pressure wave propagation in the hoses and pipes is neglected. The contact modeling is divided into two stages: contact detection and contact response. 
Contact detection determines when and where contact occurs, and contact response provides the force acting at the collision point. The friction between tire and ground is modelled using the LuGre friction model, which describes the frictional force between two surfaces. Typically, the equations of motion are solved in full-matrix form, where the sparsity of the matrices is not exploited. Increasing the number of bodies and constraint equations leads to system matrices that are large and sparse in structure. To increase the computational efficiency, a technique for the solution of sparse matrices is proposed in this dissertation and its implementation is demonstrated. To assess the computational efficiency, the augmented Lagrangian and semi-recursive methods are implemented employing the sparse matrix technique. The results of a numerical example show that the proposed approach is applicable and produces appropriate results within the real-time period.
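Of the model order reduction methods mentioned above, Guyan (static) condensation is the simplest to illustrate. The sketch below applies it to a toy 3-DOF spring-chain stiffness matrix (an assumed example, not one of the dissertation's vehicle models), condensing the interior DOF onto the two retained "master" DOFs:

```python
import numpy as np

def guyan_reduce(K, master):
    """Guyan (static) condensation: keep the `master` DOFs, eliminate the rest.
    K_red = K_mm - K_ms @ inv(K_ss) @ K_sm
    """
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# Toy 3-DOF spring chain (unit stiffnesses, grounded at DOF 0),
# condensed to the two end DOFs
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
K_red = guyan_reduce(K, [0, 2])
```

For this chain the condensed matrix is exact for static loads: the two unit springs in series between the end DOFs yield an effective stiffness of 0.5, plus the ground spring at the first DOF.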

Relevance:

100.00%

Publisher:

Abstract:

The influence of peak-dose drug-induced dyskinesia (DID) on manual tracking (MT) was examined in 10 dyskinetic patients (DPD) and compared with 10 age/gender-matched non-dyskinetic patients (NDPD) and 10 healthy controls. Whole body movement (WBM) and MT were recorded with a 6-degrees-of-freedom magnetic motion tracker and forearm rotation sensors, respectively. Subjects were asked to match the length of a computer-generated line with a line controlled via wrist rotation. Results show that DPD patients had greater WBM displacement and velocity than the other groups. All groups displayed increased WBM from rest to MT, but only the DPD and NDPD patients demonstrated a significant increase in WBM displacement and velocity. In addition, DPD patients exhibited an excessive increase in WBM, suggesting overflow DID. When two distinct target pace segments were examined (FAST/SLOW), all groups had slight increases in WBM displacement and velocity from SLOW to FAST, but only DPD patients showed significantly increased WBM displacement and velocity from SLOW to FAST. Therefore, it can be suggested that overflow DID was further increased with increased task speed. DPD patients also showed significantly greater ERROR in matching target velocity, but no significant difference in ERROR in displacement, indicating that the significantly greater WBM displacement in the DPD group did not have a direct influence on tracking performance. Individual target and performance traces demonstrated this relatively good tracking performance, with the exception of distinct deviations from the target trace that occurred suddenly, followed by quick returns to the target, coincident in time with increased performance velocity. In addition, performance hand velocity was not correlated with WBM velocity in DPD patients, suggesting that the increased ERROR in velocity was not a direct result of WBM velocity. 
In conclusion, we propose that over-excitation of motor cortical areas, reported to be present in DPD patients, resulted in overflow DID during voluntary movement. Furthermore, we propose that the increased ERROR in velocity was the result of hypermetric voluntary movements also originating from the over-excitation of motor cortical areas.

Relevance:

100.00%

Publisher:

Abstract:

Background: Behavioral risk factors, notably physical inactivity, sedentary behavior, smoking, alcohol consumption and overweight, are the main modifiable causes of chronic diseases such as cancer, cardiovascular disease and diabetes. These risk factors also co-occur within individuals and lead to increased risks of morbidity and mortality. Although behavioral risk factors have been widely studied, the distribution, clustering patterns and determinants of multiple behavioral risk factors are poorly understood, especially among children and adolescents. Objectives: This thesis aims 1) to describe the prevalence and clustering patterns of multiple behavioral risk factors for chronic disease among Canadian children and adolescents; 2) to explore the individual, social and school-level correlates of multiple behavioral risk factors among Canadian children and adolescents; and 3) to assess, according to the study's conceptual model, the longitudinal influence of a set of distal variables (i.e., variables at an intermediate distance from the risk behaviors) of individual (self-esteem, sense of achievement), social (social relationships, parent/peer behaviors) and school (collective commitment to achievement, understanding of rules) type, as well as of ultimate variables (i.e., variables at a greater distance from the risk behaviors) of individual (personality traits, demographic characteristics), social (parents' socioeconomic characteristics) and school (school type, supportive environment, disciplinary climate) type, on the rate of occurrence of multiple behavioral risk factors among Canadian children and adolescents. 
Methods: Cross-sectional data (n = 4724) from cycle 4 (2000-2001) of the National Longitudinal Survey of Children and Youth (NLSCY) were used to describe the prevalence and clustering patterns of multiple behavioral risk factors among Canadian youth aged 10-17 years. Risk factor clustering was examined using a ratio of observed to expected cases. Ordinal logistic regression was used to explore the correlates of multiple behavioral risk factors in a cross-sectional sample (n = 1747) of Canadian youth aged 10-15 years from cycle 4 (2000-2001) of the NLSCY. Prospective data (n = 1135) from cycles 4 (2000-2001), 5 (2002-2003) and 6 (2004-2005) of the NLSCY were used to assess the longitudinal influence of the distal and ultimate variables (as described in the objectives above) on the rate of occurrence of multiple behavioral risk factors among Canadian youth aged 10-15 years; this analysis was performed using longitudinal Poisson models. Results: Sixty-five percent of Canadian youth reported having two or more behavioral risk factors, compared with only 10% of youth with no risk factor. The behavioral risk factors clustered in multiple combinations. Specifically, the simultaneous occurrence of all five risk factors was 120% higher than expected among boys (observed/expected (O/E) ratio = 2.20, 95% confidence interval (CI): 1.31-3.09) and 94% higher than expected among girls (O/E ratio = 1.94, 95% CI: 1.24-2.64). 
Age (odds ratio (OR) = 1.95, 95% CI: 1.21-3.13), having a parent who smokes (OR = 1.49, 95% CI: 1.09-2.03), reporting that most/all of one's peers used tobacco (OR = 7.31, 95% CI: 4.00-13.35) or drank alcohol (OR = 3.77, 95% CI: 2.18-6.53), and living in a single-parent family (OR = 1.94, 95% CI: 1.31-2.88) were positively associated with multiple risk behaviors. Youth with high self-esteem (OR = 0.92, 95% CI: 0.85-0.99) and youth with at least one parent with postsecondary education (OR = 0.58, 95% CI: 0.41-0.82) were less likely to have multiple behavioral risk factors. Finally, the distal social variables (parental and peer smoking, peer alcohol consumption) (log-likelihood ratio (LLR) = 187.86, degrees of freedom = 8, P < 0.001) and the distal individual variable (self-esteem) (LLR = 76.94, degrees of freedom = 4, P < 0.001) significantly influenced the rate of occurrence of multiple behavioral risk factors. The ultimate individual variables (age, sex, anxiety) and ultimate social variables (parental education, household income, family structure) had a less pronounced influence on the rate of co-occurrence of behavioral risk factors among youth. Conclusion: The results suggest that public health interventions should primarily target distal individual determinants (such as self-esteem) and distal social determinants (such as parental and peer smoking and peer alcohol consumption) to prevent and/or reduce the occurrence of multiple behavioral risk factors among children and adolescents. 
However, since distal variables (such as youths' psychosocial characteristics and parent/peer behaviors) can be influenced by ultimate variables (such as demographic and socioeconomic characteristics), prevention programs and policies should also aim to improve young people's socioeconomic conditions, particularly those of children and adolescents from the most disadvantaged families.
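The observed-to-expected ratio used in the clustering analysis can be sketched as follows: under independence, the expected prevalence of a combination of risk factors is the product of the marginal prevalences, and the O/E ratio measures the excess co-occurrence. All prevalences below are hypothetical, not the NLSCY estimates:

```python
# Hypothetical marginal prevalences of three behavioral risk factors
marginals = {"smoking": 0.15, "alcohol": 0.25, "inactivity": 0.60}

def oe_ratio(observed_joint, factors, marginals):
    """O/E ratio: observed joint prevalence over the product of marginals."""
    expected = 1.0
    for f in factors:
        expected *= marginals[f]
    return observed_joint / expected

# Observed joint prevalence of 5% vs. an expected 2.25% under independence
ratio = oe_ratio(0.05, ["smoking", "alcohol", "inactivity"], marginals)
```

A ratio above 1 indicates that the combination occurs more often than independence would predict, which is how the thesis quantifies clustering.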

Relevance:

100.00%

Publisher:

Abstract:

My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for a linear Gaussian state-space model. We show how to exploit the special structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyze transaction count data from financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with specific degrees of freedom to capture the heterogeneity of returns. 
We draw the volatility as a block in the time dimension and one at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of posterior densities of the volatility but also of predictive densities of future volatility. We compare the volatility forecasts and the hit rates of forecasts that do and do not use the information contained in realized volatility. This approach differs from existing ones in the empirical literature, which are mostly limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns on indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
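The state-space machinery of the first chapter can be illustrated, in a much simpler setting than the precision-based samplers discussed there, by a standard Kalman filter with an RTS smoothing pass for the scalar local-level model; the variances and data below are arbitrary:

```python
import numpy as np

def local_level_smoother(y, q=1.0, r=1.0):
    """Kalman filter + RTS smoother for the local-level model
    y_t = x_t + e_t (obs. var r),  x_t = x_{t-1} + w_t (state var q)."""
    n = len(y)
    xf, pf = np.zeros(n), np.zeros(n)   # filtered mean/variance
    xp, pp = np.zeros(n), np.zeros(n)   # predicted mean/variance
    x, p = 0.0, 1e6                     # near-diffuse prior
    for t in range(n):
        xp[t], pp[t] = x, p + q         # predict
        k = pp[t] / (pp[t] + r)         # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])  # update
        p = (1 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()
    for t in range(n - 2, -1, -1):      # RTS backward pass
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

y = np.array([1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9])
smoothed = local_level_smoother(y)
```

The smoother uses future as well as past observations, which is what makes the efficient joint draws studied in the thesis worthwhile.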

Relevance:

100.00%

Publisher:

Abstract:

Modern myoelectric prostheses can be equipped with several degrees of freedom, which requires several muscle signals to fully exploit their capabilities. To obtain more signals, it seemed promising to test whether the six compartments of the biceps brachii could be activated voluntarily, thereby providing six control signals instead of the single one used currently. Experiments were therefore carried out with 10 normal subjects. Electrode matrices were placed on the surface above the short and long heads of the biceps to record the electromyographic (EMG) signals generated by the muscle during contractions performed while the subjects were either seated with the right elbow flexed at about 100°, or standing with the right arm extended horizontally in the coronal plane (to the side). In both positions, the hand was in supination, in a neutral position, or in pronation. The amplitude of the signals recorded over the short head of the muscle was compared with that obtained from the long head. Ultrasound imaging was used to visualize the shape of the biceps under the electrodes. Depending on the task, EMG activity was greater in one head or the other. The ability to preferentially activate one of the two heads of the biceps, even if not yet in a completely independent manner, suggests that selective use of the compartments could be a possible avenue for facilitating the control of upper-limb myoelectric prostheses.

Relevance:

100.00%

Publisher:

Abstract:

Hat Stiffened Plates are used in composite ships and are gaining popularity in metallic ship construction due to their high strength-to-weight ratio. Lightweight structures result in greater payload, higher speeds, reduced fuel consumption and lower environmental emissions. Numerical investigations have been carried out using the commercial Finite Element software ANSYS 12 to substantiate the high strength-to-weight ratio of Hat Stiffened Plates over other open-section stiffeners commonly used in shipbuilding. The analysis of stiffened plates has always been a matter of concern for structural engineers, since it is rather difficult to quantify the actual load sharing between stiffeners and plating. The Finite Element Method has been accepted as an efficient tool for the analysis of stiffened plated structures. The best results using the Finite Element Method for the analysis of thin plated structures are obtained when both the stiffeners and the plate are modeled using thin plate elements having six degrees of freedom per node. However, one serious problem encountered with this design and analysis process is that the generation of finite element models for a complex configuration is time consuming and laborious. In order to overcome these difficulties, two different methods, viz. the Orthotropic Plate Model and a Superelement for the Hat Stiffened Plate, have been suggested in the present work. In the Orthotropic Plate Model, geometric orthotropy is converted to material orthotropy, i.e., the stiffeners are smeared so that they vanish from the field of analysis, and the structure can be analysed using any commercial Finite Element software that has orthotropic elements in its element library. The Orthotropic Plate Model developed has predicted deflection, stress and linear buckling load with sufficiently good accuracy in the case of the all-four-edges simply supported boundary condition. 
In the case of the boundary condition with two edges fixed and the other two edges simply supported, however, even though the stress was predicted with good accuracy, there was a large variation in the predicted deflection. This variation arises because the rigidity of the Orthotropic Plate Model is uniform throughout the plate, whereas in the actual Hat Stiffened Plate the rigidity along the lines of attachment of the stiffeners to the plate is large compared with that of the unsupported portion of the plate. The Superelement technique is a method of treating a portion of the structure as if it were a single element even though it is made up of many individual elements. The Superelement has predicted the deflection and in-plane stress of the Hat Stiffened Plate with sufficiently good accuracy for different boundary conditions. A formulation of the Superelement for a composite Hat Stiffened Plate has also been presented in the thesis. The capability of the Orthotropic Plate Model and the Superelement to handle typical boundary conditions and characteristic loads in a ship structure has been demonstrated through numerical investigations.
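The smearing step of the Orthotropic Plate Model can be sketched as follows: the bending stiffness EI of each stiffener is distributed over the stiffener spacing s and added to the isotropic plate rigidity in the stiffened direction. All numbers below are hypothetical, and the formulas are the standard smeared-rigidity approximation rather than the thesis's exact derivation:

```python
# Smeared bending rigidities for a stiffened plate (illustrative values).
E = 210e9      # Young's modulus, Pa (steel)
nu = 0.3       # Poisson's ratio
t = 0.008      # plate thickness, m
s = 0.5        # stiffener spacing, m
EI = 4.0e4     # bending stiffness of one stiffener about the plate, N*m^2

D_plate = E * t**3 / (12 * (1 - nu**2))  # isotropic plate rigidity, N*m
Dx = D_plate + EI / s                    # rigidity in the stiffened direction
Dy = D_plate                             # rigidity in the unstiffened direction
```

The resulting Dx >> Dy pair is what makes an orthotropic plate element an adequate stand-in for the discretely stiffened plate, except near the attachment lines, which is consistent with the deflection discrepancy reported above.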

Relevance:

100.00%

Publisher:

Abstract:

An electronic theory is developed which describes the ultrafast demagnetization in itinerant ferromagnets following the absorption of a femtosecond laser pulse. The present work intends to elucidate the microscopic physics of this ultrafast phenomenon by identifying its fundamental mechanisms. In particular, it aims to reveal the nature of the involved spin excitations and of the angular-momentum transfer between spin and lattice, which are still subjects of intensive debate. In the first, preliminary part of the thesis the initial stage of the laser-induced demagnetization process is considered. In this stage the electronic system is highly excited by the spin-conserving elementary excitations involved in the laser-pulse absorption, while the spin or magnon degrees of freedom remain very weakly excited. The role of electron-hole excitations in the stability of the magnetic order of one- and two-dimensional 3d transition metals (TMs) is investigated by using ab initio density-functional theory. The results show that the local magnetic moments are remarkably stable even at very high levels of local energy density and, therefore, indicate that these moments preserve their identity throughout the entire demagnetization process. In the second, main part of the thesis a many-body theory is proposed which takes these local magnetic moments and the local character of the involved spin excitations, such as spin fluctuations, into account from the very beginning. In this approach the relevant valence 3d and 4p electrons are described in terms of a multiband model Hamiltonian which includes Coulomb interactions, interatomic hybridizations, spin-orbit interactions and the coupling to the time-dependent laser field on an equal footing. An exact numerical time evolution is performed for small ferromagnetic TM clusters. The dynamical simulations show that after ultrashort laser-pulse absorption the magnetization of these clusters decreases on a time scale of a hundred femtoseconds. 
In particular, the results reproduce the experimentally observed laser-induced demagnetization in ferromagnets and demonstrate that this effect can be explained in terms of the following purely electronic non-adiabatic mechanism: first, on a time scale of 10–100 fs after laser excitation, the spin-orbit coupling yields local angular-momentum transfer between the spins and the electron orbits; subsequently, the orbital angular momentum is very rapidly quenched in the lattice, on the time scale of one femtosecond, due to interatomic electron hoppings. In combination, these two processes result in a demagnetization within a hundred or a few hundred femtoseconds after laser-pulse absorption.

Relevance:

100.00%

Publisher:

Abstract:

Current force feedback, haptic interface devices are generally limited to the display of low-frequency, high-amplitude spatial data. A typical device consists of a low-impedance framework of one or more degrees-of-freedom (dof), allowing a user to explore a pre-defined workspace via an end effector such as a handle, thimble, probe or stylus. The movement of the device is then constrained using high-gain positional feedback, thus reducing the apparent dof of the device and conveying the illusion of hard contact to the user. Such devices are, however, limited to a narrow bandwidth of frequencies, typically below 30 Hz, and are not well suited to the display of surface properties, such as object texture. This paper details a device to augment an existing force feedback haptic display with a vibrotactile display, thus providing a means of conveying low-amplitude, high-frequency spatial information about object surface properties.

1. Haptics and Haptic Interfaces

Haptics is the study of human touch and interaction with the external environment via touch. Information from the human sense of touch can be classified into two categories, cutaneous and kinesthetic. Cutaneous information is provided via the mechanoreceptive nerve endings in the glabrous skin of the human hand. It is primarily a means of relaying information regarding small-scale details in the form of skin stretch, compression and vibration.
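A minimal sketch, with entirely arbitrary parameter values, of the kind of command signal such an augmented display would render: a low-frequency, high-amplitude force component plus a small high-frequency vibrotactile component for texture. This is an illustration of the signal decomposition only, not the paper's device:

```python
import math

def haptic_output(t, f_force=5.0, a_force=1.0, f_texture=250.0, a_texture=0.05):
    """Low-frequency force component plus a small high-frequency
    vibrotactile component (amplitudes in arbitrary units)."""
    force = a_force * math.sin(2 * math.pi * f_force * t)
    texture = a_texture * math.sin(2 * math.pi * f_texture * t)
    return force + texture

# One second of output sampled at 2 kHz
samples = [haptic_output(n / 2000.0) for n in range(2000)]
```

The force channel stays within the sub-30 Hz band the device can render, while the texture channel sits well above it, which is why a separate vibrotactile actuator is needed.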

Relevance:

100.00%

Publisher:

Abstract:

We investigate the behavior of a two-dimensional inviscid and incompressible flow when pushed out of dynamical equilibrium. We use the two-dimensional vorticity equation with spectral truncation on a rectangular domain. For a sufficiently large number of degrees of freedom, the equilibrium statistics of the flow can be described through a canonical ensemble with two conserved quantities, energy and enstrophy. To perturb the system out of equilibrium, we change the shape of the domain according to a protocol, which changes the kinetic energy but leaves the enstrophy constant. We interpret this as doing work on the system. Evolving the system along a forward process and its corresponding backward process, we find numerical evidence that the distributions of the work performed satisfy the Crooks relation. We confirm our results by proving the Crooks relation for this system rigorously.
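The Crooks relation states that P_F(W)/P_R(−W) = exp(β(W − ΔF)). A minimal numerical illustration, not tied to the vorticity system of the paper: for Gaussian forward and reverse work distributions with equal variance σ², the relation holds exactly when the means are μ_F = ΔF + βσ²/2 and μ_R = −ΔF + βσ²/2, which the sketch below checks pointwise:

```python
import math

beta, sigma2, dF = 1.0, 1.0, 1.0
mu_f = dF + beta * sigma2 / 2    # mean forward work
mu_r = -dF + beta * sigma2 / 2   # mean reverse work

def gauss(w, mu, var):
    """Gaussian probability density."""
    return math.exp(-(w - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Crooks: ln[P_F(W) / P_R(-W)] should equal beta * (W - dF) for every W
checks = [math.log(gauss(w, mu_f, sigma2) / gauss(-w, mu_r, sigma2))
          - beta * (w - dF) for w in (-2.0, -0.5, 0.0, 1.0, 3.0)]
```

The residuals in `checks` vanish identically for the Gaussian case; for the truncated vorticity system the paper establishes the relation both numerically and rigorously.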


Resumo:

We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using the Principal Component Analysis technique, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted in PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but in the direction opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and at the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared to gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories like PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time providing a robust framework compatible with present and even future, more complex descriptions of human attitudes towards risk.
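The two dimensions described above can be illustrated with a toy computation. This is a sketch only: the data layout, variable names, and the one-subject example are invented for illustration, not drawn from the SGG dataset. Average risk taking is computed as the subject's mean chosen risk level across panels, and sensitivity as the ordinary-least-squares slope of chosen risk on the panel's risk premium.

```python
def risk_dimensions(choices):
    """choices: list of (risk_premium, chosen_risk_level) pairs, one per panel.
    Returns (average_risk_taking, sensitivity), where sensitivity is the OLS
    slope of the chosen risk level on the risk premium."""
    n = len(choices)
    premiums = [p for p, _ in choices]
    risks = [r for _, r in choices]
    avg_risk = sum(risks) / n                         # first dimension
    mean_p = sum(premiums) / n
    cov = sum((p - mean_p) * r for p, r in choices)   # centered cross-moment
    var = sum((p - mean_p) ** 2 for p in premiums)
    sensitivity = cov / var                           # second dimension
    return avg_risk, sensitivity

# hypothetical subject who takes more risk as the risk premium rises
subject = [(0.1, 1.0), (0.2, 2.0), (0.3, 3.0), (0.4, 4.0)]
```

A risk-averse but premium-sensitive subject would show a low first coordinate and a positive second coordinate; the gender differences reported above concern mainly the first coordinate, while both genders look similar on the second.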


Resumo:

We study the approximation of harmonic functions by means of harmonic polynomials in two-dimensional, bounded, star-shaped domains. Assuming that the functions possess analytic extensions to a δ-neighbourhood of the domain, we prove exponential convergence of the approximation error with respect to the degree of the approximating harmonic polynomial. All the constants appearing in the bounds are explicit and depend only on the shape-regularity of the domain and on δ. We apply the obtained estimates to show exponential convergence with rate O(exp(−bN^{1/2})), N being the number of degrees of freedom and b > 0, of an hp-dGFEM discretisation of the Laplace equation based on piecewise harmonic polynomials. This result is an improvement over the classical rate O(exp(−bN^{1/3})), and is due to the use of harmonic polynomial spaces, as opposed to complete polynomial spaces.
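The two rates compared above can be written side by side (with $C, b, b' > 0$ generic constants; the explicit constants are the subject of the paper's estimates):

```latex
\|u - u_N\| \le C\, e^{-b\, N^{1/2}}
\quad\text{(harmonic polynomial spaces)}
\qquad\text{vs.}\qquad
\|u - u_N\| \le C\, e^{-b'\, N^{1/3}}
\quad\text{(complete polynomial spaces)}
```

Roughly speaking, the gain reflects a dimension count: in two dimensions the space of harmonic polynomials of degree at most $p$ has dimension $2p+1$, whereas the complete polynomial space has dimension $(p+1)(p+2)/2$, so the same polynomial degree is reached with asymptotically fewer degrees of freedom.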


Resumo:

We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
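Schematically, the oscillatory enrichment works as follows. Writing it in the generic hybrid numerical-asymptotic form (a sketch of the framework, not this paper's precise approximation space), the unknown boundary density $v$ is approximated as

```latex
v(x) \;\approx\; \Psi(x) \;+\; \sum_{m} V_m(x)\, e^{\mathrm{i}\,k\,\psi_m(x)},
```

where $\Psi$ is a known leading-order (physical-optics) contribution, the phases $\psi_m$ encode the known directions of high-frequency oscillation along the screens, and the amplitudes $V_m$ are slowly varying and approximated by piecewise polynomials. Because the $V_m$ are non-oscillatory, only the polynomial approximation of these amplitudes consumes degrees of freedom, which is why $N$ can grow like the square of the logarithm of the frequency rather than linearly with it.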