879 results for 3D printing, steel bars, calibration of design values, correlation
Abstract:
A combination of two low-cost classical procedures based on titrimetric techniques is presented for the determination of pyridoxine hydrochloride in pharmaceutical samples. Initial experiments were carried out to determine the pKa1 and pKa2 values, which were compared with literature values and theoretical predictions. Commercial samples containing pyridoxine hydrochloride were then analysed electrochemically by exploiting their acid-base and precipitation reactions. Potentiometric titrations followed the reaction of the ionizable hydrogens of pyridoxine hydrochloride with NaOH as titrant, while the conductimetric method was based on the precipitation of the chloride of the pyridoxine hydrochloride molecule by Ag+ ions from silver nitrate, which changes the conductivity of the solution. Both methods were applied to the same commercial samples and gave concordant results when compared by statistical tests (95 and 98% confidence levels). Recoveries ranging from 99.0 to 108.1% were observed, indicating no significant interference in the results.
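Both titration modes reduce to the same equivalence-point arithmetic. A minimal sketch, assuming 1:1 stoichiometry (as in the Ag+/Cl- precipitation described above); the volumes and concentrations below are illustrative, not values from the paper:

```python
def analyte_concentration(c_titrant, v_equiv, v_sample):
    """Analyte concentration for a 1:1 titration: at the equivalence
    point, moles of titrant (c_titrant * v_equiv) equal moles of
    analyte in the aliquot of volume v_sample. Units cancel as long
    as both volumes are expressed the same way."""
    return c_titrant * v_equiv / v_sample

# Illustrative numbers (not from the paper): 12.5 mL of 0.10 mol/L
# AgNO3 to reach equivalence in a 25.0 mL aliquot -> 0.050 mol/L Cl-.
c_chloride = analyte_concentration(0.10, 12.5, 25.0)
```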
Abstract:
Determining the volumetric water content of soil is an important factor in irrigation management. Among the indirect estimation methods, the time-domain reflectometry (TDR) technique has received significant attention. Like any technique it has advantages and disadvantages, its greatest disadvantages being the need for calibration and the high cost of acquisition. The main goal of this study was to establish a calibration model for the TDR equipment (Trase System Model 6050X1) for estimating the volumetric water content of a Distroferric Red Latosol. The calibration was carried out in the laboratory with disturbed samples of the soil under study, packed in PVC columns with a volume of 0.0078 m³. The TDR probes were handcrafted with three rods, 0.20 m long, and installed vertically in the soil columns, with five probes per column and sixteen columns in total. Weighings were performed on a digital scale, while daily readings of the dielectric constant were taken with the TDR equipment. The linear model θv = 0.0103 Ka + 0.1900 for estimating the volumetric water content showed an excellent coefficient of determination (0.93), enabling the use of the probes for indirect estimation of soil moisture.
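The fitted model can be applied directly to TDR readings. A minimal sketch using the coefficients reported above (the function name and the example reading are my own):

```python
def volumetric_water_content(ka):
    """Volumetric water content (m3/m3) from the apparent dielectric
    constant Ka, using the linear model fitted in this study:
    theta_v = 0.0103 * Ka + 0.1900 (R^2 = 0.93)."""
    return 0.0103 * ka + 0.1900

# A hypothetical TDR reading of Ka = 20 maps to about 0.396 m3/m3.
theta = volumetric_water_content(20.0)
```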
Abstract:
Tool center point calibration is a well-known problem in industrial robotics. The major focus of academic research is to enhance the accuracy and repeatability of next-generation robots. However, operators of currently available robots work within the limits of the robot's repeatability and require calibration methods suited to these basic applications. This study was conducted in association with Stresstech Oy, which provides solutions for manufacturing quality control. Their sensor, based on the Barkhausen noise effect, requires accurate positioning, and this requirement poses a tool center point calibration problem when measurements are executed with an industrial robot. Multiple options for automatic tool center point calibration are available on the market, and manufacturers provide customized calibrators for most robot types and tools. With the handmade sensors and the multiple robot types that Stresstech uses, however, this would require a great deal of labor. This thesis introduces a calibration method suitable for any robot with two free digital input ports. It builds on the traditional method of using a light barrier to detect the tool in the robot coordinate system, but utilizes two parallel light barriers to simultaneously measure and detect the center axis of the tool. Rotations about two axes are defined from the center axis, and the last rotation, about the Z-axis, is calculated for tools whose widths differ along the X- and Y-axes. The results indicate that this method is suitable for calibrating the geometric tool center point of a Barkhausen noise sensor. In repeatability tests, a standard deviation within the robot's repeatability was achieved. The Barkhausen noise signal was also evaluated after recalibration, and the results indicate correct calibration. However, future studies should use a more accurate manipulator, since the method employs the robot itself as the measuring device.
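The abstract does not give the geometry in detail; as a rough one-dimensional illustration of the light-barrier idea (the names and numbers are invented), each barrier yields one center point as the tool sweeps through the beam, and the line through the two points is the center axis:

```python
def barrier_center(pos_beam_broken, pos_beam_restored):
    """Midpoint of the robot positions at which a light-barrier beam
    is first broken and then restored while the tool sweeps through
    it; this is the tool center along the sweep direction."""
    return (pos_beam_broken + pos_beam_restored) / 2.0

# Each of the two parallel barriers yields one center point; the
# line through the two points defines the tool's center axis.
center_1 = barrier_center(10.2, 14.6)  # sweep through barrier 1
center_2 = barrier_center(10.4, 14.8)  # sweep through barrier 2
```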
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of a CAD model is invariably represented as a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification surveys.
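As a toy illustration of the edge-collapse operation that many mesh simplification algorithms build on (a naive length-based variant, not any specific algorithm from the survey):

```python
import math

def collapse_shortest_edge(vertices, faces):
    """One step of a naive edge-collapse simplification: merge the
    endpoints of the shortest edge into their midpoint and drop the
    triangles that become degenerate. Production simplifiers choose
    collapses by geometric error (e.g. quadric error metrics),
    not by edge length."""
    # Collect the undirected edges of the triangle mesh.
    edges = {tuple(sorted(e)) for f in faces
             for e in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0]))}
    a, b = min(edges, key=lambda e: math.dist(vertices[e[0]], vertices[e[1]]))
    # Move vertex a to the edge midpoint and redirect b's faces to a.
    vertices[a] = [(p + q) / 2 for p, q in zip(vertices[a], vertices[b])]
    remapped = [[a if v == b else v for v in f] for f in faces]
    # Keep only triangles that still have three distinct vertices.
    return vertices, [f for f in remapped if len(set(f)) == 3]

# A unit square made of two triangles collapses to a single triangle.
verts = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
verts, tris = collapse_shortest_edge(verts, [[0, 1, 2], [0, 2, 3]])
```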
Abstract:
The conceptualization of childhood has changed over the centuries and appears to be undergoing further change in our post-modern culture. While the United Nations Convention on the Rights of the Child is designed to give children everywhere basic human rights while taking their special needs into consideration, no recent research has examined adult attitudes toward those rights. In an attempt to understand the attitudes adults hold regarding autonomy rights, and to look for factors that could predict those attitudes, the current study considers values, parenting style, emotions and parent status as possible predictor variables. A total of 90 participants took part in the research, which had both written and interview components. Results generally failed to establish a reliable set of predictors. However, some interesting information was obtained regarding the endorsement of children's autonomy rights, and some general conclusions were reached about our view of children and their rights at the end of the twentieth century.
Abstract:
The present study examined the relationships, in a work motivation context, among the perceived importance and achievement of work values, locus of control and internal work motivation. The congruence of a work value was defined as the discrepancy between the importance of that work value and its perceived achievement. The theoretical framework was based on a self-perpetuating cycle of motivation in which the perceived importance and achievement of work values and internal work motivation are separate and distinct, yet interrelated, factors. It was hypothesized that individuals with high congruence of work values would experience higher levels of internal work motivation than individuals with low congruence of work values. It was also hypothesized that individuals with an internal locus of control would experience more internal work motivation, and have higher congruence of work values, than individuals with an external locus of control. As well, the possibility of locus of control acting as a moderator between the importance of work values and internal work motivation was explored. Survey data were collected from 184 managerial-level employees of the XYZ company during an ongoing training session. The following instruments were used to measure the variables: Elizur's (1984) Importance of Work Values, Hunt and Saul's (1985) Achievement of Work Values, Hatfield, Robinson and Huseman's (1975) Job Perception Scale, a modified version of Rotter's (1966) I-E Locus of Control Scale, and the Internal Work Motivation Scale (Hackman & Oldham, 1980), which is part of the Job Diagnostic Survey. The findings indicated that, for this sample, locus of control was not a significant factor in determining congruence of work values or internal work motivation. Furthermore, locus of control was not found to moderate the relationship between the importance of work values and internal work motivation.
All individuals in this study had relatively high levels of internal work motivation. However, for a majority of the 21 values, individuals with higher congruence of work values did have significantly higher internal work motivation than those with low congruence. This was particularly true for the intrinsic values, which included responsibility, meaningfulness and use of abilities. In addition, the data were analysed as a hierarchy of needs to indicate possible organizational development or human resource development needs for the XYZ corporation.
Abstract:
The study examined the intentional use of National Sport Organizations' (NSOs) stated values. Positive Organizational Scholarship (POS) was applied through an Appreciative Inquiry (AI) approach to interviewing NSO senior leaders. One intention of this research was to foster a connection between academia and practitioners, and in so doing highlight the gap between values inaction and values-in-action. Data were collected from nine NSOs through multiple-case-study analysis of interview transcripts, websites, and constitutional statements. Results indicated that while the NSOs operated from a Management by Objectives (MBO) approach, they were interested in exploring how Management by Values (MBV) might improve their organization's performance. Eleven themes from the case study analysis contributed to the development of a framework. The 4-1 framework describes how an NSO can progress through different stages by becoming more intentional in how it uses its values. Another finding deepened our understanding of how values are experienced within an NSO and then transferred across the entire sport. Participants also spoke about the tension that arises between their NSO's values and the dominant values held by funding agents; this clash of values needs to be addressed before the tension escalates. Finally, participants expressed a desire to learn more about how values can be used more intentionally to further their organization's purpose, and strategies for intentionally leveraging values are therefore suggested. Further research should explore how helpful the 4-1 framework can be to NSO leaders who are in the process of identifying or renewing their organization's values.
Abstract:
Changes are continuously made to software source code to address customer needs and to correct faults. These continuous changes can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, generally in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without design defects. Yet only a few of these authors have carried out an empirical study of the impact of design defects on comprehension, and none of them has studied the impact of design defects on the effort developers need to correct faults. In this thesis, we make three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to assess the impact of the combination of two occurrences of Blob, or two occurrences of spaghetti code, on the performance of developers carrying out comprehension and change tasks. We measure developer performance using: (1) the NASA task load index for their effort, (2) the time they spent completing their tasks, and (3) their percentages of correct answers.
The results of the two experiments showed that two occurrences of Blob or of spaghetti code are a significant obstacle to developer performance in comprehension and change tasks. These results justify previous research on the specification and detection of design defects. Software development teams should warn developers about high numbers of design defect occurrences and recommend refactorings at each step of the development process to remove these defects where possible. In the second contribution, we study the relationship between design defects and faults, examining the impact of the presence of design defects on the effort needed to correct faults. We measure the fault-correction effort with three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of the fault corrections in the source code. We conduct an empirical study with 12 design defects detected in 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the correction period is longer for faults involving classes with design defects. Moreover, correcting faults in classes with design defects changes more files, fields, and methods. We also observed that, after a fault correction, the number of design defect occurrences in the classes involved in the correction decreases.
Understanding the impact of design defects on the effort developers need to correct faults is important to help development teams better evaluate and predict the impact of their design decisions, and thus direct their efforts toward improving the quality of their systems. Development teams should monitor and remove design defects from their systems, as these defects are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool capable of detecting design defects incrementally and iteratively. Such an incremental and iterative detection process could reduce costs, effort, and resources by allowing practitioners to identify and handle design defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design defect occurrences, but these approaches currently have four limitations: (1) they require in-depth knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach for detecting design defects, based on a machine learning technique, support vector machines, and taking practitioner feedback into account. Through an empirical study on three systems and four design defects, we showed that SMURF's precision and recall are higher than those of DETEX and BDTEX when detecting design defect occurrences.
We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that SMURF's precision and recall improve when practitioner feedback is taken into account.
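The thesis pairs a support vector machine with practitioner feedback; as a rough sketch of that detect-label-retrain loop, with a tiny perceptron standing in for the SVM and invented metric vectors:

```python
def train(samples, labels, epochs=50, lr=0.1):
    """Tiny perceptron standing in for SMURF's support vector
    machine. Each sample is a vector of code metrics (invented here:
    [number of methods, coupling]); labels are +1 (defect) / -1 (clean)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def detect(model, x):
    """Flag a class as a design-defect occurrence."""
    w, b = model
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

# Initial training set of metric vectors.
X = [[120, 40], [110, 35], [10, 3], [8, 2]]
y = [1, 1, -1, -1]
model = train(X, y)

# Practitioner feedback: a flagged class is confirmed clean, so the
# corrected label is added and the detector is retrained -- the
# iterative, incremental loop the thesis argues for.
X.append([60, 5])
y.append(-1)
model = train(X, y)
```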
Abstract:
In a recent paper, A. S. Johal and D. J. Dunstan [Phys. Rev. B 73, 024106 (2006)] applied multivariate linear regression analysis to published data on the change of ultrasonic velocity with applied stress, with the aim of obtaining the best estimates of the third-order elastic constants of cubic materials. From their analysis they conclude that uniaxial stress data on metals turn out to be nearly useless by themselves. The purpose of this comment is to point out that, with a proper analysis of uniaxial stress data, it is possible to obtain reliable values of the third-order elastic constants of cubic metals and alloys. Cu-based shape memory alloys are used as an illustrative example.
Abstract:
Hindi
Abstract:
Catadioptric sensors are combinations of mirrors and lenses designed to obtain a wide field of view. In this paper we propose a new sensor that has omnidirectional viewing ability and also provides depth information about its nearby surroundings. The sensor is based on a conventional camera coupled with a laser emitter and two hyperbolic mirrors. The mathematical formulation and precise specifications of the intrinsic and extrinsic parameters of the sensor are discussed. Our approach overcomes limitations of existing omnidirectional sensors and can eventually lead to reduced production costs.
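The paper's hyperbolic-mirror formulation is not given in the abstract; as a generic illustration of laser-based depth recovery by plain triangulation (not the authors' model, and with invented numbers):

```python
import math

def depth_from_triangulation(baseline, elevation_angle):
    """Plain laser triangulation: a laser and a camera separated by
    a known baseline observe the same spot; the spot's distance
    follows from the right triangle formed by the baseline and the
    measured viewing angle."""
    return baseline / math.tan(elevation_angle)

# A 0.1 m baseline and a measured angle of atan(0.05) put the
# surface 2.0 m away.
distance = depth_from_triangulation(0.1, math.atan(0.05))
```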
Abstract:
Please contact the tutor Dr Jonathan Faiers for any further information j.faiers@soton.ac.uk
Abstract:
Abstract taken from the publication.