927 results for Direct and inverse kinematics
Abstract:
Several machining processes have been created and improved in order to achieve the best results ever accomplished in hard and difficult-to-machine materials. Some of these abrasive manufacturing processes emerging at the science frontier can be classified as ultra-precision grinding. For finishing flat surfaces, researchers have been combining the main advantages of traditional abrasive processes, such as face grinding with constant pressure, fixed abrasives for a two-body removal mechanism, total contact of the part with the tool, and lapping kinematics, as well as specific operations to maintain grinding wheel sharpness and form. In the present work, both the Ud-lap grinding process and its machine tool were studied, aiming at nanometric finishing of flat metallic surfaces. This hypothesis was investigated on AISI 420 stainless steel workpieces Ud-lap ground with different values of overlap factor on dressing (Ud = 1, 3, and 5) and grit sizes of conventional grinding wheels (silicon carbide (SiC) = #800, #600, and #300), using a new machine tool specially designed and built for such finishing. The best results, obtained after 10 min of machining, were an average surface roughness (Ra) of 1.92 nm, a flatness deviation of 1.19 μm on 25.4-mm-diameter workpieces, and a mirrored surface finish. Given the surface quality achieved, the Ud-lap grinding process can be included among the ultra-precision abrasive processes and, depending on the application, the sequential steps of grinding, lapping, and polishing can be replaced by the proposed abrasive process.
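For orientation, a minimal sketch of how the dressing overlap factor above is commonly computed, assuming the usual definition Ud = bd/sd (active dresser contact width bd divided by the dressing lead sd per wheel revolution); the function and numbers are illustrative, not taken from the thesis:

```python
def overlap_factor(b_d_mm: float, s_d_mm: float) -> float:
    """Dressing overlap factor Ud = b_d / s_d (dimensionless).

    b_d_mm: active contact width of the dresser (mm)
    s_d_mm: dressing lead, i.e. axial feed per wheel revolution (mm/rev)
    """
    return b_d_mm / s_d_mm

# Example: a 0.6 mm dresser contact width with feeds chosen to give Ud = 1, 3, 5
for s_d in (0.6, 0.2, 0.12):
    print(f"s_d = {s_d} mm/rev -> Ud = {overlap_factor(0.6, s_d):.0f}")
```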
Abstract:
Five different methods were critically examined to characterize the pore structure of the silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which overestimated the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed especially for monolithic silicas and which, contrary to particulate supports, exhibit two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the characterization of mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, as well as the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment. It was found that the mass transfer from the flow-through pores to mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeability, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to the Hagen-Poiseuille relation. The result was that it is not the flow-through pore size that governs the column back pressure; rather, the surface-area-to-volume ratio of the silica skeleton is most decisive. Thus the monolith with the lowest ratio will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores due to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in the small-scaled silica monoliths with narrow distribution.

The prediction of the optimum regimes of the pore structural parameters for given target parameters in HPLC separations was performed. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the value of the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study on the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeleton and the external porosity are decisive for column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches, and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. Column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
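As a pointer to how mercury porosimetry turns intrusion pressures into pore diameters, here is a minimal sketch of the standard Washburn equation, d = -4γ cos θ / p; the surface tension and contact angle defaults below are common textbook values, not the newly recommended values used in the thesis:

```python
import math

def washburn_pore_diameter(p_pa: float,
                           gamma_n_per_m: float = 0.485,
                           theta_deg: float = 140.0) -> float:
    """Washburn equation: d = -4 * gamma * cos(theta) / p.

    p_pa: applied mercury intrusion pressure (Pa)
    gamma_n_per_m: surface tension of mercury (N/m), textbook default
    theta_deg: mercury contact angle (deg); the thesis stresses that
               this value must be chosen with care.
    Returns the pore diameter in metres.
    """
    return -4.0 * gamma_n_per_m * math.cos(math.radians(theta_deg)) / p_pa

# Example: intrusion at 1 MPa with the conventional 140 deg contact angle
d = washburn_pore_diameter(1.0e6)
print(f"pore diameter ~ {d * 1e6:.2f} um")  # ~1.49 um
```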
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution to direct and inverse problems involving the propagation of uncertainty. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis, and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested with literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering; it has been applied to several case studies related to flow and transport in porous media. Each application is associated with innovative elements such as (i) new analytical formulations describing motion and displacement of non-Newtonian fluids in porous media, (ii) application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of radionuclide migration risk in the subsurface environment, and (iii) development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
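The thesis's code is in MATLAB; as a generic illustration of the non-intrusive PCE workflow it describes (fit a polynomial surrogate to model samples, then read global sensitivity indices off the coefficients), here is a minimal Python sketch in which an illustrative two-input test function stands in for the groundwater model:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)

def model(x1, x2):
    # Illustrative test function; stands in for a flow/transport solver.
    return x1 + 0.5 * x2**2 + 0.3 * x1 * x2

# Multi-indices of a total-degree-2 probabilists' Hermite basis in 2 dims
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def psi(alpha, x):
    """Orthonormal HermiteE basis: He_a(x1)/sqrt(a!) * He_b(x2)/sqrt(b!)."""
    a, b = alpha
    ca = np.zeros(a + 1); ca[a] = 1.0
    cb = np.zeros(b + 1); cb[b] = 1.0
    return (hermeval(x[:, 0], ca) * hermeval(x[:, 1], cb)
            / np.sqrt(factorial(a) * factorial(b)))

# Non-intrusive PCE: least-squares fit on random samples of the inputs
x = rng.standard_normal((200, 2))
y = model(x[:, 0], x[:, 1])
A = np.column_stack([psi(a, x) for a in alphas])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# First-order Sobol indices follow directly from the squared coefficients
var = np.sum(coef[1:] ** 2)
s1 = sum(c**2 for a, c in zip(alphas, coef) if a[0] > 0 and a[1] == 0) / var
s2 = sum(c**2 for a, c in zip(alphas, coef) if a[1] > 0 and a[0] == 0) / var
print(f"S1 = {s1:.3f}, S2 = {s2:.3f}")
```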
Abstract:
In 2011, the ATLAS experiment at the Large Hadron Collider recorded a dataset of 4.7 inverse femtobarns at a centre-of-mass energy of 7 TeV. Part of the extensive physics programme of the ATLAS experiment is the search for physics beyond the Standard Model. Supersymmetry, a new symmetry between bosons and fermions, is regarded as the most promising candidate for new physics, and numerous direct and indirect searches for supersymmetry have been carried out over recent decades. In the following work, a direct search for supersymmetry is performed in final states with jets, missing transverse energy, and exactly one electron or muon. The analysed dataset of 4.7 inverse femtobarns comprises the full amount of data recorded by the ATLAS experiment at a centre-of-mass energy of 7 TeV. The results of the analysis are combined with several other leptonic search channels to maximize the sensitivity to various supersymmetric production and decay modes. The measured data are compatible with the Standard Model expectation, and new exclusion limits in various supersymmetric models are computed.
Abstract:
Noninvasive blood flow measurements based on Doppler ultrasound studies are the main clinical tool for studying the cardiovascular status of fetuses at risk for circulatory compromise. Usually, qualitative analysis of peripheral arteries is used to gauge the level of compensation in a fetus; in particular clinical situations, such as severe growth restriction or volume overload, venous vessels close to the heart or flow patterns within the heart are also analysed. Quantitative assessment of the driving force of the fetal circulation, the cardiac output, however, remains an elusive goal in fetal medicine. This article reviews the methods for direct and indirect assessment of cardiac function and explains new clinical applications. Part 1 of this review describes the concepts of cardiac function and cardiac output and the techniques that have been used to quantify output. Part 2 summarizes the use of arterial and venous Doppler studies in the fetus and gives a detailed description of indirect measures of cardiac function (such as indices derived from the duration of segments of the cardiac cycle) with current examples of their application.
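For readers unfamiliar with how a Doppler trace yields an output estimate, a minimal sketch of the standard calculation (stroke volume = outflow-tract cross-sectional area × velocity-time integral; output = stroke volume × heart rate); the input values below are illustrative placeholders, not fetal reference data:

```python
import math

def doppler_cardiac_output(vti_cm: float, diameter_cm: float, hr_bpm: float) -> float:
    """Cardiac output (mL/min) from a Doppler velocity-time integral.

    vti_cm: velocity-time integral of the outflow-tract trace (cm)
    diameter_cm: outflow-tract diameter (cm)
    hr_bpm: heart rate (beats per minute)
    """
    csa = math.pi * (diameter_cm / 2.0) ** 2   # cross-sectional area, cm^2
    stroke_volume = csa * vti_cm               # mL per beat
    return stroke_volume * hr_bpm              # mL/min

# Illustrative example only (not fetal reference values)
print(f"{doppler_cardiac_output(vti_cm=10.0, diameter_cm=0.5, hr_bpm=140):.0f} mL/min")
```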
Abstract:
This paper uses a survey experiment to examine differences in public attitudes toward 'direct' and 'indirect' government spending. Federal social welfare spending in the USA has two components: the federal government spends money to directly provide social benefits to citizens, and also indirectly subsidizes the private provision of social benefits through tax expenditures. Though benefits provided through tax expenditures are considered spending for budgetary purposes, they differ from direct spending in several ways: in the mechanisms through which benefits are delivered to citizens, in how they distribute wealth across the income spectrum, and in the visibility of their policy consequences to the mass public. We develop and test a model explaining how these differences will affect public attitudes toward spending conducted through direct and indirect means. We find that support for otherwise identical social programs is generally higher when such programs are portrayed as being delivered through tax expenditures than when they are portrayed as being delivered by direct spending. In addition, support for tax expenditure programs which redistribute wealth upward drops when citizens are provided information about the redistributive effects. Both of these results are conditioned by partisanship, with the opinions of Republicans more sensitive to the mechanism through which benefits are delivered, and the opinions of Democrats more sensitive to information about their redistributive effects.
Abstract:
In this paper we present a hybrid method to track human motion in real time. With simplified marker sets and monocular video input, the strengths of both marker-based and marker-free motion capture are utilized: a cumbersome marker calibration is avoided, while the robustness of the marker-free tracking is enhanced by referencing the tracked marker positions. An improved inverse kinematics solver is employed for real-time pose estimation. A computer-vision-based approach is applied to refine the pose estimation and reduce the ambiguity of the inverse kinematics solutions. We use this hybrid method to capture typical table tennis upper-body movements in a real-time virtual reality application.
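The paper's improved solver is not reproduced here; as a baseline for what an iterative inverse kinematics solver does, here is a generic damped-least-squares sketch for a planar two-link arm (the link lengths and damping constant are illustrative):

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end effector."""
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def jacobian(theta, l1=1.0, l2=1.0):
    t1, t2 = theta
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)]])

def ik_dls(target, theta0, lam=0.1, iters=100):
    """Damped least squares (Levenberg-Marquardt style) IK iteration."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        e = target - fk(theta)
        J = jacobian(theta)
        # dtheta = J^T (J J^T + lam^2 I)^-1 e -- the damping tames singularities
        theta += J.T @ np.linalg.solve(J @ J.T + lam**2 * np.eye(2), e)
    return theta

theta = ik_dls(target=np.array([1.2, 0.8]), theta0=[0.3, 0.3])
print(fk(theta))  # should be close to the target
```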
Abstract:
In industrialized countries the prevalence of obesity among women decreases with increasing socioeconomic status. While this relation has been amply documented, its explanation and its implications for other causal factors of obesity have received much less attention. Differences in childbearing patterns, norms and attitudes about fatness, dietary behaviors, and physical activity are some of the factors that have been proposed to explain the inverse relation. The objectives of this investigation were to (1) examine the associations among social characteristics and weight-related attitudes and behaviors, and (2) examine the relations of these factors to weight change and obesity. Information on social characteristics, weight-related attitudes, dietary behaviors, physical activity, and childbearing was collected from 304 Mexican American women aged 19 to 50 living in Starr County, Texas, who were at high risk for developing diabetes. Their weights were recorded both at an initial physical examination and at a follow-up interview one to two and one-half years later, permitting the computation of current Body Mass Index (weight/height²) and weight change during the interval for each subject. Path analysis was used to examine direct and indirect relations among the variables. The major findings were: (1) After controlling for age, childbearing was not an independent predictor of weight change or Body Mass Index. (2) Neither planned exercise nor total daily physical activity was an independent predictor of weight change. (3) Women with higher social characteristics scores reported less frequent meals and less use of calorically dense foods, factors associated with lower risk for weight gain. (4) Dietary intake measures were not significantly related to Body Mass Index. However, dietary behaviors (frequency of meals and snacks, use of high and low caloric density foods, eating restraint, and disinhibition of restraint) did explain a significant portion (17.4 percent) of the variance in weight change, indicating the importance of using dynamic measures of weight status in studies of the development of obesity. This study highlights factors amenable to intervention to reverse or to prevent weight gain in this population, and thereby reduce the prevalence of diabetes and its sequelae.
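For orientation, path analysis decomposes a total effect into a direct path and indirect paths routed through mediators. A minimal sketch of that decomposition on simulated data (the variable names and coefficients are illustrative, not the study's model or results):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data for a simple three-variable path model:
# social score -> dietary behavior -> weight change, plus a direct path
n = 500
social = rng.standard_normal(n)
diet = 0.6 * social + rng.standard_normal(n)                          # a-path
weight_change = -0.4 * diet + 0.1 * social + rng.standard_normal(n)   # b and c' paths

def ols_slopes(X, y):
    """Least-squares slope coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes([social], diet)[0]
cp, b = ols_slopes([social, diet], weight_change)   # direct effect c', b-path
print(f"direct effect:   {cp:+.3f}")
print(f"indirect effect: {a * b:+.3f}  (product of paths a*b)")
print(f"total effect:    {cp + a * b:+.3f}")
```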
Abstract:
This thesis presents an in-depth analysis of how direct methods such as Lucas-Kanade and Inverse Compositional can be applied to RGB-D images. The capability and accuracy of these methods are also analyzed through a series of synthetic experiments. These simulate the effects produced by RGB images, depth images, and RGB-D images so that different combinations can be evaluated. Moreover, these methods are analyzed without using any additional technique that modifies the original algorithm or that aids the algorithm in its search for a global optimum, unlike most of the articles found in the literature. Our goal is to understand when and why these methods converge or diverge so that, in the future, the knowledge extracted from the results presented here can effectively help a potential implementer. After reading this thesis, the implementer should be able to decide which algorithm fits best for a particular task and should also know which problems have to be addressed in each algorithm so that an appropriate correction is implemented using additional techniques. These additional techniques are outside the scope of this thesis; however, they are reviewed from the literature.
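To make the direct-method machinery concrete, here is a minimal one-dimensional forward-additive Lucas-Kanade sketch that estimates a pure translation between two signals; this generic toy (Gaussian bump, Gauss-Newton update) stands in for the image-alignment setting of the thesis:

```python
import numpy as np

def lucas_kanade_shift(template, image, p0=0.0, iters=50):
    """Estimate a 1-D translation p such that image(x + p) ~= template(x).

    Forward-additive Lucas-Kanade: linearize image(x + p + dp) around p
    and solve the least-squares normal equations for dp each iteration.
    """
    x = np.arange(len(template), dtype=float)
    p = p0
    for _ in range(iters):
        warped = np.interp(x + p, np.arange(len(image)), image)
        grad = np.gradient(warped)            # dI/dx at the warped positions
        error = template - warped
        dp = grad @ error / (grad @ grad)     # Gauss-Newton step
        p += dp
        if abs(dp) < 1e-8:
            break
    return p

# Synthetic test: a smooth bump shifted by 3.5 samples
x = np.arange(100, dtype=float)
image = np.exp(-0.01 * (x - 50.0) ** 2)
template = np.exp(-0.01 * (x - 46.5) ** 2)    # image shifted by 3.5 samples
print(f"estimated shift: {lucas_kanade_shift(template, image):.2f}")
```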
Abstract:
We have redefined group membership of six southern galaxy groups in the local universe (mean cz < 2000 km s⁻¹) based on new redshift measurements from our recently acquired Anglo-Australian Telescope 2dF spectra. For each group, we investigate member galaxy kinematics, substructure, luminosity functions and luminosity-weighted dynamics. Our calculations confirm that the group sizes, virial masses and luminosities cover the range expected for galaxy groups, except that the luminosity of NGC 4038 is boosted by the central starburst merger pair. We find that a combination of kinematical, substructural and dynamical techniques can reliably distinguish loose, unvirialized groups from compact, dynamically relaxed groups. Applying these techniques, we find that Dorado, NGC 4038 and NGC 4697 are unvirialized, whereas NGC 681, NGC 1400 and NGC 5084 are dynamically relaxed.
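For orientation on the virial masses mentioned above, a minimal sketch of the textbook isotropic virial estimator M_vir ≈ 5 σ_v² R / G; the prefactor varies between estimators and the inputs below are illustrative, not values from the paper:

```python
# Virial-mass estimate for a galaxy group from its velocity dispersion and
# a characteristic radius. G is expressed in Mpc (km/s)^2 / Msun so the
# inputs can stay in the units astronomers quote.
G = 4.301e-9  # gravitational constant, Mpc (km/s)^2 / Msun

def virial_mass(sigma_kms: float, radius_mpc: float, prefactor: float = 5.0) -> float:
    """Virial mass in solar masses; the prefactor is estimator-dependent."""
    return prefactor * sigma_kms**2 * radius_mpc / G

m = virial_mass(sigma_kms=200.0, radius_mpc=0.5)
print(f"M_vir ~ {m:.2e} Msun")   # ~2e13 Msun for these illustrative inputs
```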
Abstract:
Relationships between clustering, description length, and regularisation are pointed out, motivating the introduction of a cost function with a description length interpretation and the unusual and useful property of having its minimum approximated by the densest mode of a distribution. A simple inverse kinematics example is used to demonstrate that this property can be used to select and learn one branch of a multi-valued mapping. This property is also used to develop a method for setting regularisation parameters according to the scale on which structure is exhibited in the training data. The regularisation technique is demonstrated on two real data sets, a classification problem and a regression problem.
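To see why the densest-mode property matters for multi-valued mappings, consider the standard two-link-arm example: each reachable point has elbow-up and elbow-down joint solutions, so averaging them (which a sum-of-squares fit approximates) yields an invalid pose, while selecting one mode picks a consistent branch. A minimal closed-form sketch using generic textbook kinematics, not the paper's network or cost function:

```python
import numpy as np

l1 = l2 = 1.0

def fk(t1, t2):
    """Forward kinematics of a planar 2-link arm."""
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def ik_branches(x, y):
    """Closed-form IK: returns the elbow-down and elbow-up solutions."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    sols = []
    for t2 in (np.arccos(c2), -np.arccos(c2)):
        t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
        sols.append((t1, t2))
    return sols

target = (1.2, 0.8)
branches = ik_branches(*target)
mean_pose = np.mean(branches, axis=0)
for t1, t2 in branches:
    print("branch reaches:", fk(t1, t2))            # both hit the target
print("mean of branches reaches:", fk(*mean_pose))  # misses the target
```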
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fit by the model. Not concerned with finding optimal descriptions, the cost function prefers to form minimum descriptions in a naive way for computational convenience. The cost function is called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data. This is done by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, as well as on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are used in a similar fashion to derive a regulariser to control network complexity.
Abstract:
The scaling problems which afflict attempts to optimise neural networks (NNs) with genetic algorithms (GAs) are disclosed. A novel GA-NN hybrid is introduced, based on the bumptree, a little-used connectionist model. As well as being computationally efficient, the bumptree is shown to be more amenable to genetic coding than other NN models. A hierarchical genetic coding scheme is developed for the bumptree and shown to have low redundancy, as well as being complete and closed with respect to the search space. When applied to optimising bumptree architectures for classification problems, the GA discovers bumptrees which significantly out-perform those constructed using a standard algorithm. The fields of artificial life, control and robotics are identified as likely application areas for the evolutionary optimisation of NNs. An artificial life case-study is presented and discussed. Experiments are reported which show that the GA-bumptree is able to learn simulated pole balancing and car parking tasks using only limited environmental feedback. A simple modification of the fitness function allows the GA-bumptree to learn mappings which are multi-modal, such as robot arm inverse kinematics. The dynamics of the 'geographic speciation' selection model used by the GA-bumptree are investigated empirically and the convergence profile is introduced as an analytical tool. The relationships between the rate of genetic convergence and the phenomena of speciation, genetic drift and punctuated equilibrium are discussed. The importance of genetic linkage to GA design is discussed and two new recombination operators are introduced. The first, linkage mapped crossover (LMX), is shown to be a generalisation of existing crossover operators. LMX provides a new framework for incorporating prior knowledge into GAs. Its adaptive form, ALMX, is shown to be able to infer linkage relationships automatically during genetic search.
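The thesis's hierarchical bumptree coding and its LMX/ALMX operators are specific to that work and are not reproduced here; for readers new to the machinery, a minimal generic GA with one-point crossover on bitstrings (the toy "onemax" objective is purely illustrative):

```python
import random

# A generic genetic algorithm: tournament selection, one-point crossover,
# bit-flip mutation. Shown for orientation only.
random.seed(0)
GENES, POP, GENERATIONS = 32, 40, 60

def fitness(bits):             # toy objective: count of ones ("onemax")
    return sum(bits)

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):           # one-point crossover
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(bits, rate=1.0 / GENES):
    return [1 - g if random.random() < rate else g for g in bits]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP)]
print("best fitness:", fitness(max(pop, key=fitness)), "of", GENES)
```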
Abstract:
In this study, we developed a DEA-based performance measurement methodology that is consistent with performance assessment frameworks such as the Balanced Scorecard. The methodology developed in this paper takes into account the direct or inverse relationships that may exist among the dimensions of performance to construct appropriate production frontiers. The production frontiers we obtained are deemed appropriate as they consist solely of firms with desirable levels for all dimensions of performance. These levels should be at least equal to the critical values set by decision makers. The properties and advantages of our methodology against competing methodologies are presented through an application to a real-world case study from retail firms operating in the US. A comparative analysis between the new methodology and existing methodologies explains the failure of the existing approaches to define appropriate production frontiers when directly or inversely related dimensions of performance are present and to express the interrelationships between the dimensions of performance.
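The paper's extension (frontiers restricted by decision makers' critical values across directly or inversely related performance dimensions) is not reproduced here; as a baseline, a minimal input-oriented CCR DEA envelopment model solved as a linear program, with illustrative data:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: rows = performance dimensions, columns = firms
X = np.array([[2.0, 4.0, 3.0, 5.0],    # inputs
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.5, 2.0]])   # outputs

def ccr_efficiency(o: int) -> float:
    """Efficiency of firm o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0                # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])              # X@lam - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # -Y@lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(0, None)] * (1 + n)                 # theta >= 0, lam >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

for o in range(X.shape[1]):
    print(f"firm {o}: efficiency = {ccr_efficiency(o):.3f}")
```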
Abstract:
A model was tested to examine relationships among leadership behaviors, team diversity, and team process measures with team performance and satisfaction at both the team and leader-member levels of analysis. Relationships between leadership behavior and team demographic and cognitive diversity were hypothesized to have both direct effects on organizational outcomes and indirect effects through team processes. Leader-member differences were investigated to determine the effects of leader-member diversity on leader-member exchange quality, individual effectiveness, and satisfaction. Leadership had little direct effect on team performance, but several strong positive indirect effects through team processes. Demographic Diversity had no impact on team processes, directly impacted only one performance measure, and moderated the leadership-to-team-process relationship. Cognitive Diversity had a number of direct and indirect effects on team performance, with the net effects uniformly positive, and did not moderate the leadership-to-team-process relationship. In sum, the team model suggests a complex combination of leadership behaviors positively impacting team processes, demographic diversity having little impact on team process or performance, cognitive diversity having a positive net impact, and team processes having mixed effects on team outcomes. At the leader-member level, leadership behaviors were a strong predictor of Leader-Member Exchange (LMX) quality. Leader-member demographic and cognitive dissimilarity were each predictors of LMX quality, but failed to moderate the leader behavior to LMX quality relationship. LMX quality was strongly and positively related to self-reported effectiveness and satisfaction. The study makes several contributions to the literature. First, it explicitly links leadership and team diversity. Second, demographic and cognitive diversity are conceptualized as distinct and multi-faceted constructs. Third, a methodology for creating an index of categorical demographic and interval cognitive measures is provided so that diversity can be measured in a holistic, conjoint fashion. Fourth, the study simultaneously investigates the impact of diversity at the team and leader-member levels of analysis. Fifth, insights into the moderating impact of different forms of team diversity on the leadership-to-team-process relationship are provided. Sixth, this study incorporates a wide range of objective and independent measures to provide a 360° assessment of team performance.