898 results for modelling the robot
Abstract:
Benchmarking techniques have evolved over the years since Xerox’s pioneering visits to Japan in the late 1970s. The focus of benchmarking has also shifted during this period. Tracing in detail the evolution of benchmarking in one specific area of business activity, supply and distribution management, as seen by the participants in that evolution, creates a picture of a movement from single-function, cost-focused, competitive benchmarking, through cross-functional, cross-sectoral, value-oriented benchmarking, to process benchmarking. As process efficiency and effectiveness become the primary foci of benchmarking activities, the measurement parameters used to benchmark performance converge with the factors used in business process modelling. The possibility is therefore emerging of modelling business processes and then feeding the models with actual data from benchmarking exercises. This would overcome the most common criticism of benchmarking, namely that it intrinsically lacks the ability to move beyond current best practice. In fact, the combined power of modelling and benchmarking may prove to be the basic building block of informed business process re-engineering.
Abstract:
The predictive capability of high-fidelity finite element modelling, to accurately capture damage and crush behaviour of composite structures, relies on the acquisition of accurate material properties, some of which have necessitated the development of novel approaches. This paper details the measurement of interlaminar and intralaminar fracture toughness and the non-linear shear behaviour of carbon fibre (AS4)/thermoplastic polyetherketoneketone (PEKK) composite laminates, and the utilisation of these properties for the accurate computational modelling of crush. Double-cantilever beam (DCB), four-point end-notched flexure (4ENF) and mixed-mode bending (MMB) test configurations were used to determine the initiation and propagation fracture toughness in mode I, mode II and mixed-mode loading, respectively. Compact tension (CT) and compact compression (CC) test samples were employed to determine the intralaminar longitudinal tensile and compressive fracture toughness. V-notched rail shear tests were used to measure the highly non-linear shear behaviour associated with thermoplastic composites, as well as the shear fracture toughness. Corresponding numerical models of these tests were developed for verification and yielded good correlation with the experimental response. This also confirmed the accuracy of the measured values, which were then employed as input material parameters for modelling the crush behaviour of a corrugated test specimen.
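As a minimal illustration of how DCB test data yield a mode I toughness value, the sketch below applies simple beam theory, G_I = 3Pδ/(2ba) (the uncorrected modified-beam-theory form of ASTM D5528). All numbers are invented for illustration, not values from the paper.

```python
# Minimal sketch: mode I fracture toughness from a DCB test using simple
# beam theory, G_I = 3*P*delta / (2*b*a). Illustrative numbers only.

def g1_dcb(load_N, opening_mm, width_mm, crack_len_mm):
    """Return G_I in kJ/m^2 for a double-cantilever beam specimen."""
    delta = opening_mm * 1e-3   # mm -> m
    b = width_mm * 1e-3
    a = crack_len_mm * 1e-3
    return 3.0 * load_N * delta / (2.0 * b * a) / 1000.0  # J/m^2 -> kJ/m^2

# Illustrative values: 60 N load at 5 mm opening, 25 mm wide specimen,
# 50 mm crack length
print(round(g1_dcb(60.0, 5.0, 25.0, 50.0), 3))  # 0.36 kJ/m^2
```

In practice, standards add a crack-length correction term and track G_I along the propagating crack to build a resistance curve.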
Abstract:
The development of robots has shown itself to be a very complex interdisciplinary research field. The predominant procedure for these developments in recent decades has been based on the assumption that each robot is a fully personalized project, with the direct embedding of hardware and software technologies in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has also imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at both hardware and software levels, to the expertise of local groups. Large advances might be reached, for example, if physical parts of a robot could be reused in a different robot constructed with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), that aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, as well as the advances proposed for their improvement.
Abstract:
This article describes the Robot Vision challenge, a competition that evaluates solutions for the visual place classification problem. Since its origin, this challenge has been proposed as a common benchmark where worldwide proposals are measured using a common overall score. Each new edition of the competition introduced novelties in both the type of input data and the subobjectives of the challenge. All the techniques used by the participants have been gathered and published to make them accessible for future developments. The legacy of the Robot Vision challenge includes data sets, benchmarking techniques, and wide experience in place classification research, which is reflected in this article.
Abstract:
In this project an optimal pose selection method for the calibration of an overconstrained cable-driven parallel robot is presented. This manipulator belongs to a subcategory of parallel robots in which the classic rigid "legs" are replaced by cables. Cables are flexible elements that bring both advantages and disadvantages to robot modeling. For this reason there are many open research issues, and the calibration of geometric parameters is one of them. The identification of the geometry of a robot, in particular, is usually called kinematic calibration. Many methods have been proposed in past years for the solution of this problem. Although these methods are based on calibration using different kinematic models, their robustness and reliability decrease as the robot's geometry becomes more complex. This fact makes the selection of the calibration poses more complicated: the position and orientation of the end-effector in the workspace become important selection criteria. Thus, in general, it is necessary to evaluate the robustness of the chosen calibration method, for example by means of a parameter such as the observability index. It is known from theory that maximizing this index identifies the best choice of calibration poses and, consequently, that using this pose set may improve the calibration process. The objective of this thesis is to analyze optimization algorithms which aim to calculate an optimal choice of poses in both quantitative and qualitative terms. Quantitatively, because it is of fundamental importance to understand how many poses are needed; a greater number of poses does not necessarily lead to a better result. Qualitatively, because it is useful to understand whether the selected combination of poses actually gives additional information in the process of identifying the parameters.
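To make the observability-based selection concrete, the sketch below computes the O1 index (geometric mean of the singular values of the stacked identification Jacobian, divided by the square root of the pose count) and greedily adds the candidate pose that most increases it. The random Jacobians are toy stand-ins: in a real kinematic calibration they come from the robot's error model.

```python
# Sketch of calibration pose selection by maximizing an observability
# index. The candidate Jacobians are random toy stand-ins, not a real
# cable-driven robot error model.
import numpy as np

def observability_o1(J, n_poses):
    """O1 index: geometric mean of singular values over sqrt(pose count)."""
    s = np.linalg.svd(J, compute_uv=False)
    return s.prod() ** (1.0 / len(s)) / np.sqrt(n_poses)

def greedy_pose_selection(jacobians, n_select):
    """Greedily add the candidate pose that most increases O1."""
    chosen = []
    for _ in range(n_select):
        best = max(
            (i for i in range(len(jacobians)) if i not in chosen),
            key=lambda i: observability_o1(
                np.vstack([jacobians[j] for j in chosen + [i]]),
                len(chosen) + 1,
            ),
        )
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
# 20 candidate poses, each contributing 6 residual rows over 4 parameters
candidates = [rng.standard_normal((6, 4)) for _ in range(20)]
picked = greedy_pose_selection(candidates, 5)
print(len(picked))  # 5 distinct poses selected
```

Greedy selection is one simple heuristic; the thesis compares optimization algorithms for this choice, including how many poses are worth adding.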
Abstract:
This paper develops a Markovian jump model to describe fault occurrence in a three-joint manipulator robot. The model includes the changes of operating points and the probability that a fault occurs in an actuator. After a fault, the robot works as a manipulator with free joints. Based on the developed model, a comparative study among three Markovian controllers, H(2), H(infinity), and mixed H(2)/H(infinity), is presented, applied to an actual manipulator robot subject to one and two consecutive faults.
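The core idea of a Markovian jump model can be sketched in a few lines: the operation mode switches according to a Markov chain, and each mode has its own dynamics. The matrices and probabilities below are invented for illustration and are not the paper's robot model.

```python
# Illustrative Markovian jump linear system: mode 0 is nominal, mode 1
# represents an actuator fault (free joint). All values are made up.
import numpy as np

A = {0: np.array([[0.9, 0.1], [0.0, 0.9]]),   # nominal dynamics
     1: np.array([[0.9, 0.1], [0.0, 1.0]])}   # faulty dynamics
P = np.array([[0.98, 0.02],   # P[i, j] = prob. of jumping from mode i to j
              [0.00, 1.00]])  # the fault is absorbing: no self-repair

def simulate(x0, steps, seed=0):
    rng = np.random.default_rng(seed)
    x, mode, modes = np.asarray(x0, float), 0, []
    for _ in range(steps):
        mode = rng.choice(2, p=P[mode])  # Markov mode transition
        x = A[mode] @ x                  # mode-dependent dynamics
        modes.append(mode)
    return x, modes

x_final, modes = simulate([1.0, 1.0], 200)
print(sorted(set(modes)))
```

The Markovian controllers compared in the paper (H(2), H(infinity), mixed) are synthesized against exactly this kind of mode-switching model, with a gain per mode.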
Abstract:
This paper proposes a mixed validation approach based on coloured Petri nets and 3D graphic simulation for the design of supervisory systems in manufacturing cells with multiple robots. The coloured Petri net is used to model the cell behaviour at a high level of abstraction. It models the activities of each cell component and their coordination by a supervisory system. The graphical simulation is used to analyse and validate the cell behaviour in a 3D environment, allowing the detection of collisions and the calculation of process times. The motivation for this work comes from the aeronautic industry. The automation of a fuselage assembly process requires the integration of robots with other cell components such as metrological or vision systems. In this cell, the robot trajectories are defined by the supervisory system and result from the coordination of the cell components. The paper presents the application of the approach to an aircraft assembly cell under integration in Brazil. This case study shows the feasibility of the approach and supports the discussion of its main advantages and limits. (C) 2011 Elsevier Ltd. All rights reserved.
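The token game underlying such a net can be sketched with an ordinary (uncoloured) Petri net, a simplification of the coloured nets the paper uses: a transition fires when all its input places hold enough tokens. Place and transition names below are illustrative, not taken from the paper's cell model.

```python
# Token-game sketch of an (uncoloured) Petri net coordinating cell
# components. Place/transition names are invented for illustration.

marking = {"robot_idle": 1, "part_ready": 1, "robot_busy": 0, "part_done": 0}

# transition -> (input places with required tokens, output places)
transitions = {
    "start_drilling": ({"robot_idle": 1, "part_ready": 1}, {"robot_busy": 1}),
    "finish_drilling": ({"robot_busy": 1}, {"robot_idle": 1, "part_done": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    assert enabled(name), f"{name} not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n   # consume input tokens
    for p, n in post.items():
        marking[p] += n   # produce output tokens

fire("start_drilling")
fire("finish_drilling")
print(marking["part_done"])  # 1
```

Coloured Petri nets extend this by attaching typed data ("colours") to tokens, which is what lets a single net model many part types and robots compactly.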
Abstract:
Ecological niche modelling combines species occurrence points with environmental raster layers in order to obtain models for describing the probabilistic distribution of species. The process to generate an ecological niche model is complex. It requires dealing with a large amount of data, use of different software packages for data conversion, for model generation and for different types of processing and analyses, among other functionalities. A software platform that integrates all requirements under a single and seamless interface would be very helpful for users. Furthermore, since biodiversity modelling is constantly evolving, new requirements are constantly being added in terms of functions, algorithms and data formats. This evolution must be accompanied by any software intended to be used in this area. In this scenario, a Service-Oriented Architecture (SOA) is an appropriate choice for designing such systems. According to SOA best practices and methodologies, the design of a reference business process must be performed prior to the architecture definition. The purpose is to understand the complexities of the process (business process in this context refers to the ecological niche modelling problem) and to design an architecture able to offer a comprehensive solution, called a reference architecture, that can be further detailed when implementing specific systems. This paper presents a reference business process for ecological niche modelling, as part of a major work focused on the definition of a reference architecture based on SOA concepts that will be used to evolve the openModeller software package for species modelling. The basic steps that are performed while developing a model are described, highlighting important aspects, based on the knowledge of modelling experts. In order to illustrate the steps defined for the process, an experiment was developed, modelling the distribution of Ouratea spectabilis (Mart.) Engl. (Ochnaceae) using openModeller. 
As a consequence of the knowledge gained with this work, many desirable improvements on the modelling software packages have been identified and are presented. Also, a discussion on the potential for large-scale experimentation in ecological niche modelling is provided, highlighting opportunities for research. The results obtained are very important for those involved in the development of modelling tools and systems, for requirement analysis and to provide insight on new features and trends for this category of systems. They can also be very helpful for beginners in modelling research, who can use the process and the experiment example as a guide to this complex activity. (c) 2008 Elsevier B.V. All rights reserved.
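One of the basic algorithms implemented by niche modelling packages such as openModeller is the climate-envelope ("BIOCLIM-style") approach, which the toy sketch below reproduces: a cell is predicted suitable when every environmental variable lies within the range observed at the occurrence points. The data are invented for illustration, not from the Ouratea spectabilis experiment.

```python
# Toy envelope (BIOCLIM-style) niche model: suitability = every variable
# inside the range observed at the occurrence points. Invented data.
import numpy as np

# Environmental layers sampled at occurrence points: rows = points,
# columns = variables (e.g. mean temperature in C, annual rainfall in mm)
occurrences = np.array([[22.0, 1200.0],
                        [24.0, 1350.0],
                        [23.0, 1100.0]])

env_min = occurrences.min(axis=0)
env_max = occurrences.max(axis=0)

def suitable(cell_env):
    """Predict presence when all variables fall inside the envelope."""
    cell = np.asarray(cell_env, float)
    return bool(np.all((cell >= env_min) & (cell <= env_max)))

print(suitable([23.0, 1250.0]), suitable([30.0, 800.0]))  # True False
```

Real modelling runs add the steps the paper's reference process describes: occurrence data cleaning, layer reprojection, algorithm parameterization, model testing, and projection onto raster layers.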
Abstract:
Zinc fingers (ZnFs) are generally regarded as DNA-binding motifs. However, a number of recent reports have implicated particular ZnFs in the mediation of protein-protein interactions. The N-terminal ZnF of GATA-1 (NF) is one such finger, having been shown to interact with a number of other proteins, including the recently discovered transcriptional co-factor FOG. Here we solve the three-dimensional structure of the NF in solution using multidimensional H-1/N-15 NMR spectroscopy, and we use H-1/N-15 spin relaxation measurements to investigate its backbone dynamics. The structure consists of two distorted beta-hairpins and a single alpha-helix, and is similar to that of the C-terminal ZnF of chicken GATA-1. Comparisons of the NF structure with those of other C-4-type zinc binding motifs, including hormone receptor and LIM domains, also reveal substantial structural homology. Finally, we use the structure to map the spatial locations of NF residues shown by mutagenesis to be essential for FOG binding, and demonstrate that these residues all lie on a single face of the NF. Notably, this face is well removed from the putative DNA-binding face of the NF, an observation which is suggestive of simultaneous roles for the NF; that is, stabilisation of GATA-1 DNA complexes and recruitment of FOG to GATA-1-controlled promoter regions.
Abstract:
The tissue distribution kinetics of a highly bound solute, propranolol, was investigated in a heterogeneous organ, the isolated perfused limb, using the impulse-response technique and destructive sampling. The propranolol concentration in muscle, skin, and fat as well as in outflow perfusate was measured up to 30 min after injection. The resulting data were analysed (1) assuming that the vascular, muscle, skin and fat compartments are well mixed (compartmental model) and (2) using a distributed-in-space model which accounts for the noninstantaneous intravascular mixing and tissue distribution processes but consists only of a vascular and an extravascular phase (two-phase model). The compartmental model adequately described the propranolol concentration-time data in the three tissue compartments and the outflow concentration-time curve (except for the early mixing phase). In contrast, the two-phase model better described the outflow concentration-time curve but is limited in accounting only for the distribution kinetics in the dominant tissue, the muscle. The two-phase model well described the time course of propranolol concentration in muscle tissue, with parameter estimates similar to those obtained with the compartmental model. The results suggest, first, that the uptake kinetics of propranolol into skin and fat cannot be analysed on the basis of outflow data alone and, second, that the assumption of well-mixed compartments is a valid approximation from a practical point of view (as, e.g., in physiologically based pharmacokinetic modelling). The steady-state distribution volumes of skin and fat were only 16 and 4%, respectively, of that of muscle tissue (16.7 ml), with a higher partition coefficient in fat (6.36) than in skin (2.64) and muscle (2.79). (C) 2000 Elsevier Science B.V. All rights reserved.
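The well-mixed compartmental view can be sketched as a pair of mass-balance ODEs: a vascular compartment washed out by perfusate flow and exchanging first-order with one tissue compartment. Rate constants and volumes below are illustrative, not the fitted values from the study.

```python
# Sketch of a two-compartment (vascular + tissue) model after an impulse
# dose, integrated by forward Euler. Illustrative parameters only.

def simulate(dose=1.0, Q=1.0, Vv=1.0, k_vt=2.0, k_tv=0.5, dt=1e-3, t_end=30.0):
    """dAv/dt = -(Q/Vv)*Av - k_vt*Av + k_tv*At   (vascular amount)
       dAt/dt =  k_vt*Av - k_tv*At               (tissue amount)"""
    Av, At = dose, 0.0
    for _ in range(int(t_end / dt)):
        dAv = (-(Q / Vv) * Av - k_vt * Av + k_tv * At) * dt
        dAt = (k_vt * Av - k_tv * At) * dt
        Av, At = Av + dAv, At + dAt
    return Av, At

Av, At = simulate()
print(Av + At < 1.0)  # drug has washed out via the perfusate -> True
```

The paper's full compartmental model adds skin and fat compartments in parallel with muscle; the distributed-in-space alternative instead resolves concentration along the vascular path.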
Abstract:
Background We present a method (The CHD Prevention Model) for modelling the incidence of fatal and nonfatal coronary heart disease (CHD) within various CHD risk percentiles of an adult population. The model provides a relatively simple tool for lifetime risk prediction for subgroups within a population. It allows an estimation of the absolute primary CHD risk in different populations and will help identify subgroups of the adult population where primary CHD prevention is most appropriate and cost-effective. Methods The CHD risk distribution within the Australian population was modelled, based on the prevalence of CHD risk, individual estimates of integrated CHD risk, and current CHD mortality rates. Predicted incidence of first fatal and nonfatal myocardial infarction within CHD risk strata of the Australian population was determined. Results Approximately 25% of CHD deaths were predicted to occur amongst those in the top 10 percentiles of integrated CHD risk, regardless of age group or gender. It was found that while all-cause survival did not differ markedly between percentiles of CHD risk before the ages of around 50-60, event-free survival began to differ visibly about 5 years earlier. Conclusions The CHD Prevention Model provides a means of predicting future CHD incidence amongst various strata of integrated CHD risk within an adult population. It has significant application both in individual risk counselling and in the identification of subgroups of the population where drug therapy to reduce CHD risk is most cost-effective. J Cardiovasc Risk 8:31-37 (C) 2001 Lippincott Williams & Wilkins.
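The kind of headline result reported above (a small top-risk stratum carrying a disproportionate share of events) follows directly from a skewed risk distribution, as this toy calculation shows. The lognormal distribution and its parameters are invented, not the Australian risk data.

```python
# Toy illustration: with a skewed distribution of integrated risk, the
# top 10 risk percentiles carry a disproportionate share of expected
# events. The lognormal risk distribution is invented.
import numpy as np

rng = np.random.default_rng(42)
risk = rng.lognormal(mean=-4.0, sigma=0.8, size=100_000)  # annual event prob.

cutoff = np.quantile(risk, 0.90)          # threshold of the top decile
top_share = risk[risk >= cutoff].sum() / risk.sum()
print(0.2 < top_share < 0.5)  # far more than the 10% a uniform risk would give
```

The actual model combines measured risk-factor prevalence with integrated risk equations and observed mortality, but the concentration-of-events logic is the same.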
Abstract:
Human hypoxanthine-guanine phosphoribosyltransferase (HGPRT) catalyses the synthesis of the purine nucleoside monophosphates, IMP and GMP, by the addition of a 6-oxopurine base, either hypoxanthine or guanine, to the 1-beta-position of 5-phospho-alpha-D-ribosyl-1-pyrophosphate (PRib-PP). The mechanism is sequential, with PRib-PP binding to the free enzyme prior to the base. After the covalent reaction, pyrophosphate is released followed by the nucleoside monophosphate. A number of snapshots of the structure of this enzyme along the reaction pathway have been captured. These include the structure in the presence of the inactive purine base analogue, 7-hydroxy[4,3-d]pyrazolo pyrimidine (HPP), and PRib-PP.Mg2+, and in complex with IMP or GMP. The third structure is that of the immucillinHP.Mg2+.PPi complex, a transition-state analogue. Here, the first crystal structure of free human HGPRT is reported to 1.9 angstrom resolution, showing that significant conformational changes have to occur for the substrate(s) to bind and for catalysis to proceed. Included in these changes are relative movement of subunits within the tetramer, rotation and extension of an active-site alpha-helix (D137-D153), reorientation of key active-site residues K68, D137 and K165, and the rearrangement of three active-site loops (100-128, 165-173 and 186-196). Toxoplasma gondii HGXPRT is the only other 6-oxopurine phosphoribosyltransferase structure solved in the absence of ligands. Comparison of this structure with human HGPRT reveals significant differences in the two active sites, including the structure of the flexible loop containing K68 (human) or K79 (T. gondii). (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Background: Versutoxin (delta-ACTX-Hv1) is the major component of the venom of the Australian Blue Mountains funnel web spider, Hadronyche versuta. delta-ACTX-Hv1 produces potentially fatal neurotoxic symptoms in primates by slowing the inactivation of voltage-gated sodium channels; delta-ACTX-Hv1 is therefore a useful tool for studying sodium channel function. We have determined the three-dimensional structure of delta-ACTX-Hv1 as the first step towards understanding the molecular basis of its interaction with these channels. Results: The solution structure of delta-ACTX-Hv1, determined using NMR spectroscopy, comprises a core beta region containing a triple-stranded antiparallel beta sheet, a thumb-like extension protruding from the beta region and a C-terminal 3(10) helix that is appended to the beta domain by virtue of a disulphide bond. The beta region contains a cystine knot motif similar to that seen in other neurotoxic polypeptides. The structure shows homology with mu-agatoxin-I, a spider toxin that also modifies the inactivation kinetics of vertebrate voltage-gated sodium channels. More surprisingly, delta-ACTX-Hv1 shows both sequence and structural homology with gurmarin, a plant polypeptide. This similarity leads us to suggest that the sweet-taste suppression elicited by gurmarin may result from an interaction with one of the downstream ion channels involved in sweet-taste transduction. Conclusions: delta-ACTX-Hv1 shows no structural homology with either sea anemone or alpha-scorpion toxins, both of which also modify the inactivation kinetics of voltage-gated sodium channels by interacting with channel recognition site 3. However, we have shown that delta-ACTX-Hv1 contains charged residues that are topologically related to those implicated in the binding of sea anemone and alpha-scorpion toxins to mammalian voltage-gated sodium channels, suggesting similarities in their mode of interaction with these channels.
Abstract:
To date there have been few quantitative studies of the distribution of, and relative habitat utilisation by, koalas in the mulgalands of Queensland. To examine these parameters we applied habitat-accessibility and relative habitat-utilisation indices to estimates of faecal pellet density sampled at 149 sites across the region. Modelling the presence of pellets using logistic regression showed that the potential range of accessible habitats and relative habitat use varied greatly across the region, with rainfall being probably the most important determinant of distribution. Within that distribution, landform and rainfall were both important factors affecting habitat preference. Modelling revealed vastly different probabilities of finding a pellet under trees depending on the tree species, canopy size, and location within the region.
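The modelling step described above is a logistic regression of pellet presence/absence on site covariates. The sketch below fits one by gradient ascent on the log-likelihood, using synthetic data with a rainfall effect only; the data and coefficients are invented, not the Queensland survey results.

```python
# Toy logistic regression of pellet presence on rainfall, fitted by
# gradient ascent on the mean log-likelihood. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
rainfall = rng.uniform(200.0, 700.0, n)            # mm/year, illustrative
x = (rainfall - rainfall.mean()) / rainfall.std()  # standardised covariate
true_logit = -0.5 + 1.5 * x
y = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))  # simulated presence

X = np.column_stack([np.ones(n), x])  # intercept + rainfall
beta = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n  # gradient of mean log-likelihood

print(beta[1] > 0)  # higher rainfall -> higher pellet probability -> True
```

The study's actual models also include landform, tree species, canopy size and location as covariates, which enter the same way as additional columns of X.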
Abstract:
The power required to operate large mills is typically 5-10 MW. Hence, optimisation of power consumption will have a significant impact on overall economic performance and environmental impact. Power draw modelling results using the discrete element code PFC3D have been compared with results derived from the widely used empirical model of Morrell. This is achieved by calculating the power draw for a range of operating conditions for constant mill size and fill factor using the two modelling approaches. The discrete element modelling results show that, apart from density, selection of the appropriate material damping ratio is critical for the accuracy of modelling of the mill power draw. The relative insensitivity of the power draw to the material stiffness allows selection of moderate stiffness values, which result in acceptable computation time. The results obtained confirm that modelling of the power draw for a vertical slice of the mill, of thickness 20% of the mill length, is a reliable substitute for modelling the full mill. The power draw predictions from PFC3D show good agreement with those obtained using the empirical model. Due to its inherent flexibility, power draw modelling using PFC3D appears to be a viable and attractive alternative to empirical models where the necessary code and computing power are available.
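The post-processing step implied above (turning DEM contact forces into a power figure, then scaling a 20%-length slice to the full mill) can be sketched as power = net torque about the mill axis times angular velocity. The contact data below are random stand-ins, not PFC3D output.

```python
# Sketch: mill power draw from DEM-style liner contact data, with a
# 20%-length slice scaled to the full mill. All values are stand-ins.
import numpy as np

rng = np.random.default_rng(3)
mill_length = 5.0                    # m, illustrative
slice_length = 0.2 * mill_length     # the 20% vertical slice
omega = 2 * np.pi * 12 / 60.0        # 12 rev/min -> rad/s

# Stand-in liner contacts in the slice: radius of each contact point (m)
# and the tangential force component resisting rotation (N)
radii = rng.uniform(3.0, 3.5, 1000)
tangential_forces = rng.uniform(0.0, 5e4, 1000)

torque_slice = np.sum(radii * tangential_forces)  # N*m about the mill axis
power_full_mill = torque_slice * omega * (mill_length / slice_length)
print(power_full_mill > 0)  # True: charge resists rotation, drawing power
```

A real comparison against the Morrell model would sweep speed and fill factor and compare the resulting power curves, as the paper does.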