429 results for Cartesian
Abstract:
Previous work has shown that amplitude and direction are two independently controlled parameters of aimed arm movements, and performance, therefore, suffers when they must be decomposed into Cartesian coordinates. We now compare decomposition into different coordinate systems. Subjects pointed at visual targets in 2-D with a cursor, using a two-axis joystick or two single-axis joysticks. In the latter case, joystick axes were aligned with the subjects’ body axes, were rotated by –45°, or were oblique (i.e., one axis was in an egocentric frame and the other was rotated by –45°). Cursor direction always corresponded to joystick direction. We found that compared with the two-axis joystick, responses with single-axis joysticks were slower and less accurate when the axes were oriented egocentrically; the deficit was even more pronounced when the axes were rotated and was most pronounced when they were oblique. This confirms that decomposition of motor commands is computationally demanding and documents that this demand is lowest for egocentric, higher for rotated, and highest for oblique coordinates. We conclude that most current vehicles use computationally demanding man–machine interfaces.
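As a purely illustrative aside (not from the paper): decomposing an aimed movement into two single-axis joystick commands amounts to projecting the desired cursor displacement onto the joystick axes. The Python sketch below shows this for egocentric, rotated (both axes turned by 45°) and oblique axis pairs; the displacement values are invented.

```python
import numpy as np

def decompose(displacement, axis_angles_deg):
    """Split a desired 2-D cursor displacement into deflections of two
    single-axis joysticks whose axes point along the given angles."""
    axes = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                     for a in axis_angles_deg])        # one unit vector per joystick
    # Solve d1*u1 + d2*u2 = displacement (exact for two non-parallel axes).
    return np.linalg.solve(axes.T, np.asarray(displacement, dtype=float))

target = (3.0, 4.0)                       # desired cursor displacement
print(decompose(target, [0, 90]))         # egocentric axes: trivial split (3, 4)
print(decompose(target, [-45, 45]))       # both axes rotated by 45°
print(decompose(target, [0, -45]))        # oblique: one egocentric, one rotated axis
```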
Abstract:
This paper describes an autonomous docking system and web interface that allows long-term unaided use of a sophisticated robot by untrained web users. These systems have been applied to the biologically inspired RatSLAM system as a foundation for testing both its long-term stability and its practicality. While docking and web interface systems already exist, this system allows for a significantly larger margin of error in docking accuracy due to the mechanical design, thereby increasing robustness against navigational errors. Also, a standard vision sensor is used for both long-range and short-range docking, in contrast to the many systems that require both omni-directional cameras and high-resolution laser range finders for navigation. The web interface has been designed to accommodate the significant delays experienced on the Internet, and to facilitate the non-Cartesian operation of the RatSLAM system.
Abstract:
Visual servoing has been a viable method of robot manipulator control for more than a decade. Initial developments involved position-based visual servoing (PBVS), in which the control signal exists in Cartesian space. The younger method, image-based visual servoing (IBVS), has seen considerable development in recent years. PBVS and IBVS offer tradeoffs in performance, and neither can solve all tasks that may confront a robot. In response to these issues, several methods have been devised that partition the control scheme, allowing some motions to be performed in the manner of a PBVS system, while the remaining motions are performed using an IBVS approach. To date, there has been little research that explores the relative strengths and weaknesses of these methods. In this paper we present such an evaluation. We have chosen three recent visual servo approaches for evaluation in addition to the traditional PBVS and IBVS approaches. We posit a set of performance metrics that measure quantitatively the performance of a visual servo controller for a specific task. We then evaluate each of the candidate visual servo methods for four canonical tasks with simulations and with experiments in a robotic work cell.
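To make the IBVS half of this comparison concrete, the following sketch implements the classical image-based law v = -lambda * pinv(L) * (s - s*) with the standard point-feature interaction matrix. This is generic textbook IBVS rather than any of the specific controllers evaluated in the paper, and the feature coordinates and depths below are invented.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian of one normalised point feature (x, y) at depth Z,
    mapping camera velocity (vx, vy, vz, wx, wy, wz) to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,        -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS control law: v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Invented example: four point features, current vs. desired image positions
current = [(-0.12, -0.10), (0.11, -0.09), (0.10, 0.12), (-0.09, 0.11)]
desired = [(-0.10, -0.10), (0.10, -0.10), (0.10, 0.10), (-0.10, 0.10)]
print(ibvs_velocity(current, desired, depths=[1.0] * 4))
```

A PBVS controller would instead compute the error directly in Cartesian pose space, which is precisely the trade-off the partitioned schemes discussed above try to balance.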
Abstract:
This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest based on sensor gradient fields, or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process, which establishes the positions of sensors that are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
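For readers unfamiliar with gradient-field guidance, here is a toy sketch of the idea (assumed node data and hop-count values, not the authors' routing algorithms): each sensor node holds a value encoding its distance to the point of interest, and the robot repeatedly moves to the in-range node with the smallest value.

```python
import math

# Hypothetical sensor field: node_id -> ((x, y), hop count to the point of interest)
nodes = {
    "a": ((0.0, 0.0), 4), "b": ((10.0, 2.0), 3), "c": ((18.0, 5.0), 2),
    "d": ((25.0, 9.0), 1), "goal": ((31.0, 12.0), 0),
}

def nodes_in_range(position, radius=12.0):
    """Nodes the robot can currently hear (simple disc radio model)."""
    return [(nid, pos, hops) for nid, (pos, hops) in nodes.items()
            if math.dist(position, pos) <= radius]

def guide(position):
    """Descend the hop-count gradient until a node with value 0 is reached."""
    path = [position]
    while True:
        _, pos, hops = min(nodes_in_range(position), key=lambda n: n[2])
        path.append(pos)
        position = pos
        if hops == 0:
            return path

print(guide((1.0, 1.0)))   # waypoints from the start position to the goal node
```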
Abstract:
This paper presents a method of recovering the 6 DoF pose (Cartesian position and angular rotation) of a range sensor mounted on a mobile platform. The method utilises point targets in a local scene and optimises over the error between their absolute position and their apparent position as observed by the range sensor. The analysis includes an investigation into the sensitivity and robustness of the method. Practical results were collected using a SICK LRS2100 mounted on a P&H electric mining shovel, and the errors in the scan data are presented relative to an independent 3D scan of the scene. A comparison with directly measuring the sensor pose is presented and shows the significant accuracy improvements in scene reconstruction obtained with this pose estimation method.
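The optimisation described here can be pictured as a small nonlinear least-squares problem over the sensor pose. The sketch below is only an illustration of that formulation (my own parameterisation and synthetic data, solved with SciPy, not the authors' implementation).

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(pose, observed, absolute):
    """pose = (x, y, z, roll, pitch, yaw); residual = sensor observation
    transformed into the scene frame minus the surveyed target position."""
    t, rpy = pose[:3], pose[3:]
    R = Rotation.from_euler("xyz", rpy).as_matrix()
    return ((observed @ R.T) + t - absolute).ravel()

# Synthetic data: surveyed target positions and the same targets as the
# range sensor would observe them from an unknown pose.
rng = np.random.default_rng(0)
absolute = rng.uniform(-5.0, 5.0, size=(6, 3))
true_R = Rotation.from_euler("xyz", [0.05, -0.02, 0.4]).as_matrix()
true_t = np.array([1.0, -2.0, 0.5])
observed = (absolute - true_t) @ true_R       # targets in the sensor frame

result = least_squares(residuals, x0=np.zeros(6), args=(observed, absolute))
print(result.x)   # recovered (x, y, z, roll, pitch, yaw)
```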
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study the tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scattering and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern’s local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation,
while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was the lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric has shown better sensitivity under low pattern disturbance and has also improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was proposed to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
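The Cartesian-to-polar metric described above can be sketched roughly as follows; this is an illustrative reimplementation with assumed parameters (ring sampling, block size, and the block statistic itself), not the thesis code.

```python
import numpy as np

def polar_block_metric(image, centre, n_radii=64, n_angles=256, block=(8, 32)):
    """Resample a Placido-disk image from Cartesian to polar coordinates so the
    concentric rings become quasi-straight lines, then summarise per-block
    intensity statistics as a crude surface-quality value."""
    cy, cx = centre
    radii = np.linspace(1, min(image.shape) // 2 - 1, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip((cy + rr * np.sin(aa)).round().astype(int), 0, image.shape[0] - 1)
    xs = np.clip((cx + rr * np.cos(aa)).round().astype(int), 0, image.shape[1] - 1)
    polar = image[ys, xs]                         # rows = radius, columns = angle
    bh, bw = block
    blocks = polar[:n_radii // bh * bh, :n_angles // bw * bw]
    blocks = blocks.reshape(n_radii // bh, bh, n_angles // bw, bw)
    # One possible block statistic: intensity spread within each block, averaged.
    return blocks.std(axis=(1, 3)).mean()

# Synthetic concentric-ring pattern, for illustration only
y, x = np.mgrid[0:256, 0:256]
rings = (np.sin(np.hypot(y - 128, x - 128)) > 0).astype(float)
print(polar_block_metric(rings, centre=(128, 128)))
```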
Abstract:
We treat two related moving boundary problems. The first is the ill-posed Stefan problem for melting a superheated solid in one Cartesian coordinate. Mathematically, this is the same problem as that for freezing a supercooled liquid, with applications to crystal growth. By applying a front-fixing technique with finite differences, we reproduce existing numerical results in the literature, concentrating on solutions that break down in finite time. This sort of finite-time blow-up is characterised by the speed of the moving boundary becoming unbounded in the blow-up limit. The second problem, which is an extension of the first, is proposed to simulate aspects of a particular two-phase Stefan problem with surface tension. We study this novel moving boundary problem numerically, and provide results that support the hypothesis that it exhibits a similar type of finite-time blow-up as the more complicated two-phase problem. The results are unusual in the sense that it appears the addition of surface tension transforms a well-posed problem into an ill-posed one.
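For orientation, a textbook one-phase version of the problem and the front-fixing change of variables might be written as below; the exact system, signs and scalings studied in the paper may differ.

```latex
% One-phase Stefan problem on 0 < x < s(t), with moving boundary x = s(t):
%   u_t = u_{xx},          0 < x < s(t),
%   u(s(t), t) = 0,        \dot{s}(t) = -\beta \, u_x(s(t), t).
% Front-fixing (Landau) transformation: \xi = x / s(t) maps the domain to the
% fixed interval 0 < \xi < 1; writing u(\xi, t) for the transformed temperature,
\[
  u_t = \frac{1}{s^2}\, u_{\xi\xi} + \frac{\xi \dot{s}}{s}\, u_\xi ,
  \qquad 0 < \xi < 1,
  \qquad \dot{s} = -\frac{\beta}{s}\, u_\xi \big|_{\xi = 1},
\]
% which can then be discretised with finite differences on the fixed \xi-grid,
% with blow-up signalled by \dot{s} becoming unbounded in finite time.
```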
Abstract:
Everything (2008) is a looped 3 channel digital video (extracted from a 3D computer animation) that appropriates a range of media including photography, drawing, painting, and pre-shot video. The work departs from traditional time-based video, which is generally based on a recording of an external event. Instead, “Everything” constructs an event and space more like a painting or drawing might. The work combines constructed events (including space, combinations of objects, and aesthetic relationship of forms) with pre-recorded video footage and pre-made paintings and drawings. The result is a montage of objects, images – both still and moving – and abstracted ‘painterly’ gestures. This technique creates a complex temporal displacement. 'Past' refers to pre-recorded media such as painting and photography, and 'future' refers to a possible virtual space not in the present, that these objects may occupy together. Through this simultaneity between the real and the virtual, the work comments on a disembodied sense of space and time, while also puncturing the virtual with a sense of materiality through the tactility of drawing and painting forms and processes. In so doing, the work challenges the perspectival Cartesian space synonymous with the virtual. In this work the disembodied wandering virtual eye is met with an uncanny combination of scenes, where scale and the relationships between objects are disrupted and changed. Everything is one of the first international examples of 3D animation technology being utilised in contemporary art. The work won the inaugural $75,000 Premier of Queensland National New Media Art Award and was subsequently acquired by the Queensland Art Gallery. The work has been exhibited and reviewed nationally and internationally.
Abstract:
Tabernacle is an experimental game world-building project which explores the relationship between the map and the 3-dimensional visualisation enabled by high-end game engines. The project is named after the 6th century tabernacle maps of Cosmas Indicopleustes in his Christian Topography. These maps articulate a cultural or metaphoric, rather than measured view of the world, contravening Alper's distinction, which observes that “maps are measurement, art is experience”. The project builds on previous research into the use of game engines and 3D navigable representation to enable cultural experience, particularly non-Western cultural experiences and ways of seeing. Like the earlier research, Tabernacle highlights the problematic disjuncture between the modern Cartesian map structures of the engine and the mapping traditions of non-Western cultures. Tabernacle represents a practice-based research provocation. The project exposes assumptions about the maps which underpin 3D game worlds, and the autocratic tendencies of world construction software. This research is of critical importance as game engines and simulation technologies are becoming more popular in the recreation of culture and history. A key learning from the Tabernacle project concerned the ways in which available game engines – technologies with roots in the Enlightenment – constrained the team’s ability to represent a very different culture with a different conceptualisation of space and maps. Understanding the cultural legacies of the software itself is critical as we are tempted by the opportunities for representation of culture and history that they seem to offer. The project was presented at Perth Digital Arts and Culture in 2007 and reiterated using a different game engine in 2009. Further reflections were discussed in a conference paper presented at OZCHI 2009 and a peer-reviewed journal article, and insights gained from the experience continue to inform the author’s research.
Abstract:
The extraordinary event, for Deleuze, is the object becoming subject – not in the manner of an abstract formulation, such as the substitution of one ideational representation for another but, rather, in the introduction of a vast, new, impersonal plane of subjectivity, populated by object processes and physical phenomena that in Deleuze’s discovery will be shown to constitute their own subjectivities. Deleuze’s polemic of subjectivity (the refusal of the Cartesian subject and the transcendental ego of Husserl) – long attempted by other thinkers – is unique precisely because it heralds the dawning of a new species of objecthood that will qualify as its own peculiar subjectivity. A survey of Deleuze’s early work on subjectivity, Empirisme et subjectivité (Deleuze 1953), Le Bergsonisme (Deleuze 1968), and Logique du sens (Deleuze 1969), brings the architectural reader into a peculiar confrontation with what Deleuze calls the ‘new transcendental field’, the field of subject-producing effects, which for the philosopher takes the place of both the classical and modern subject. Deleuze’s theory of consciousness and perception is premised on the critique of Husserlian phenomenology; and ipso facto his question is an architectural problematic, even if the name ‘architecture’ is not invoked...
Abstract:
Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjo et al., 1999). Their calculation requires use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in Cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m x 3 m x 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process. The process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula: PIL_forced = L_FIL x (1 - exp(-PIL/PIL_0)), where PIL_0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6, 5.75, and 12.6 % were found for incident photon energies of 0.1, 1.0, and 10.0 MeV respectively.
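A minimal sketch of this forced-interaction sampling in generic Python (not the Geant4 C++ application; the mean free paths are placeholders, and PIL_0 is taken here as each process's mean free path, which the abstract only describes as a constant):

```python
import math
import random

def sample_forced_interaction(mfp_by_process, L_fil, rng=random):
    """Pick which photon process occurs, forcing the interaction to lie
    within L_fil while preserving the competition between processes.

    For each process, sample an unforced PIL from its mean free path PIL_0,
    then scale it:  PIL_forced = L_fil * (1 - exp(-PIL / PIL_0)),
    so every forced length falls inside (0, L_fil); the smallest wins."""
    forced = {}
    for process, pil0 in mfp_by_process.items():
        pil = rng.expovariate(1.0 / pil0)          # unforced sampling
        forced[process] = L_fil * (1.0 - math.exp(-pil / pil0))
    winner = min(forced, key=forced.get)
    return winner, forced[winner]

# Placeholder mean free paths in cm -- not real cross-section data
mfp = {"compton": 20.0, "photoelectric": 300.0, "pair": 1e6, "rayleigh": 150.0}
print(sample_forced_interaction(mfp, L_fil=10.0))
```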
Abstract:
We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form, present detailed simulation results for an aerial robot scenario using a spherical camera and a wide-angle perspective camera, and present experimental results for a mobile ground robot.
Abstract:
In Social Science (Organization Studies, Economics, Management Science, Strategy, International Relations, Political Science…) the quest for addressing the question “what is a good practitioner?” has been around for centuries, with the underlying assumption that good practitioners should lead organizations to higher levels of performance. Hence to ask “what is a good ‘captain’?” is not a new question, we should add! (e.g. Tsoukas & Cummings, 1997, p. 670; Söderlund, 2004, p. 190). This interrogation leads us to consider problems such as the relations between the dichotomies of Theory and Practice, rigor and relevance of research, and ways of knowing and knowledge forms. On the one hand we face the “Enlightenment” assumptions underlying modern positivist Social science, grounded in the “unity-of-science dream of transforming and reducing all kinds of knowledge to one basic form and level” and cause-effect relationships (Eikeland, 2012, p. 20), and on the other, the postmodern interpretivist proposal, with its “tendency to make all kinds of knowing equivalent” (Eikeland, 2012, p. 20). In the project management space, this aims at addressing one of the fundamental problems in the field: projects still do not deliver their expected benefits and promises, and therefore the socio-economic good (Hodgson & Cicmil, 2007; Bredillet, 2010; Lalonde et al., 2012). The Cartesian tradition supporting project research and practice for the last 60 years (Bredillet, 2010, p. 4) has led to the lack of relevance to practice of the current conceptual base of project management, despite the sum of research, the development of standards, best and good practices, and the related development of project management bodies of knowledge (Packendorff, 1995, p. 319-323; Cicmil & Hodgson, 2006, p. 2–6; Hodgson & Cicmil, 2007, p. 436–7; Winter et al., 2006, p. 638). Referring to both Hodgson (2002) and Giddens (1993), we could say that those who expect a “social-scientific Newton” to revolutionize this young field “are not only waiting for a train that will not arrive, but are in the wrong station altogether” (Hodgson, 2002, p. 809; Giddens, 1993, p. 18). Meanwhile, in the postmodern stream, mainly rooted in the “practice turn” (e.g. Hällgren & Lindahl, 2012), the shift from methodological individualism to social viscosity and the advocated pluralism tend to reinforce the very “functional stupidity” (Alvesson & Spicer, 2012, p. 1194) that this postmodern stream aims at overcoming. We suggest here that addressing the question “what is a good PM?” requires a philosophy of practice perspective to complement the “usual” philosophy of science perspective. The questioning of the modern Cartesian tradition mirrors a similar one made within Social science (Say, 1964; Koontz, 1961, 1980; Menger, 1985; Warry, 1992; Rothbard, 1997a; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Boisot & McKelvey, 2010), calling for new thinking. In order to get outside the rationalist ‘box’, Toulmin (1990, p. 11), along with Tsoukas & Cummings (1997, p. 655), suggests a possible path, summarizing the thoughts of many authors: “It can cling to the discredited research program of the purely theoretical (i.e.
“modern”) philosophy, which will end up by driving it out of business: it can look for new and less exclusively theoretical ways of working, and develop the methods needed for a more practical (“post-modern”) agenda; or it can return to its pre-17th century traditions, and try to recover the lost (“pre-modern”) topics that were side-tracked by Descartes, but can be usefully taken up for the future” (Toulmin, 1990, p. 11). Thus, paradoxically and interestingly, in their quest for the so-called post-modernism, many authors build on “pre-modern” philosophies such as the Aristotelian one (e.g. MacIntyre, 1985, 2007; Tsoukas & Cummings, 1997; Flyvbjerg, 2001; Blomquist et al., 2010; Lalonde et al., 2012). It is perhaps because the post-modern stream emphasizes a dialogic process restricted to reliance on voice and textual representation that it limits the meaning of communicative praxis and weakens the practice, turning attention away from more fundamental issues associated with problem-definition and knowledge-for-use in action (Tedlock, 1983, p. 332–4; Schrag, 1986, p. 30, 46–7; Warry, 1992, p. 157). Eikeland suggests that the Aristotelian “gnoseology allows for reconsidering and reintegrating ways of knowing: traditional, practical, tacit, emotional, experiential, intuitive, etc., marginalised and considered insufficient by modernist [and post-modernist] thinking” (Eikeland, 2012, p. 20–21). By contrast with the modernist one-dimensional thinking and relativist and pluralistic post-modernism, we suggest, in a turn to an Aristotelian pre-modern lens, to re-conceptualise (“re” involving here a “re”-turn to pre-modern thinking) the “do” and to shift the perspective from what a good PM is (philosophy of science lens) to what a good PM does (philosophy of practice lens) (Aristotle, 1926a). As Tsoukas & Cummings put it: “In the Aristotelian tradition to call something good is to make a factual statement. To ask, for example, ‘what is a good captain?’ is not to come up with a list of attributes that good captains share (as modern contingency theorists would have it), but to point out the things that those who are recognized as good captains do.” (Tsoukas & Cummings, 1997, p. 670) Thus, this conversation offers a dialogue and deliberation about a central question: What does a good project manager do? The conversation is organized around a critique of the underlying assumptions supporting the modern, post-modern and pre-modern relations to ways of knowing, forms of knowledge and “practice”.
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines to cope with the uncertainty and trust of our modern worlds. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” to bring simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole. They miss the interactions and interdependencies with other parts, leading to “suboptimization”. Furthermore, classical cause-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting the traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as support for reflexion. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place due to a general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded on a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” for practitioners, nor to suggest a research gap to scholars. Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44), generating what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is that we suggest moving from a neo-classical “theory of the game”, aiming at making the complex world simpler in playing the game, to a “theory of the rules of the game”, aiming at influencing and challenging the rules of the game constitutive of maturity models – conventions, governing systems – which make individual calculation and social context compatible, and make possible the coordination of relationships and cooperation between agents with potentially divergent interests and values. A second contribution is the reconceptualization of maturity as structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the ‘right’ action, i.e. the action with the best possible improvement of the detector.
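For readers unfamiliar with the representation, a toy CGP genotype and its evaluation can be sketched as below (arithmetic functions and a single-row grid, purely to illustrate the encoding; the paper's image-based detectors are not reproduced here).

```python
import random
import operator

# Each CGP node is (function gene, input gene a, input gene b); connection genes
# may point at program inputs or at any earlier node, giving a feed-forward graph.
FUNCTIONS = [operator.add, operator.sub, operator.mul,
             lambda a, b: a / b if abs(b) > 1e-9 else 1.0]

def random_genotype(n_inputs, n_nodes, rng=random):
    genotype = []
    for i in range(n_nodes):
        f = rng.randrange(len(FUNCTIONS))
        a = rng.randrange(n_inputs + i)        # connect to any earlier column or input
        b = rng.randrange(n_inputs + i)
        genotype.append((f, a, b))
    output_gene = rng.randrange(n_inputs + n_nodes)
    return genotype, output_gene

def evaluate(genotype, output_gene, inputs):
    values = list(inputs)
    for f, a, b in genotype:
        values.append(FUNCTIONS[f](values[a], values[b]))
    return values[output_gene]

genotype, out = random_genotype(n_inputs=2, n_nodes=10)
print(evaluate(genotype, out, inputs=(3.0, 4.0)))
```

In the paper's setting, fitness would presumably reflect detection performance on the images collected around each manipulation action rather than an arithmetic target.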