80 results for Curvilinear coordinates.
Abstract:
Previous work has shown that amplitude and direction are two independently controlled parameters of aimed arm movements, and performance, therefore, suffers when they must be decomposed into Cartesian coordinates. We now compare decomposition into different coordinate systems. Subjects pointed at visual targets in 2-D with a cursor, using a two-axis joystick or two single-axis joysticks. In the latter case, joystick axes were aligned with the subjects’ body axes, were rotated by –45°, or were oblique (i.e., one axis was in an egocentric frame and the other was rotated by –45°). Cursor direction always corresponded to joystick direction. We found that compared with the two-axis joystick, responses with single-axis joysticks were slower and less accurate when the axes were oriented egocentrically; the deficit was even more pronounced when the axes were rotated and was most pronounced when they were oblique. This confirms that decomposition of motor commands is computationally demanding and documents that this demand is lowest for egocentric, higher for rotated, and highest for oblique coordinates. We conclude that most current vehicles use computationally demanding man–machine interfaces.
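The decomposition the subjects had to perform can be pictured as solving for two joystick commands given the axis directions of each condition. Below is a minimal Python sketch, assuming unit axis vectors and illustrative angles; it is not taken from the paper.

```python
import numpy as np

# Hypothetical illustration: solve target = c1*u1 + c2*u2 for the two
# joystick commands c1, c2, where u1 and u2 are unit axis vectors.

def decompose(target, axis1_deg, axis2_deg):
    u1 = np.array([np.cos(np.radians(axis1_deg)), np.sin(np.radians(axis1_deg))])
    u2 = np.array([np.cos(np.radians(axis2_deg)), np.sin(np.radians(axis2_deg))])
    return np.linalg.solve(np.column_stack([u1, u2]), target)

target = np.array([1.0, 0.5])      # desired cursor displacement
print(decompose(target, 0, 90))    # egocentric axes (aligned with body axes)
print(decompose(target, -45, 45))  # both axes rotated by -45 deg
print(decompose(target, 0, -45))   # oblique: one egocentric, one rotated
```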
Abstract:
This thesis concerns the derivation of the addition law on an arbitrary elliptic curve and the efficient addition of points on such a curve using the derived law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. Studied intensively by 19th-century mathematicians, the theory of elliptic curves has been an active field for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting in the late 1990s, the emergence of the ECC market has boosted research into the computational aspects of elliptic curves. This thesis falls into the same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve; this step also includes an investigation of the capabilities of recent computer algebra packages. Although the group law is unique, its evaluation can be performed using abundant (in fact, infinitely many) formulae. As the second step, this thesis advances the search for the best formulae for efficient point addition. In the third step, the group law is stated explicitly by handling all possible summands. The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are as follows:
(a) short Weierstrass form, y^2 = x^3 + ax + b;
(b) extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1;
(c) twisted Hessian form, ax^3 + y^3 + 1 = dxy;
(d) twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2;
(e) twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1.
These forms are the most promising candidates for efficient computation and are therefore considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis.
- Related literature results are brought together and revisited. In most cases, several previously missed formulae, algorithms, and efficient point representations are discovered.
- Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs, as long as the output is also an affine point, in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion; this thesis handles all of the possibilities.
- Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speed of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms is improved. New unified addition formulae are proposed for the short Weierstrass form, and new coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and plain C. The practical advantages of the proposed algorithms are supported by computer experiments.
- All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
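As a concrete illustration of the kind of operation the thesis optimizes, here is a minimal Python sketch of textbook affine point addition on a short Weierstrass curve over a prime field. The curve parameters are illustrative, and this is not one of the thesis's optimized algorithms.

```python
# Textbook affine addition on y^2 = x^3 + ax + b over F_p (Python 3.8+ for
# modular inverse via pow). The point at infinity is represented as None.

def ec_add(P, Q, a, p):
    """Add two affine points on y^2 = x^3 + ax + b (mod p)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p  # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Toy example on y^2 = x^3 + 2x + 3 over F_97 (parameters illustrative only).
P = (3, 6)
print(ec_add(P, P, a=2, p=97))  # -> (80, 10)
```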
Abstract:
This paper presents a formulation of image-based visual servoing (IBVS) for a spherical camera where coordinates are parameterized in terms of colatitude and longitude: IBVSSph. The image Jacobian is derived and simulation results are presented for canonical rotational, translational as well as general motion. Problems with large rotations that affect the planar perspective form of IBVS are not present on the sphere, whereas the desirable robustness properties of IBVS are shown to be retained. We also describe a structure from motion (SfM) system based on camera-centric spherical coordinates and show how a recursive estimator can be used to recover structure. The spherical formulations for IBVS and SfM are particularly suitable for platforms, such as aerial and underwater robots, that move in SE(3).
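A minimal sketch of the spherical parameterization used here, assuming a camera-centric frame with the polar axis along z; the convention and names are illustrative, not the authors' code.

```python
import numpy as np

def spherical_features(X):
    """Map a 3D point X = (x, y, z) to (colatitude, longitude)."""
    x, y, z = X / np.linalg.norm(X)      # project onto the unit sphere
    colatitude = np.arccos(z)            # 0 at the pole, pi at the antipode
    longitude = np.arctan2(y, x)
    return colatitude, longitude

print(spherical_features(np.array([1.0, 1.0, 1.0])))
```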
Abstract:
This paper introduces the application of a sensor network to the navigation of a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest, based on sensor gradient fields or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process that establishes the positions of sensors which are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
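A minimal sketch of the gradient-guidance idea, assuming each node stores an integer hop-count "gradient" to the point of interest; the paper's actual distributed algorithms and routing techniques are more involved.

```python
def next_waypoint(current_node, neighbors, gradient):
    """Move toward the neighbor with the smallest gradient value."""
    best = min(neighbors[current_node], key=lambda n: gradient[n])
    return best if gradient[best] < gradient[current_node] else current_node

# Toy 4-node network: node 3 is the point of interest (gradient 0).
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
gradient = {0: 2, 1: 1, 2: 1, 3: 0}
node = 0
while gradient[node] > 0:
    node = next_waypoint(node, neighbors, gradient)
    print("guided to node", node)
```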
Abstract:
This paper proposes a train movement model with fixed runtime that can be employed to find feasible control strategies for a single train along an inter-city railway line. The objective of the model is to minimize arrival delays at each station along the line. However, train movement is a typical nonlinear problem involving complex running environments and varying requirements. A heuristic algorithm is developed to solve the problem, and the simulation results show that the train can overcome disturbances caused by delays and coordinate its operation strategies to ensure punctual arrival at the destination. The developed algorithm can also be used to evaluate the running reliability of trains under scheduled timetables.
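The abstract does not specify the heuristic, but the underlying feedback idea can be sketched: after a disturbance, recompute a target speed from the remaining distance and remaining scheduled runtime, capped by the line speed limit. This is a hypothetical illustration, not the paper's algorithm.

```python
def target_speed(dist_remaining_m, time_remaining_s, v_max_ms):
    """Speed needed to arrive on schedule, capped at the line speed limit."""
    if time_remaining_s <= 0:
        return v_max_ms  # already late: run at the speed limit
    return min(dist_remaining_m / time_remaining_s, v_max_ms)

# 20 km to go, 15 min left in the schedule, 120 km/h limit.
print(target_speed(20_000, 900, 120 / 3.6))  # -> ~22.2 m/s (80 km/h)
```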
Abstract:
Many cities worldwide face the prospect of major transformation as the world moves towards a global information order. In this new era, urban economies are being radically altered by dynamic processes of economic and spatial restructuring. The result is the creation of 'informational cities' or, by their newer and more popular name, 'knowledge cities'. For the last two centuries, social production was primarily understood and shaped by neo-classical economic thought that recognized only three factors of production: land, labor and capital. Knowledge, education, and intellectual capacity were secondary, if not incidental, factors. Human capital was assumed to be either embedded in labor or just one of numerous categories of capital. In recent decades, it has become apparent that knowledge is sufficiently important to deserve recognition as a fourth factor of production. Knowledge and information, and the social and technological settings for their production and communication, are now seen as keys to development and economic prosperity. The rise of knowledge-based opportunity has, in many cases, been accompanied by a concomitant decline in traditional industrial activity. The replacement of physical commodity production by more abstract forms of production (e.g. information, ideas, and knowledge) has, however paradoxically, reinforced the importance of central places and led to the formation of knowledge cities. Knowledge is produced, marketed and exchanged mainly in cities. Therefore, knowledge cities aim to assist decision-makers in making their cities compatible with the knowledge economy and thus able to compete with other cities. Knowledge cities enable their citizens to foster knowledge creation, knowledge exchange and innovation. They also encourage the continuous creation, sharing, evaluation, renewal and updating of knowledge. To compete nationally and internationally, cities need knowledge infrastructures (e.g. universities, research and development institutes); a concentration of well-educated people; technological, mainly electronic, infrastructure; and connections to the global economy (e.g. international companies and finance institutions for trade and investment). Moreover, they must possess the people and things necessary for the production of knowledge and, as importantly, function as breeding grounds for talent and innovation. The economy of a knowledge city creates high value-added products using research, technology, and brainpower. The private and public sectors value knowledge, spend money on its discovery and dissemination and, ultimately, harness it to create goods and services. Although many cities call themselves knowledge cities, currently only a few cities around the world (e.g. Barcelona, Delft, Dublin, Montreal, Munich, and Stockholm) have earned that label. Many other cities aspire to the status of knowledge city through urban development programs that target knowledge-based urban development; examples include Copenhagen, Dubai, Manchester, Melbourne, Monterrey, Singapore, and Shanghai.
Knowledge-Based Urban Development
To date, the development of most knowledge cities has proceeded organically as a dependent and derivative effect of global market forces. Urban and regional planning has responded slowly, and sometimes not at all, to the challenges and opportunities of the knowledge city. That is changing, however. Knowledge-based urban development potentially brings both economic prosperity and a sustainable socio-spatial order.
Its goal is to produce and circulate abstract work. The globalization of the world in the last decades of the twentieth century was a dialectical process. On one hand, as the tyranny of distance was eroded, economic networks of production and consumption were constituted at a global scale. At the same time, spatial proximity remained as important as ever, if not more so, for knowledge-based urban development. Mediated by information and communication technology, personal contact, and the medium of tacit knowledge, organizational and institutional interactions are still closely associated with spatial proximity. The clustering of knowledge production is essential for fostering innovation and wealth creation. The social benefits of knowledge-based urban development extend beyond aggregate economic growth. On the one hand is the possibility of a particularly resilient form of urban development, secured in a network of connections anchored at local, national, and global coordinates. On the other hand, quality of place and life, defined by the level of public service (e.g. health and education) and by the conservation and development of the cultural, aesthetic and ecological values that give cities their character and attract or repel the creative class of knowledge workers, is a prerequisite for successful knowledge-based urban development. The goal is a secure economy in a human setting: in short, smart growth or sustainable urban development.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area-based, transform-based, feature-based, phase-based, hybrid, relaxation-based, dynamic programming and object-space methods. A number of area-based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, it was also the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion while also having low computational complexity. They are therefore prime candidates for a matching algorithm for a real-time stereo sensor for mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare the disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms in order to improve robustness.
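A minimal Python sketch of two of the matching costs compared above, SAD and the census transform with a Hamming-distance cost, assuming small grayscale patches of equal size; real-time implementations are far more optimized.

```python
import numpy as np

def sad(patch_left, patch_right):
    """Sum of Absolute Differences: cheap but sensitive to radiometric distortion."""
    return np.sum(np.abs(patch_left.astype(int) - patch_right.astype(int)))

def census(patch):
    """Census transform: bit string of (neighbor < center) comparisons,
    robust to monotonic intensity changes."""
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch < center).flatten()

def census_cost(patch_left, patch_right):
    """Hamming distance between the two census bit strings."""
    return np.count_nonzero(census(patch_left) != census(patch_right))

rng = np.random.default_rng(0)
L = rng.integers(0, 256, (5, 5))
R = np.clip(L + 40, 0, 255)  # radiometric offset: SAD inflates, census barely changes
print(sad(L, R), census_cost(L, R))
```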
Abstract:
Maps are used to represent three-dimensional space and are integral to a range of everyday experiences. They are increasingly used in mathematics, being prominent both in school curricula and as a form of assessing students' understanding of mathematical ideas. In order to successfully interpret maps, students need to understand that maps represent space, have their own perspective and scale, and use their own set of symbols and texts. Despite the increased prevalence of maps in society and school, there is evidence to suggest that students have difficulty interpreting them. This study investigated 43 primary-aged students' (aged 9-12 years) verbal and gestural behaviours as they engaged with and solved map tasks. Within a multiliteracies framework that focuses on spatial, visual, linguistic, and gestural elements, the study investigated how students interpret map tasks. Specifically, the study sought to understand the skills and approaches students used to solve map tasks and the gestural behaviours they utilised as they engaged with them. The investigation was undertaken using the Knowledge Discovery in Data (KDD) design. The design of this study capitalised on existing research data to carry out a more detailed analysis of students' interpretation of map tasks. Video data from an existing data set was reorganised according to two distinct episodes—Task Solution and Task Explanation—and analysed within the multiliteracies framework. Content Analysis was used with these data, and through anticipatory data reduction techniques, patterns of behaviour were identified in relation to each specific map task by looking at task solution, task correctness and gesture use. The findings of this study revealed that students had a relatively sound understanding of general mapping knowledge, such as identifying landmarks and using keys, compass points and coordinates. However, their understanding of mathematical concepts pertinent to map tasks, including location, direction, and movement, was less developed. Successful students were able to interpret the map tasks and apply relevant mathematical understanding to navigate the spatial demands of the tasks, while unsuccessful students were only able to interpret and understand basic map conventions. In terms of gesture use, the more difficult the task, the more likely students were to exhibit gestural behaviours to solve it. The most common form of gestural behaviour was deictic, that is, a pointing gesture. Deictic gestures not only aided the students' capacity to explain how they solved the map tasks but were also a tool that assisted them in navigating and monitoring their spatial movements when solving the tasks. A number of implications for theory, learning and teaching, and test and curriculum design arise from the study. From a theoretical perspective, the findings suggest that gesturing is an important element of multimodal engagement in mapping tasks. In terms of teaching and learning, implications include the need for students to utilise gesturing techniques when first faced with new or novel map tasks. As students become more proficient in solving such tasks, they should be encouraged to move beyond a reliance on gesture use in order to progress to more sophisticated understandings of map tasks. Additionally, teachers need to provide students with opportunities to interpret and attend to multiple modes of information when interpreting map tasks.
Abstract:
We present three competing predictions of the organizational gender diversity-performance relationship: a positive linear prediction, a negative linear prediction, and an inverted U-shaped curvilinear prediction. The paper also proposes a moderating effect of industry type (services vs. manufacturing). The predictions were tested using archival quantitative data with a longitudinal design. The results show partial support for the positive linear and inverted U-shaped curvilinear predictions as well as for the proposed moderating effect of industry type. The results help reconcile the inconsistent findings of past research. The findings also show that industry context can strengthen or weaken gender diversity effects.
Abstract:
Research on workforce diversity at the organisational level gained momentum in the 1990s, because of the growing trend in HR research to link HR practices with organisational performance. The new parallel wave of research focused on the business case for diversity, in which diversity was linked to organisational performance. However, the results of these studies, mainly focusing on linear diversity-performance relationships, have been inconsistent. Based on contrasting theories, this paper proposes three competing predictions of the gender diversity-performance relationship at the organisational level: a positive linear relationship derived from the resource-based view of the firm, a negative linear relationship derived from self-categorisation and social identity theories, and a U-shaped curvilinear relationship derived from the integration of the resource-based view of the firm with self-categorisation and social identity theories. The U-shaped relationship accounts for the inconsistent findings in past research, because different proportions of men and women produce different social dynamics that have different effects on organisational performance. Further, the proposed U-shaped relationship can have different slopes in the manufacturing and services industries. The paper contributes to the field of diversity by strengthening its weak theoretical foundations and by highlighting the industry differences.
Abstract:
A solvothermal route for the preparation of crystalline lithium niobate from Li2CO3 and Nb2O5 is developed. Oxalic acid is employed as the solvent; it coordinates with niobium oxide to stimulate the main reaction. Scanning electron microscopy images show that the as-prepared sample displays a cubic morphology. X-ray diffraction and the IR spectrum of the as-prepared sample indicate that the sample is well crystallized.
Abstract:
To analyse mechanotransduction resulting from tensile loading under defined conditions, various devices for in vitro cell stimulation have been developed. This work aimed to determine the strain distribution on the membrane of a commercially available device and its consistency with rising cycle numbers, as well as the amount of strain transferred to adherent cells. The strains and their behaviour within the stimulation device were determined using digital image correlation (DIC). The strain transferred to cells was measured on eGFP-transfected bone marrow-derived cells imaged with a fluorescence microscope. The analysis was performed by determining the coordinates of prominent positions on the cells, calculating vectors between the coordinates and their length changes with increasing applied tensile strain. The stimulation device was found to apply homogeneous (mean of standard deviations approx. 2% of mean strain) and reproducible strains in the central well area. However, on average, only half of the applied strain was transferred to the bone marrow-derived cells. Furthermore, the strain measured within the device increased significantly with an increasing number of cycles while the membrane's Young's modulus decreased, indicating permanent changes in the material during extended use. Thus, strain magnitudes do not match the system readout and results require careful interpretation, especially at high cycle numbers.
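The vector-length analysis described above can be sketched as follows, with illustrative coordinates: engineering strain is the relative length change of a segment between tracked landmarks on a cell.

```python
import numpy as np

def strain_between(p0_before, p1_before, p0_after, p1_after):
    """Engineering strain of the segment between two tracked points."""
    l0 = np.linalg.norm(np.subtract(p1_before, p0_before))
    l1 = np.linalg.norm(np.subtract(p1_after, p0_after))
    return (l1 - l0) / l0

# Two cell landmarks under 8% applied membrane strain; roughly half is
# transferred to the cell (values illustrative).
print(strain_between((0, 0), (10.0, 0), (0, 0), (10.4, 0)))  # -> 0.04
```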
Abstract:
Empirical findings on the link between gender diversity and performance have been inconsistent. This paper presents three competing predictions of the organizational gender diversity-performance relationship: a positive linear prediction derived from the resource-based view of the firm, a negative linear prediction derived from self-categorization and social identity theories, and an inverted U-shaped curvilinear prediction derived from the integration of the resource-based view of the firm with self-categorization and social identity theories. This paper also proposes a moderating effect of industry type (services vs. manufacturing) on the gender diversity-performance relationship. The predictions were tested in publicly listed Australian organizations using archival quantitative data with a longitudinal research design. The results show partial support for the positive linear and inverted U-shaped curvilinear predictions as well as for the proposed moderating effect of industry type. The curvilinear relationship indicates that different proportions of organizational gender diversity have different effects on organizational performance, which may be attributed to different dynamics as suggested by the resource-based view and self-categorization and social identity theories. The results help reconcile the inconsistent findings of past research that focused on the linear gender diversity-performance relationship. The findings also show that industry context can strengthen or weaken the effects of organizational gender diversity on performance.
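An inverted U-shaped prediction of this kind is typically tested by adding a quadratic diversity term to the performance regression; a negative, significant quadratic coefficient is consistent with an inverted U. A minimal sketch with synthetic data (not the study's data or model):

```python
import numpy as np

rng = np.random.default_rng(0)
diversity = rng.uniform(0, 1, 200)               # proportion of women, 0..1
performance = 4 * diversity * (1 - diversity) + rng.normal(0, 0.1, 200)

# Fit performance = b2*diversity^2 + b1*diversity + b0.
b2, b1, b0 = np.polyfit(diversity, performance, 2)
print(f"quadratic term: {b2:.2f}")  # negative -> inverted U shape
```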
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify changes in the reflected pattern and to extract a time series estimate of TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, within which a metric of TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing tear break-up time (TBUT).
The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
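The Cartesian-to-polar transformation step can be sketched as follows, with illustrative image and grid sizes; this is not the thesis's code, only the underlying resampling idea.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=128, n_theta=360):
    """Resample an image so concentric rings become quasi-straight lines."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cx, cy), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    ys = cy + R * np.sin(T)
    xs = cx + R * np.cos(T)
    return map_coordinates(img, [ys, xs], order=1)  # rows = radius, cols = angle

# Synthetic concentric-ring image: rings map to horizontal bands.
y, x = np.mgrid[:256, :256]
rings = (np.sin(np.hypot(y - 127.5, x - 127.5) / 4.0) > 0).astype(float)
print(to_polar(rings).shape)  # -> (128, 360)
```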
Abstract:
Single particle analysis (SPA) coupled with high-resolution electron cryo-microscopy is emerging as a powerful technique for the structure determination of membrane protein complexes and soluble macromolecular assemblies. Current estimates suggest that ∼10^4–10^5 particle projections are required to attain a 3 Å resolution 3D reconstruction (symmetry dependent). Selecting this number of molecular projections differing in size, shape and symmetry is a rate-limiting step for the automation of 3D image reconstruction. Here, we present SwarmPS, a feature-rich GUI-based software package to manage large-scale, semi-automated particle picking projects. The software provides cross-correlation and edge-detection algorithms. Algorithm-specific parameters are transparently and automatically determined through user interaction with the image, rather than by trial and error. Other features include multiple image handling (∼10^2), local and global particle selection options, interactive image freezing, automatic particle centering, and full manual override to correct false positives and negatives. SwarmPS is user friendly, flexible, extensible, fast, and capable of exporting boxed-out projection images, or particle coordinates, compatible with downstream image processing suites.
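A minimal sketch of the cross-correlation picking operation that tools like SwarmPS build on: score every position of a micrograph against a particle template with normalized cross-correlation and keep peaks above a threshold. SwarmPS itself is GUI-driven and determines parameters through user interaction; this standalone example is illustrative only.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def pick(image, template, threshold=0.8):
    """Exhaustive (slow but simple) scan for template-like particles."""
    th, tw = template.shape
    coords = []
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            if ncc(image[i:i + th, j:j + tw], template) >= threshold:
                coords.append((i, j))  # top-left corner of a candidate particle
    return coords

rng = np.random.default_rng(1)
template = rng.normal(size=(8, 8))
image = rng.normal(size=(64, 64)) * 0.1
image[20:28, 30:38] += template  # embed one synthetic particle
print(pick(image, template))     # expected: [(20, 30)] with this seed
```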