90 results for Resolution in azimuth direction
Abstract:
With the release of the Nintendo Wii in 2006, haptic force gestures have become a very popular form of input for interactive entertainment. However, the gesture recognition techniques currently used in Nintendo Wii games suffer from a lack of control when recognising even simple gestures. This paper presents a simple gesture recognition technique called Peak Testing which gives greater control over gesture interaction. This recognition technique locates force peaks in continuous force data (provided by a gesture device such as the Wiimote) and then cancels any peaks which are not intended as input. Peak Testing is therefore able to identify movements in any direction. This paper applies this recognition technique to the control of virtual instruments and investigates how users respond to this interaction. The technique is then explored as the basis for a robust way to navigate menus with a simple flick of the wrist. We propose that this flick-based interaction could be a more intuitive way to navigate Nintendo Wii menus than the pointer techniques currently implemented.
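For illustration, here is a minimal sketch of how a peak-testing recogniser of this kind might operate on a stream of 3-axis accelerometer samples; the magnitude threshold, refractory window, and dominant-axis heuristic below are our own illustrative assumptions, not the paper's implementation.

```python
import math

def detect_gestures(samples, threshold=1.8, refractory=10):
    """Peak-testing sketch: locate force peaks in a stream of (x, y, z)
    accelerometer samples and cancel rebound peaks that follow too soon
    after an accepted peak. All constants are illustrative."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    peaks, last_accepted = [], -refractory
    for i in range(1, len(magnitudes) - 1):
        is_peak = magnitudes[i - 1] < magnitudes[i] >= magnitudes[i + 1]
        if not is_peak or magnitudes[i] < threshold:
            continue
        if i - last_accepted < refractory:
            continue  # cancel: likely a rebound, not an intended gesture
        axis = max(range(3), key=lambda a: abs(samples[i][a]))
        direction = "xyz"[axis] + ("+" if samples[i][axis] > 0 else "-")
        peaks.append((i, direction))  # sample index and movement direction
        last_accepted = i
    return peaks
```

A flick-to-navigate menu could then map each accepted peak's direction onto a menu action (e.g. "x+" moves the selection right).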
Abstract:
Directors’ and executives’ remuneration, including levels of pay, accountability and transparency, is controversial. Section 250R of the CLERP (Audit Reform & Disclosure) Act 2004, which was not greatly anticipated, requires the holding of a non-binding resolution on board remuneration at companies’ annual general meetings. The reform has been criticised on the basis that, inter alia, it blurs the respective roles of shareholders and directors. This article identifies possible motivations for the imposition of the non-binding resolution in Australia. These are evaluated with reference to sources of corporate governance policy and the current state of Australia’s relevant corporate governance structures. We speculate that the non-binding vote will not amount to a substantive addition to the corporate governance regime.
Abstract:
For many years, computer vision has lured researchers with promises of a low-cost, passive, lightweight and information-rich sensor suitable for navigation purposes. The prime difficulty in vision-based navigation is that the navigation solution will continually drift with time unless external information is available, whether it be cues from the appearance of the scene, a map of features (whether built online or known a priori), or an externally-referenced sensor. It is not merely position that is of interest in the navigation problem. Attitude (i.e. the angular orientation of a body with respect to a reference frame) is integral to a vision-based navigation solution and is often of interest in its own right (e.g. flight control). This thesis examines vision-based attitude estimation in an aerospace environment, and two methods are proposed for constraining drift in the attitude solution: one through a novel integration of optical flow and detection of the sky horizon, and the other through a loosely-coupled integration of Visual Odometry and GPS position measurements. In the first method, the roll angle, pitch angle and the three aircraft body rates are recovered through a novel method of tracking the horizon over time and integrating the horizon-derived attitude information with optical flow. An image processing front-end is used to select several candidate lines in an image that may or may not correspond to the true horizon, and the optical flow is calculated for each candidate line. Using an Extended Kalman Filter (EKF), the previously estimated aircraft state is propagated using a motion model, and a candidate horizon line is associated using a statistical test based on the optical flow measurements and the location of the horizon in the image. Once associated, the selected horizon line, along with the associated optical flow, is used as a measurement update to the EKF. To evaluate the accuracy of the algorithm, two flights were conducted, one using a highly dynamic Uninhabited Airborne Vehicle (UAV) in clear flight conditions and the other in a human-piloted Cessna 172 in conditions where the horizon was partially obscured by terrain, haze and smoke. The UAV flight resulted in pitch and roll error standard deviations of 0.42° and 0.71° respectively when compared with a truth attitude source. The Cessna 172 flight resulted in pitch and roll error standard deviations of 1.79° and 1.75° respectively. In the second method for estimating attitude, a novel integrated GPS/Visual Odometry (GPS/VO) navigation filter is proposed, using a structure similar to a classic loosely-coupled GPS/INS error-state navigation filter. Under such an arrangement, the error dynamics of the system are derived and a Kalman Filter is developed for estimating the errors in position and attitude. Through analysis similar to that of the GPS/INS problem, it is shown that the proposed filter is capable of recovering the complete attitude (i.e. pitch, roll and yaw) of the platform when subjected to acceleration not parallel to velocity, for both the monocular and stereo variants of the filter. Furthermore, it is shown that under general straight-line motion (e.g. constant velocity), only the component of attitude in the direction of motion is unobservable. Numerical simulations are performed to demonstrate the observability properties of the GPS/VO filter in both the monocular and stereo camera configurations.
Furthermore, the proposed filter is tested on imagery collected using a Cessna 172 to demonstrate the observability properties on real-world data. The proposed GPS/VO filter does not require additional restrictions or assumptions such as platform-specific dynamics, map-matching, feature-tracking, visual loop-closing, a known gravity vector, or additional sensors such as an IMU or magnetic compass. Since no platform-specific dynamics are required, the proposed filter is not limited to the aerospace domain and has the potential to be deployed on other platforms such as ground robots or mobile phones.
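To make the horizon-association step above concrete, here is a minimal sketch of chi-square gating over candidate horizon lines, assuming the line is measured as a 2-D vector (image-plane offset and slope); the gate value and measurement parameterisation are our assumptions, not the thesis' exact formulation.

```python
import numpy as np

CHI2_GATE_2DOF = 5.99  # 95% gate for a 2-D measurement (illustrative)

def associate_horizon(candidates, z_pred, S):
    """Select the candidate horizon line statistically closest to the EKF
    prediction. `candidates` are measured (offset, slope) pairs, `z_pred`
    is the predicted measurement and `S` its innovation covariance.
    Returns the winning candidate, or None if all fail the gate."""
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, CHI2_GATE_2DOF
    for z in candidates:
        nu = np.asarray(z, float) - z_pred   # innovation
        d2 = float(nu @ S_inv @ nu)          # squared Mahalanobis distance
        if d2 < best_d2:
            best, best_d2 = z, d2
    return best
```

The accepted line (together with its optical flow) would then feed the EKF measurement update in the usual way.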
Abstract:
Online gaming environments feature a number of challenging regulatory issues: a diverse player base, uneven power relationships, and a lack of real dispute resolution mechanisms. By conducting an ethnographic study of the online environment Eve Online, and using the offshore gambling industry as a comparator, I consider how we might look to regulate, and resolve disputes within, online gaming environments. In doing so, I adopted a novel approach to the study of online gaming environments - that of norms - which gave significance not only to the terms of service dictated by platform providers and their legal advisors, but also to the social and ludic limitations and affordances players constructed themselves. Finally, through an account of the evolution of regulatory mechanisms and dispute resolution in the offshore gambling industry, I consider how an environment which has much in common with online gaming environments overcame a number of these challenges within the last 10-15 years, and what lessons might be taken from those experiences and applied to contemporary online gaming environments.
Abstract:
The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision about the extent of their income to report to tax authorities, given a certain institutional environment, represented by parameters such as the probability of detection and penalties in the event the agent is caught. While this basic framework yields important insights on tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model-building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report. We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modelling strategy in the context of macroeconomic models, which are essentially dynamic in nature and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion affects the consumption-smoothing ability of the agent by creating two states of nature, in which the agent is ‘caught’ or ‘not caught’, there is a possibility that the agent's utility under certainty, when choosing not to evade, is higher than the expected utility obtained when choosing to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model-building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of-evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the models' ability to match empirically plausible levels of tax evasion.
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking: there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
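A minimal sketch of the ‘evade or not’ comparison at the heart of these results, assuming log utility and a one-shot reporting decision; the parameter values, fine rule and grid search are illustrative, not the thesis' calibrated model.

```python
import numpy as np

def evade_or_not(y, tau, p, penalty, u=np.log, grid=200):
    """Compare utility under truthful reporting with the best expected
    utility over a grid of declared incomes. If caught, the agent pays
    a fine of `penalty` times the evaded tax (an illustrative rule)."""
    u_honest = u(y * (1 - tau))
    declared = np.linspace(1e-6, y, grid)
    caught = y - tau * declared - penalty * tau * (y - declared)
    free = y - tau * declared
    eu = p * u(np.maximum(caught, 1e-9)) + (1 - p) * u(free)
    if eu.max() > u_honest:
        return "evade", float(declared[eu.argmax()])
    return "report truthfully", float(y)

# e.g. evade_or_not(y=100.0, tau=0.3, p=0.05, penalty=1.5) -> the agent evades;
# raising p or penalty can push the certain, no-evasion utility above the
# expected utility of evading, reproducing the corner decision in the text.
```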
Abstract:
The role of individual ocular tissues in mediating changes to the sclera during myopia development is unclear. The aim of this study was to examine the effects of retina, RPE and choroidal tissues from myopic and hyperopic chick eyes on the DNA and glycosaminoglycan (GAG) content of cultured chick scleral fibroblasts. Primary cultures of fibroblastic cells expressing vimentin and α-smooth muscle actin were established in serum-supplemented growth medium from 8-day-old normal chick sclera. The fibroblasts were subsequently co-cultured with posterior eye cup tissue (full thickness, containing retina, RPE and choroid) obtained from untreated eyes and from eyes wearing translucent diffusers (form-deprivation myopia, FDM) or -15D lenses (lens-induced myopia, LIM) for 3 days (post-hatch day 5 to 8) (n=6 per treatment group). The effect of tissues (full thickness and individual retina, RPE, and choroid layers) from -15D (LIM) versus +15D (lens-induced hyperopia, LIH) treated eyes was also determined. Refraction changes in the direction predicted by the visual treatments were confirmed by retinoscopy prior to tissue collection. GAG and DNA content of the scleral fibroblast cultures were measured using GAG and PicoGreen assays. There was no significant difference in the effect of full-thickness tissue from FDM versus LIM treated eyes on the DNA and GAG content of scleral fibroblasts (DNA: 8.9±2.6 µg and 8.4±1.1 µg, p=0.12; GAG: 11.2±0.6 µg and 10.1±1.0 µg, p=0.34). Retina from LIM eyes did not alter fibroblast DNA or GAG content compared to retina from LIH eyes (DNA: 27.2±1.7 µg versus 23.2±1.5 µg, p=0.21; GAG: 28.1±1.7 µg versus 28.7±1.2 µg, p=0.46). Similarly, choroid from LIH and LIM eyes did not produce a differential effect on DNA content (DNA: LIM 46.9±6.4 µg versus LIH 51.5±4.7 µg, p=0.31), whereas GAG content was higher for cells in co-culture with choroid from LIH eyes (GAG: 32.5±0.7 µg versus 18.9±1.2 µg, F(1,6)=9.210, p=0.0002). In contrast, fibroblast DNA content was greater in co-culture with RPE from LIM eyes than with the empty basket, and lower in co-culture with RPE from LIH eyes (LIM: 72.4±6.3 µg versus empty basket: 46.03±1.0 µg, F(1,6)=69.99, p=0.0005; LIH: 27.9±2.3 µg versus empty basket: 46.03±1.0 µg, p=0.0004). GAG content was higher with RPE from LIH eyes (LIH: 33.7±1.9 µg versus empty basket: 29.5±0.8 µg, F(1,6)=13.99, p=0.010) and lower with RPE from LIM eyes (LIM: 27.7±0.9 µg versus empty basket: 29.5±0.8 µg, p=0.021). In conclusion, these experiments provide evidence for a directional growth signal that is present (and remains) in the ex-vivo RPE, but that does not remain in the ex-vivo retina. The identity of the factor(s) that can modify scleral cell DNA and GAG content requires further research.
Abstract:
The SimCalc Vision and Contributions, Advances in Mathematics Education, 2013, pp. 419-436. Modeling as a Means for Making Powerful Ideas Accessible to Children at an Early Age. Richard Lesh, Lyn English, Serife Sevis, Chanda Riggs. In modern societies in the 21st century, significant changes have been occurring in the kinds of “mathematical thinking” that are needed outside of school. Even primary school children (grades K-2) encounter not only situations where numbers refer to sets of discrete objects that can be counted; numbers are also used to describe situations that involve continuous quantities (inches, feet, pounds, etc.), signed quantities, quantities that have both magnitude and direction, locations (coordinates, or ordinal quantities), transformations (actions), accumulating quantities, continually changing quantities, and other kinds of mathematical objects. Furthermore, if we ask “what kinds of situations can children use numbers to describe?” rather than restricting attention to situations where children should be able to calculate correctly, then this study shows that average-ability children in grades K-2 are (and need to be) able to productively mathematize situations that involve far more than simple counts. Similarly, whereas nearly the entire K-16 mathematics curriculum is restricted to situations that can be mathematized using a single input-output rule going in one direction, even the lives of primary school children are filled with situations that involve several interacting actions - and which involve feedback loops, second-order effects, and issues such as maximization, minimization, or stabilization (which, many years ago, needed to be postponed until students had been introduced to calculus). … This brief paper demonstrates that, if children’s stories are used to introduce simulations of “real life” problem solving situations, then average-ability primary school children are quite capable of dealing productively with 60-minute problems that involve (a) many kinds of quantities in addition to “counts,” (b) integrated collections of concepts associated with a variety of textbook topic areas, (c) interactions among several different actors, and (d) issues such as maximization, minimization, and stabilization.
Abstract:
A nanostructured Schottky diode was fabricated to sense hydrogen and propene gases in the concentration range of 0.06% to 1%. The ZnO sensitive layer was deposited on a SiC substrate by the pulsed laser deposition technique. Scanning electron microscopy and X-ray diffraction characterisations revealed the presence of wurtzite-structured ZnO nanograins grown in the (002) and (004) directions. The nanostructured diode was investigated at its optimum operating temperature of 260 °C. At a constant reverse current of 1 mA, the voltage shifts in response to 1% hydrogen and 1% propene were measured as 173.3 mV and 191.8 mV, respectively.
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the reverse direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy, and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimise the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
Monitoring fetal wellbeing is a compelling problem in modern obstetrics. Clinicians have become increasingly aware of the link between fetal activity (movement), wellbeing, and later developmental outcome. We have recently developed an ambulatory accelerometer-based fetal activity monitor (AFAM) to record 24-hour fetal movement. Using this system, we aim to develop signal processing methods to automatically detect and quantitatively characterize fetal movements. The first step in this direction is to test the performance of the accelerometer in detecting fetal movement against real-time ultrasound imaging (taken as the gold standard). This paper reports the first results of this performance analysis.
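As a sketch of what such a performance analysis might involve, a first-pass detector could threshold the short-time energy of the accelerometer signal and score the flagged windows against ultrasound annotations; the window length and threshold factor below are illustrative assumptions, not the AFAM pipeline.

```python
import numpy as np

def detect_movements(accel, fs, win_s=1.0, k=3.0):
    """Flag windows whose short-time energy (after baseline removal)
    exceeds k times the median window energy. Constants are illustrative."""
    x = np.asarray(accel, float) - np.median(accel)
    win = int(win_s * fs)
    n = len(x) // win
    energy = np.array([np.sum(x[i * win:(i + 1) * win] ** 2) for i in range(n)])
    return energy > k * np.median(energy)  # one boolean per window

def sensitivity(detected, truth):
    """Fraction of ultrasound-annotated movement windows also flagged
    by the accelerometer detector (both boolean arrays, per window)."""
    truth = np.asarray(truth, bool)
    return float(np.logical_and(detected, truth).sum() / max(int(truth.sum()), 1))
```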
Abstract:
Existing compliance management frameworks (CMFs) offer a multitude of compliance management capabilities, which makes it difficult for enterprises to decide on the suitability of a framework. Deciding on suitability requires a deep understanding of the functionalities of a framework. Gaining such an understanding is a difficult task which, in turn, requires specialised tools and methodologies for evaluation. Current compliance research lacks such tools and methodologies for evaluating CMFs. This paper reports a methodological evaluation of existing CMFs based on pre-defined evaluation criteria. Our evaluation highlights what existing CMFs can offer and what they cannot. It also identifies various open questions and discusses the challenges in this direction.
Abstract:
Inspired by the wonderful properties of some biological composites in nature, we performed molecular dynamics simulations to investigate the mechanical behavior of bicontinuous nanocomposites. Three representative types of bicontinuous composites, with regular network, random network, and nacre-inspired microstructures respectively, were studied, and the results were compared with those of a honeycomb nanocomposite with only one continuous phase. It was found that the mechanical strength of the nanocomposites in a given direction strongly depends on the connectivity of the microstructure in that direction. Directional isotropy in mechanical strength and easy manufacturability favor the random network nanocomposite as a potentially excellent bioinspired composite with balanced performance. In addition, the tensile strength of the random network nanocomposite is less sensitive to interfacial failure, owing to its very high interface-to-volume ratio and the random distribution of internal interfaces. The results provide a useful guideline for the design and optimization of advanced nanocomposites with superior mechanical properties.
Abstract:
The interaction between new two-dimensional carbon allotropes, i.e. graphyne (GP) and graphdiyne (GD), and the light metal complex hydrides LiAlH4, LiBH4, and NaAlH4 was studied using density functional theory (DFT) incorporating a long-range van der Waals dispersion correction. The light metal complex hydrides show much stronger interaction with GP and GD than with fullerene, owing to the well-defined pore structure. Such strong interactions greatly affect the degree of charge donation from the alkali metal atom to AlH4 or BH4, consequently destabilizing the Al-H or B-H bonds. Compared to the isolated light metal complex hydride, the presence of GP or GD can lead to a significant reduction of the hydrogen removal energy. Most interestingly, the hydrogen removal energies for LiBHx on GP and GD are found to be lowered at all stages (x from 4 to 1), whereas the H-removal energy at the third stage is increased for LiBH4 on fullerene. In addition, the presence of uniformly distributed pores on GP and GD is expected to facilitate the dehydrogenation of light metal complex hydrides. The present results highlight new and interesting materials to catalyze light metal complex hydrides for potential application as media for hydrogen storage. Since GD has been successfully synthesized in a recent experiment, we hope the present work will stimulate further experimental investigations in this direction.
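For reference, the hydrogen removal energy in studies of this kind is typically obtained from three separate total-energy calculations; whether the authors use exactly this H2 reference state is an assumption on our part.

```python
def h_removal_energy(e_host, e_host_minus_h, e_h2):
    """Common DFT definition: E_rem = E(host - H) + 1/2 E(H2) - E(host),
    with all total energies in eV from separate calculations."""
    return e_host_minus_h + 0.5 * e_h2 - e_host

# Sequential removal for LiBHx on graphyne would compare, for x = 4..1,
# h_removal_energy(E["LiBHx@GP"], E["LiBH(x-1)@GP"], E["H2"]) against the
# same quantity for the isolated hydride, stage by stage.
```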
Abstract:
This paper explores the use of subarrays as array elements. Benefits of such a concept include improved gain in any direction and enhanced pattern control, without significantly increasing the overall size of the array. The architecture for an array of subarrays is discussed via a systems approach. Individual system designs are explored in further detail, and proof of principle is illustrated through a manufactured example.
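A minimal sketch of the pattern-multiplication view of an array of subarrays, assuming identical, uniformly weighted linear subarrays and ignoring mutual coupling; the element counts and spacings are illustrative, not the manufactured design.

```python
import numpy as np

def array_factor(positions, weights, theta):
    """Array factor of a linear array: AF(theta) = sum_n w_n exp(j k x_n sin(theta)),
    with element positions in wavelengths, so k = 2*pi."""
    phase = np.exp(1j * 2 * np.pi * np.outer(np.sin(theta), positions))
    return phase @ np.asarray(weights, complex)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
sub = array_factor(np.arange(4) * 0.5, np.ones(4), theta)    # 4-element subarray
outer = array_factor(np.arange(8) * 2.0, np.ones(8), theta)  # 8 subarray centres
total = np.abs(sub * outer)  # pattern multiplication for identical subarrays
```

Weighting the two levels independently is one route to the enhanced pattern control mentioned above.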
Abstract:
This article presents new theoretical and empirical evidence on the forecasting ability of prediction markets. We develop a model which predicts that the time until expiration of a prediction market should negatively affect the accuracy of prices as a forecasting tool, in the direction of a ‘favourite/longshot bias’: high-likelihood events are underpriced, and low-likelihood events are overpriced. We confirm this result using a large data set of prediction market transaction prices. Prediction markets are reasonably well calibrated when the time to expiration is relatively short, but prices are significantly biased for events farther in the future. When the time value of money is considered, the miscalibration can be exploited to earn excess returns only when the trader has a relatively low discount rate.
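A minimal sketch of the discounting arithmetic behind the final claim, assuming a binary contract that pays 1 at expiry and a trader-specific annual discount rate; the numbers are illustrative.

```python
def excess_return(price, true_prob, years_to_expiry, discount_rate):
    """Discounted expected return from buying a binary contract at `price`
    that pays 1 with probability `true_prob` at expiry."""
    gross = true_prob / price                        # expected payoff per unit staked
    pv = gross / (1 + discount_rate) ** years_to_expiry
    return pv - 1.0                                  # return net of the discount rate

# An underpriced favourite (price 0.60, true probability 0.70) one year out:
# excess_return(0.60, 0.70, 1.0, 0.05) ~= 0.11, but the same mispricing is
# unprofitable for a trader whose discount rate exceeds about 16.7%.
```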