961 results for Compensating Transactions
Abstract:
Providing a real-time sales assistant service is a problematic component of delivering remote sales support to customers. Solutions involving web pages, telephony and video support prove inadequate when seeking to remotely guide customers through their sales processes, especially for transactions revolving around physically complex artefacts. The process involves a number of services that are often complex in nature, ranging from physical compatibility and configuration checks to availability and credit services. We propose combining virtual worlds and augmented reality to create synthetic environments suitable for the remote sale of physical artefacts, right in the home of the purchaser. A high-level description of the service structure involved is given, along with a use case involving the sale of electronic goods and services within an example augmented reality application. We expect this work to have application in many sales domains where physical objects need to be sold over the Internet.
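The abstract above describes a service structure spanning compatibility, configuration, availability and credit checks. The following is a minimal sketch of how such checks might be composed for a remote sale of a physical artefact; all class and function names (Artefact, check_physical_compatibility, check_availability, check_credit, guided_sale) are hypothetical illustrations, not the service interfaces described in the paper.

```python
# Hypothetical composition of the services named in the abstract. Real deployments
# would call out to separate compatibility, availability and credit services rather
# than the in-process stubs used here.
from dataclasses import dataclass

@dataclass
class Artefact:
    sku: str
    dimensions_mm: tuple  # (width, height, depth), e.g. captured from an AR scan

def check_physical_compatibility(artefact, available_space_mm):
    """Does the artefact fit the space measured in the customer's home?"""
    return all(a <= s for a, s in zip(artefact.dimensions_mm, available_space_mm))

def check_availability(artefact):
    """Placeholder for a stock/availability service call."""
    return True  # assume in stock for the sketch

def check_credit(customer_id, price):
    """Placeholder for a credit service call."""
    return price < 5000  # toy rule standing in for a real credit decision

def guided_sale(artefact, available_space_mm, customer_id, price):
    if not check_physical_compatibility(artefact, available_space_mm):
        return "rejected: does not fit measured space"
    if not check_availability(artefact):
        return "rejected: not in stock"
    if not check_credit(customer_id, price):
        return "rejected: credit declined"
    return "order placed"

print(guided_sale(Artefact("TV-55", (1230, 710, 80)), (1500, 900, 400), "cust-42", 1899.0))
```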
Abstract:
There is no doubt that fraud in relation to land transactions is a problem that resonates amongst land academics, practitioners, and stakeholders involved in conveyancing. As each land registration and conveyancing process increasingly moves towards a fully electronic environment, we need to make sure that we understand and guard against the frauds that can occur. This paper examines the types of fraud that have occurred in paper-based conveyancing systems in Australia and considers how they might be undertaken in the National Electronic Conveyancing System (NECS) that is currently under development. Whilst no system can ever be infallible, it is suggested that by correctly imposing the responsibility for identity verification on the appropriate individual, the conveyancing system adopted can achieve the optimum level of fairness in terms of allocation of responsibility and loss. As we sit on the cusp of a new era of electronic conveyancing, the framework suggested here provides a model for minimising the risks of forged mortgages and appropriately allocating the loss. Importantly, it also recognises that the electronic environment will present new opportunities for those with criminal intent to undermine the integrity of land transactions. An appreciation of this now can ensure that appropriate measures are put in place to minimise the risk.
Abstract:
In Australia, there is a crisis in science education, with students becoming disengaged from canonical science in the middle years of schooling. One recent initiative that aims to improve student interest and motivation without diminishing conceptual understanding is the context-based approach. Contextual units that connect the canonical science with the students’ real world of their local community have been used in the senior years but are new in the middle years. This ethnographic study explored the learning transactions that occurred in one 9th grade science class studying an Environmental Science unit for 11 weeks. Data were derived from field notes, audio- and video-recorded conversations, interviews, student journals and classroom documents, with a particular focus on two selected groups of students. Data were analysed qualitatively through coding for emergent themes. This paper presents an outline of the program and a discussion of three assertions derived from the preliminary analysis of the data. Firstly, an integrated, coherent sequence of learning experiences that included weekly visits to a creek adjacent to the school enabled the teacher to contextualise the science in the students’ local community. Secondly, content was predominantly taught on a need-to-know basis; and thirdly, the lesson sequence aligned with a model for context-based teaching. Research, teaching and policy implications of these results for promoting the context-based teaching of science in the middle years are discussed.
Abstract:
Transmission smart grids will use a digital platform for the automation of high voltage substations. The IEC 61850 series of standards, released in parts over the last ten years, provides a specification for substation communications networks and systems. These standards, along with IEEE Std 1588-2008 Precision Time Protocol version 2 (PTPv2) for precision timing, are recommended by both the IEC Smart Grid Strategy Group and the NIST Framework and Roadmap for Smart Grid Interoperability Standards for substation automation. IEC 61850, PTPv2 and Ethernet are three complementary protocol families that together define the future of sampled value digital process connections for smart substation automation. A time synchronisation system is required for a sampled value process bus; however, the details are not defined in IEC 61850-9-2. PTPv2 provides the greatest accuracy of network-based time transfer systems, with timing errors of less than 100 ns achievable. The suitability of PTPv2 to synchronise sampling in a digital process bus is evaluated, with preliminary results indicating that the steady state performance of low cost clocks is an acceptable ±300 ns, but that corrections issued by grandmaster clocks can introduce significant transients. Extremely stable grandmaster oscillators are required to ensure any corrections are sufficiently small that time synchronising performance is not degraded.
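As background to the synchronisation accuracy discussed above, the sketch below shows the textbook IEEE 1588 (PTP) offset and mean-path-delay calculation from the four timestamps of one Sync/Delay_Req exchange. It illustrates the mechanism being evaluated, not the authors' measurement setup, and assumes a symmetric network path.

```python
# Standard IEEE 1588 (PTP) two-way time transfer arithmetic.
# t1: master sends Sync, t2: slave receives Sync,
# t3: slave sends Delay_Req, t4: master receives Delay_Req (all in seconds).

def ptp_offset_and_delay(t1, t2, t3, t4):
    ms = t2 - t1                   # master-to-slave measurement (offset + path delay)
    sm = t4 - t3                   # slave-to-master measurement (path delay - offset)
    offset = (ms - sm) / 2.0       # slave clock error relative to the grandmaster
    mean_path_delay = (ms + sm) / 2.0
    return offset, mean_path_delay

# Example: a slave running 250 ns fast over a symmetric 10 us network path.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=10.25e-6, t3=50.0e-6, t4=59.75e-6)
print(f"offset = {offset * 1e9:.0f} ns, mean path delay = {delay * 1e6:.2f} us")
```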
Abstract:
A new approach to pattern recognition using invariant parameters based on higher order spectra is presented. In particular, invariant parameters derived from the bispectrum are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale and amplification invariant, as well. A minimal set of these invariants is selected as the feature vector for pattern classification, and a minimum distance classifier using a statistical distance measure is used to classify test patterns. The classification technique is shown to distinguish two similar, but different bolts given their one-dimensional profiles. Pattern recognition using higher order spectral invariants is fast, suited for parallel implementation, and has high immunity to additive Gaussian noise. Simulation results show very high classification accuracy, even for low signal-to-noise ratios.
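The sketch below illustrates the general recipe the abstract describes: estimate the bispectrum from the DFT, integrate it along radial lines f2 = a*f1 in bifrequency space, keep the phase of each integral as a feature, and classify with a minimum-distance rule. The slope set, normalisation and integration limits used here are simplifying assumptions rather than the paper's exact formulation.

```python
import numpy as np

def bispectrum(x):
    """Single-record bispectrum estimate B(k, l) = X[k] X[l] X*[k+l] over the principal domain."""
    X = np.fft.fft(x)
    N = len(x)
    B = np.zeros((N, N), dtype=complex)
    for k in range(N // 2):
        for l in range(N // 2):
            B[k, l] = X[k] * X[l] * np.conj(X[k + l])
    return B

def integrated_bispectrum_phase(x, slopes=(0.25, 0.5, 1.0)):
    """Phase of the bispectrum integrated along lines f2 = a*f1 (a in `slopes`)."""
    B = bispectrum(x)
    N = len(x)
    feats = []
    for a in slopes:
        acc = 0j
        for k in range(1, N // 2):
            l = int(round(a * k))
            if 0 < l < N // 2 and k + l < N // 2:
                acc += B[k, l]
        feats.append(np.angle(acc))   # translation-, scale- and amplification-invariant phase
    return np.array(feats)

def minimum_distance_classify(feature, class_means):
    """Assign to the class whose stored mean feature vector is nearest (Euclidean here)."""
    dists = {c: np.linalg.norm(feature - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)

# Usage sketch: features of a smoothed random 1-D profile.
rng = np.random.default_rng(0)
profile = np.convolve(rng.standard_normal(128), np.ones(8) / 8, mode="same")
print(integrated_bispectrum_phase(profile))
```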
Abstract:
Higher order spectral analysis is used to investigate nonlinearities in time series of voltages measured from a realization of Chua's circuit. For period-doubled limit cycles, quadratic and cubic nonlinear interactions result in phase coupling and energy exchange between increasing numbers of triads and quartets of Fourier components as the nonlinearity of the system is increased. For circuit parameters that result in a chaotic Rossler-type attractor, bicoherence and tricoherence spectra indicate that both quadratic and cubic nonlinear interactions are important to the dynamics. When the circuit exhibits a double-scroll chaotic attractor the bispectrum is zero, but the tricoherences are high, consistent with the importance of higher-than-second order nonlinear interactions during chaos associated with the double scroll.
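A minimal bicoherence estimator of the kind used to detect the quadratic phase coupling discussed above is sketched below, assuming an ensemble of realizations and one common normalisation convention (others exist); the tricoherence used for cubic coupling follows the same pattern with quartets instead of triads.

```python
import numpy as np

def bicoherence(realizations):
    """realizations: 2-D array (n_realizations, n_samples)."""
    X = np.fft.fft(realizations, axis=1)
    n = realizations.shape[1] // 2
    b2 = np.zeros((n, n))
    for k in range(n):
        for l in range(n - k):
            triple = X[:, k] * X[:, l] * np.conj(X[:, k + l])
            denom = np.mean(np.abs(X[:, k] * X[:, l]) ** 2) * np.mean(np.abs(X[:, k + l]) ** 2)
            if denom > 0:
                b2[k, l] = np.abs(triple.mean()) ** 2 / denom
    return b2   # near 1 where quadratic phase coupling is strong

# Quadratically phase-coupled test signal: the component at bin 40 inherits the sum
# of the (random) phases at bins 16 and 24, so b2[16, 24] should be close to 1.
rng = np.random.default_rng(0)
t = np.arange(256)
reals = []
for _ in range(64):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    reals.append(np.cos(2 * np.pi * 16 * t / 256 + p1) + np.cos(2 * np.pi * 24 * t / 256 + p2)
                 + 0.5 * np.cos(2 * np.pi * 40 * t / 256 + p1 + p2)
                 + 0.1 * rng.standard_normal(256))
b2 = bicoherence(np.array(reals))
print(round(float(b2[16, 24]), 2))
```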
Abstract:
Statistics of the estimates of tricoherence are obtained analytically for nonlinear harmonic random processes with known true tricoherence. Expressions are presented for the bias, variance, and probability distributions of estimates of tricoherence as functions of the true tricoherence and the number of realizations averaged in the estimates. The expressions are applicable to arbitrary higher order coherence and arbitrary degree of interaction between modes. Theoretical results are compared with those obtained from numerical simulations of nonlinear harmonic random processes. Estimation of true values of tricoherence given observed values is also discussed.
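The estimator statistics summarised above can be checked numerically. The sketch below is a Monte Carlo illustration (not the paper's derivation): the sample tricoherence of one quartet of Fourier components is estimated from M realizations of a harmonic process with independent phases, whose true tricoherence is zero, showing the upward bias that shrinks roughly as 1/M.

```python
import numpy as np

def sample_tricoherence(X, k, l, m):
    """X: (M, N) array of DFTs of M realizations; quartet (k, l, m, k+l+m)."""
    quad = X[:, k] * X[:, l] * X[:, m] * np.conj(X[:, k + l + m])
    denom = (np.mean(np.abs(X[:, k] * X[:, l] * X[:, m]) ** 2)
             * np.mean(np.abs(X[:, k + l + m]) ** 2))
    return np.abs(quad.mean()) ** 2 / denom

rng = np.random.default_rng(1)
N, k, l, m = 128, 5, 9, 14          # quartet (5, 9, 14, 28), phases drawn independently
t = np.arange(N)
for M in (8, 32, 128, 512):
    estimates = []
    for _ in range(200):             # repeat to average the estimator itself
        phases = rng.uniform(0, 2 * np.pi, (M, 4))
        x = sum(np.cos(2 * np.pi * f * t / N + phases[:, [j]])
                for j, f in enumerate((k, l, m, k + l + m)))
        X = np.fft.fft(x, axis=1)
        estimates.append(sample_tricoherence(X, k, l, m))
    print(M, round(float(np.mean(estimates)), 3))   # bias decreases roughly as 1/M
```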
Abstract:
A number of game strategies have been developed in past decades and used in the fields of economics, engineering, computer science, and biology due to their efficiency in solving design optimization problems. In addition, research in multiobjective and multidisciplinary design optimization has focused on developing a robust and efficient optimization method that can produce a set of high quality solutions in less computational time. In this paper, two optimization techniques are considered: the first uses multifidelity hierarchical Pareto-optimality; the second uses a combination of the game strategies Nash-equilibrium and Pareto-optimality. This paper shows how game strategies can be coupled to multiobjective evolutionary algorithms and robust design techniques to produce a set of high quality solutions. Numerical results obtained from both optimization methods are compared in terms of computational expense and model quality. The benefits of using Hybrid-Game and non-Hybrid-Game strategies are demonstrated.
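As a concrete reference for the Pareto-optimality ingredient mentioned above, the sketch below shows a dominance test and a non-dominated filter of the kind used inside multiobjective evolutionary algorithms (minimisation of all objectives is assumed); the Nash-equilibrium coupling and the multifidelity hierarchy are not shown.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (no worse everywhere, better somewhere)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(objectives):
    """Return indices of the non-dominated candidates in a population."""
    front = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            front.append(i)
    return front

# Toy two-objective population (e.g. drag vs. weight): candidates 0 and 2 are non-dominated.
pop = [(1.0, 5.0), (2.0, 6.0), (3.0, 1.0), (3.5, 1.5)]
print(pareto_front(pop))   # -> [0, 2]
```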
Abstract:
The fundamental personal property rule – no one can transfer a better title to property than they had – is subject to exceptions in the Sale of Goods legislation, which aim to protect innocent buyers who are deceived by a seller’s apparent physical possession of property. These exceptions cover a limited range of transactions and are restrictive in their operation. Australia now has national legislation, the Personal Property Securities Act 2009 (Cth), which will apply to many transactions outside the scope of the Sale of Goods Act and which includes rules for sales by non-owners that will provide exceptions to the nemo dat quod non habet rule for many common commercial transactions. This article explores the effect of the Personal Property Securities Act 2009 (Cth) on the Sale of Goods exceptions, explains that the new provisions are so wide that there is little continuing relevance for the Sale of Goods Act exceptions, and indicates where they may still apply.
Abstract:
The Web has become a worldwide repository of information which individuals, companies, and organizations utilize to solve or address various information problems. Many of these Web users utilize automated agents to gather this information for them. Some assume that this approach represents a more sophisticated method of searching. However, there is little research investigating how Web agents search for online information. In this research, we first provide a classification for information agents using stages of information gathering, gathering approaches, and agent architecture. We then examine an implementation of one of the resulting classifications in detail, investigating how agents search for information on Web search engines, including the session, query, term, duration and frequency of interactions. For this temporal study, we analyzed three data sets of queries and page views from agents interacting with the Excite and AltaVista search engines from 1997 to 2002, examining approximately 900,000 queries submitted by over 3,000 agents. Findings include: (1) agent sessions are extremely interactive, with sometimes hundreds of interactions per second; (2) agent queries are comparable to those of human searchers, with little use of query operators; (3) Web agents search for a relatively limited variety of information, with only 18% of the terms used being unique; and (4) the duration of agent-Web search engine interaction typically spans several hours. We discuss the implications for Web information agents and search engines.
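The per-agent statistics reported above (interactions per session, query terms, session duration) can be computed from a query log along the following lines. The log format, field names and values below are hypothetical, not the Excite or AltaVista schema.

```python
from collections import defaultdict
from datetime import datetime

log = [  # (agent_id, timestamp, query) -- hypothetical records
    ("agent-7", "1999-05-01 10:00:00", "laptop price"),
    ("agent-7", "1999-05-01 10:00:00", "laptop price review"),
    ("agent-7", "1999-05-01 13:45:10", "cheap laptop"),
]

# Group interactions by agent.
sessions = defaultdict(list)
for agent, ts, query in log:
    sessions[agent].append((datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), query))

# Per-agent query count, session duration, and unique-term ratio.
for agent, events in sessions.items():
    events.sort()
    duration = (events[-1][0] - events[0][0]).total_seconds()
    terms = [t for _, q in events for t in q.split()]
    unique_ratio = len(set(terms)) / len(terms)
    print(agent, f"queries={len(events)}", f"duration_s={duration:.0f}",
          f"unique_term_ratio={unique_ratio:.2f}")
```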
Abstract:
Purpose: Web search engines are frequently used by people to locate information on the Internet. However, not all queries have an informational goal. Instead of information, some people may be looking for specific web sites or may wish to conduct transactions with web services. This paper aims to focus on automatically classifying the different user intents behind web queries. Design/methodology/approach: For the research reported in this paper, 130,000 web search engine queries are categorized as informational, navigational, or transactional using a k-means clustering approach based on a variety of query traits. Findings: The research findings show that more than 75 percent of web queries are informational in nature, with about 12 percent each for navigational and transactional. Results also show that web queries fall into eight clusters: six primarily informational, and one each primarily navigational and transactional. Research limitations/implications: This study provides an important contribution to web search literature because it provides information about the goals of searchers and a method for automatically classifying the intents of user queries. Automatic classification of user intent can lead to improved web search engines by tailoring results to specific user needs. Practical implications: The paper discusses how web search engines can use automatically classified user queries to provide more targeted and relevant results in web searching by implementing a real time classification method as presented in this research. Originality/value: This research investigates a new application of a method for automatically classifying the intent of user queries. There has been limited research to date on automatically classifying the user intent of web queries, even though the pay-off for web search engines can be quite beneficial.
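The clustering step described above can be sketched as follows: each query is mapped to a few numeric traits and grouped with k-means, after which clusters are inspected and labelled by dominant intent. The specific traits, term lists and the choice of three clusters below are illustrative assumptions, not the paper's feature set or its eight-cluster result.

```python
import numpy as np
from sklearn.cluster import KMeans

TRANSACTION_TERMS = {"buy", "download", "order", "purchase", "tickets"}
NAVIGATION_HINTS = {"www", ".com", "homepage", "login", "site"}

def query_traits(q):
    words = q.lower().split()
    return [
        len(words),                                                  # query length
        sum(w in TRANSACTION_TERMS for w in words),                  # transactional cue count
        sum(any(h in w for h in NAVIGATION_HINTS) for w in words),   # navigational cue count
    ]

queries = ["python tutorial", "buy concert tickets", "www.example.com login",
           "history of rome", "download free antivirus", "weather brisbane"]
X = np.array([query_traits(q) for q in queries], dtype=float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for q, c in zip(queries, kmeans.labels_):
    print(c, q)   # clusters would then be inspected and labelled by dominant intent
```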
Abstract:
Object segmentation is one of the fundamental steps for a number of robotic applications such as manipulation, object detection, and obstacle avoidance. This paper proposes a visual method for incorporating colour and depth information from sequential multiview stereo images to segment objects of interest from complex and cluttered environments. Rather than segmenting objects using information from a single frame in the sequence, we incorporate information from neighbouring views to increase the reliability of the information and improve the overall segmentation result. Specifically, dense depth information of a scene is computed using multiple view stereo. Depths from neighbouring views are reprojected into the reference frame to be segmented, compensating for imperfect depth computations in individual frames. The multiple depth layers are then combined with colour information from the reference frame to create a Markov random field that models the segmentation problem. Finally, graph-cut optimisation is employed to infer the pixels belonging to the object to be segmented. The segmentation accuracy is evaluated over images from an outdoor video sequence, demonstrating the viability of automatic object segmentation for mobile robots using monocular cameras as a primary sensor.
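A much simplified sketch of the MRF formulation described above follows: per-pixel unary costs combine a depth prior (a single depth threshold standing in for the reprojected multi-view depth layers) with a colour likelihood, plus a Potts smoothness term. Inference here uses iterated conditional modes (ICM) as a simpler stand-in for the graph-cut optimisation used in the paper.

```python
import numpy as np

def unary_costs(depth, colour_dist, depth_thresh=2.0, w_depth=1.0, w_colour=1.0):
    """Cost of labelling each pixel foreground (0) or background (1)."""
    fg = w_depth * np.maximum(depth - depth_thresh, 0) + w_colour * colour_dist
    bg = w_depth * np.maximum(depth_thresh - depth, 0) + w_colour * (1.0 - colour_dist)
    return np.stack([fg, bg], axis=-1)

def icm_segment(unary, smoothness=0.5, iters=5):
    """Greedy label updates under unary + Potts pairwise costs (stand-in for graph cut)."""
    labels = np.argmin(unary, axis=-1)
    h, w = labels.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        costs += smoothness * (labels[ny, nx] != np.arange(2))
                labels[y, x] = np.argmin(costs)
    return labels  # 0 = object, 1 = background

# Toy example: a near, colour-consistent blob in the centre of a 20x20 scene.
depth = np.full((20, 20), 5.0)
depth[5:15, 5:15] = 1.0
colour_dist = np.full((20, 20), 0.9)          # distance to an object colour model
colour_dist[5:15, 5:15] = 0.1
labels = icm_segment(unary_costs(depth, colour_dist))
print(labels[10, 10], labels[0, 0])           # -> 0 (object) and 1 (background)
```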
Abstract:
This article examines recent changes to the Building Act 1975 (Qld) intended to promote pool safety in Queensland. The impact of these statutory changes is considered in relation to both compliance obligations and disclosure obligations associated with sale and leasing transactions. The interrelationship of these changes with the operation of standard contractual provisions in Queensland is also examined.
Abstract:
Pedestrians’ use of MP3 players or mobile phones can put them at risk of being hit by motor vehicles. We present an approach for detecting a crash risk level using the computing power and the microphone of mobile devices, which can be used to alert the user in advance of an approaching vehicle so as to avoid a crash. A classifier based on a single feature extractor is not usually able to deal with the diversity of risky acoustic scenarios. In this paper, we address the problem of detecting vehicles approaching a pedestrian with a novel, simple, non-resource-intensive acoustic method. The method uses a set of existing statistical tools to mine signal features. Audio features are adaptively thresholded for relevance and classified with a three-component heuristic. The resulting Acoustic Hazard Detection (AHD) system has a very low false positive detection rate. The results of this study could help mobile device manufacturers embed the presented features into future portable devices and contribute to road safety.
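In the spirit of the approach described above, the sketch below computes a few lightweight statistical audio features per buffer, adapts thresholds to the ambient scene with a running percentile, and flags a hazard with a simple two-of-three vote. The specific features, percentile and voting rule are illustrative assumptions, not the paper's AHD design.

```python
import numpy as np

def frame_features(frame, sr):
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))                                    # overall level
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)      # spectral "brightness"
    low_band = np.sum(spectrum[freqs < 300]) / (np.sum(spectrum) + 1e-12) # engine/road-noise share
    return np.array([rms, centroid, low_band])

def detect_hazard(frames, sr, percentile=90):
    feats = np.array([frame_features(f, sr) for f in frames])
    thresholds = np.percentile(feats, percentile, axis=0)   # adapt to the ambient scene
    votes = feats >= thresholds                             # which features are "relevant" per frame
    # Heuristic: flag a frame when at least two of the three features exceed their thresholds.
    return votes.sum(axis=1) >= 2

# Toy usage: 1-second buffers of synthetic audio at 16 kHz.
sr = 16000
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(sr)
approaching = quiet + 0.2 * np.sin(2 * np.pi * 120 * np.arange(sr) / sr)  # strong low-frequency tone
print(detect_hazard([quiet, quiet, approaching], sr))
```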