975 results for Spill-Over
Abstract:
Mirroring the trends in other developed countries, levels of household debt in Australia have risen markedly in recent years. As one example, the total amount lent by banks to individuals rose from $175.5 billion in August 1995 to $590.5 billion in August 2005. Consumer groups and media commentators here have long raised concerns about the risks of increasing levels of household debt and over-commitment, linking these issues at least in part to irresponsible lending practices. More recently, the Reserve Bank Governor has also expressed concerns about the ability of some households to manage if personal or economic circumstances change.
Abstract:
This article presents a preliminary review of the limited evidence of over-regulation in Australian financial services. The 1997 Wallis Report and the CLERP 6 paper led to the amendment of Ch 7 of the Corporations Act 2001 (Cth) by the Financial Services Reform Act. Nearly a decade later, the system, based upon a 'one-size-fits-all' dual-track regime and a consistent licensing regime, has greatly increased the costs of compliance. In the area of enforcement, there has not been a dramatic change in the effectiveness of the techniques applied by ASIC compared with other agencies such as APRA. In particular, there are clear economic arguments, as well as international experiences, which suggest that a single financial services regulator is more effective than the multi-layered approach adopted in Australia. Finally, in the superannuation area of financial services, which is worth A$800 billion, there is unnecessary dual licensing and duplicated regulation, with little evidence of any consumer-member benefit but at a much greater cost.
Abstract:
We present algorithms, systems, and experimental results for underwater data muling. In data muling, a mobile agent interacts with static agents to upload, download, or transport data to a different physical location. We consider a system comprising an Autonomous Underwater Vehicle (AUV) and many static Underwater Sensor Nodes (USNs) networked together optically and acoustically. The AUV can locate the static nodes using vision and hover above them for data upload. We describe the hardware and software architecture of this underwater system, as well as experimental data. © 2006 IEEE.
Abstract:
In this paper we describe the recent development of a low-bandwidth wireless camera sensor network. We propose a simple, yet effective, network architecture which allows multiple cameras to be connected to the network and to synchronize their communication schedules. Image compression of greater than 90% is performed at each node by a local DSP coprocessor, resulting in nodes using 1/8th the energy compared to streaming uncompressed images. We briefly introduce the Fleck wireless node and the DSP/camera sensor, and then outline the network architecture and compression algorithm. The system is able to stream color QVGA images over the network to a base station at up to 2 frames per second. © 2007 IEEE.
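The reported energy saving can be sanity-checked with a back-of-envelope calculation. All per-byte and per-frame energy figures below are assumptions for illustration, not measurements from the Fleck platform: when radio transmission dominates the energy budget, sending a >90%-compressed QVGA frame plus the DSP cost of compressing it lands near the reported 1/8th of the uncompressed streaming cost.

```python
# Back-of-envelope sketch (assumed numbers, not from the paper) of why
# on-node compression saves energy in a camera sensor network.
qvga_raw_bytes = 320 * 240 * 2          # one 16-bit colour QVGA frame
compression_ratio = 0.10                # >90% compression, as reported

tx_energy_per_byte = 2.0e-6             # J/byte over the radio (assumed)
dsp_energy_per_frame = 5.0e-3           # J to compress one frame (assumed)

# Streaming the raw frame: all bytes go over the radio
raw_energy = qvga_raw_bytes * tx_energy_per_byte

# Compressed path: 10% of the bytes over the radio, plus the DSP cost
compressed_energy = (qvga_raw_bytes * compression_ratio * tx_energy_per_byte
                     + dsp_energy_per_frame)

print(raw_energy / compressed_energy)   # roughly the reported factor of 8
```

The exact ratio depends entirely on the assumed radio and DSP figures; the point is only that the radio term shrinks tenfold while the added compute term stays small.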
Abstract:
Struggles over Difference addresses education, schools, textbooks, and pedagogies in various countries of the Asia-Pacific, offering critical curriculum studies and policy analyses of national and regional educational systems. These systems face challenges linked to new economic formations, cultural globalization, and emergent regional and international geopolitical instabilities and conflicts. Contributors offer insights on how official knowledge, text, discourse and discipline should be shaped; who should shape it; through which institutional agencies it should be administered; and the social and cultural practices through which this should occur.
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts – variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models – tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine whether their findings hold for rural as well as other intersection types, to corroborate those findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, exploring additional dispersion functions, using an independent data set, and presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
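The dispersion structure discussed above can be illustrated with a small simulation. This is a sketch with invented coefficients and flow ranges, not the paper's Georgia data or its Bayesian MCMC estimation: it draws negative binomial (Poisson-gamma) crash counts whose dispersion parameter varies with a covariate, and confirms the resulting extra-Poisson variation (sample variance exceeding the sample mean).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical site covariates (assumed ranges, not the Georgia data)
n = 5000
major_flow = rng.uniform(1000, 20000, n)   # major-road traffic flow
minor_flow = rng.uniform(100, 5000, n)     # minor-road traffic flow

# Mean structure: a typical flow-only crash model form (invented coefficients)
mu = np.exp(-7.0 + 0.6 * np.log(major_flow) + 0.4 * np.log(minor_flow))

# Dispersion parameter varying with a covariate, as in the functions the
# paper explores; the NB2 variance is var = mu + alpha * mu**2
alpha = 0.2 + 0.5 * (minor_flow / minor_flow.max())

# Negative binomial counts via the Poisson-gamma mixture
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu)
y = rng.poisson(lam)

# Extra-Poisson variation: sample variance well above the sample mean
print(y.mean(), y.var())
```

A fixed-dispersion model fitted to such data would misattribute the covariate-dependent part of the variance, which is the motivation for testing dispersion functions for significance.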
Abstract:
Findings from an Australian case study of adult women reveal generally light and basic use of mobile phones. Participants used their mobile phones mainly for coordination and, to a lesser extent, for intrinsic interactions motivated by emotional support. This paper focuses on social and emotional support over the mobile phone. Though crucial to individuals, emotional support appears to be a neglected area of research in mobile communication studies, all the more so when the focus is adult women. This study addresses this gap in the literature. The empirical findings are based on a case study of 26 women over 35 years of age residing in one coastal Australian town. The research design combined quantitative and qualitative methods. This paper examines the communication methods adult women use for social and emotional support, and analyses the reasons for, and social implications of, this limited intrinsic use of the mobile phone.
Abstract:
AFTER a great deal of success with last year's "emo" adaptation of Hamlet, David Berthold begins La Boite Theatre Company's 2011 season, his second season at the helm, with an adaptation of Julius Caesar.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer than recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells, having average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialization success rate, initialization time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective of the performance of the tested systems.

The results showed that the performances of all network RTK solutions assessed were affected to similar degrees by the increase in inter-station distances. The MAX solution achieved the highest initialization success rate, 96.6% on average, albeit with a longer initialization time. The two VRS approaches achieved a lower initialization success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialization, the results indicated a good agreement between the actual error growth, in both horizontal and vertical components, and the accuracy specified by the manufacturers in RMS and parts per million (ppm) values.

Additionally, the VRS approaches performed better than the MAX and i-MAX when tested under the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and may then fall back to operating in the nearest single-base RTK (or RAW) mode. The position uncertainty occasionally exceeded 2 meters, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically dropping back to an ambiguity-float solution.

The results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not function well in identifying incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified some problems and failures that can occur in all of the systems tested, especially when they are pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications, and thus deserves serious attention from researchers and system providers.
Abstract:
Real-time kinematic (RTK) GPS techniques have been extensively developed for applications including surveying, structural monitoring, and machine automation. Limitations of the existing RTK techniques that hinder their application for geodynamics purposes are twofold: (1) the achievable RTK accuracy is at the level of a few centimeters, and the uncertainty of the vertical component is 1.5–2 times worse than that of the horizontal components; and (2) the RTK position uncertainty grows in proportion to the base-to-rover distance. The key limiting factor behind these problems is the significant effect of residual tropospheric errors on the positioning solutions, especially on the highly correlated height component. This paper develops a geometry-specified troposphere decorrelation strategy to achieve subcentimeter kinematic positioning accuracy in all three components. The key is to set up a relative zenith tropospheric delay (RZTD) parameter to absorb the residual tropospheric effects and to solve the established model as an ill-posed problem using the regularization method. In order to compute a reasonable regularization parameter and thus obtain an optimal regularized solution, the covariance matrix of the positional parameters estimated without the RZTD parameter, which is characterized by the observation geometry, is used to replace the quadratic matrix of their "true" values. As a result, the regularization parameter is computed adaptively with the variation of observation geometry. The experimental results show that the new method can efficiently alleviate the model's ill-conditioning and stabilize the solution from a single data epoch. Compared to the results from the conventional least squares method, the new method improves the long-range RTK solution precision from several centimeters to the subcentimeter level in all components. More significantly, the precision of the height component is even higher.
Several geosciences applications that require subcentimeter real-time solutions can benefit greatly from the proposed approach, such as real-time monitoring of earthquakes and large dams, high-precision GPS leveling, and refinement of the vertical datum. In addition, the high-resolution RZTD solutions can contribute to the effective recovery of tropospheric slant path delays in order to establish 4-D troposphere tomography.
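The regularization idea can be sketched in a few lines. The following is a minimal illustration under invented assumptions (a random design matrix and a fixed regularization parameter), not the paper's GPS model or its geometry-adaptive parameter computation: two nearly collinear columns mimic the high correlation between the height and RZTD parameters, and a Tikhonov term stabilizes the ill-posed normal equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-posed linear model y = A x + noise, with two nearly
# collinear columns standing in for the height/RZTD correlation
m, n = 50, 4
A = rng.standard_normal((m, n))
A[:, 3] = A[:, 2] + 1e-3 * rng.standard_normal(m)  # near-collinear pair

# True parameters chosen so the ill-determined direction carries no signal
x_true = np.array([1.0, -2.0, 0.5, 0.5])
y = A @ x_true + 0.1 * rng.standard_normal(m)

N = A.T @ A  # normal matrix; severely ill-conditioned here

# Ordinary least squares: noise along the ill-determined direction is
# amplified by the tiny eigenvalue of N
x_ls = np.linalg.solve(N, A.T @ y)

# Tikhonov (ridge) regularization; the paper computes this parameter
# adaptively from the observation geometry, while a fixed value is used
# here purely for illustration
lam = 1.0
x_reg = np.linalg.solve(N + lam * np.eye(n), A.T @ y)

err_ls = np.linalg.norm(x_ls - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(np.linalg.cond(N), err_ls, err_reg)
```

The regularized solution trades a small bias in the well-determined directions for a large variance reduction in the ill-determined one, which is the mechanism behind the single-epoch stabilization reported above.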
Abstract:
This paper presents a framework for performing real-time recursive estimation of landmarks' visual appearance. Imaging data in its original high-dimensional space is probabilistically mapped to a compressed low-dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low-dimensional representation of the landmark's visual appearance. The overall filtering provides information complementary to the conventional position estimates, which is used to enhance data association. In addition to robotic observations, the filter integrates human observations into the appearance estimates. The appearance tracks computed by the filter allow landmark classification. The set of labels involved in the classification task is treated as an observation space in which human observations are made by selecting a label. The low-dimensional appearance estimates returned by the filter allow for low-cost communication in low-bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground vehicle, and an air vehicle.
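The Bayesian appearance update can be sketched for the simplest case, a categorical low-dimensional space. All label names and likelihood values below are hypothetical, not from the paper's system: a landmark's appearance is a belief over a small label set, and robot and human observations arrive as likelihood vectors fused into that belief by Bayes' rule.

```python
import numpy as np

labels = ["tree", "rock", "car"]  # hypothetical low-dimensional label set

def bayes_update(belief, likelihood):
    """Fuse one observation likelihood into the current belief."""
    posterior = belief * likelihood
    return posterior / posterior.sum()  # renormalize to a distribution

# Uniform prior over the appearance labels
belief = np.full(len(labels), 1.0 / len(labels))

# Robot observation: image features weakly favour "tree"
belief = bayes_update(belief, np.array([0.5, 0.3, 0.2]))

# Human observation: the operator selects the label "tree", modelled as
# a confident but not certain likelihood over the same space
belief = bayes_update(belief, np.array([0.9, 0.05, 0.05]))

print(dict(zip(labels, belief.round(3))))
```

Because each update only exchanges a short likelihood vector rather than raw imagery, the same recursion works over a low-bandwidth network link, mirroring the deployment described above.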
Abstract:
This study examines whether outcome expectancies (perceived consequences of engaging in certain behavior) and self-efficacy expectancies (confidence in personal capacity to regulate behavior) contribute to treatment outcome for alcohol dependence. Few clinical studies have examined these constructs. The Drinking Expectancy Profile (DEP), a psychometric measure of alcohol expectancy and drinking refusal self-efficacy, was administered to 298 alcohol-dependent patients (207 males) at assessment and on completion of a 12-week cognitive-behavioral therapy alcohol abstinence program. Baseline measures of expectancy and self-efficacy were not strong predictors of outcome. However, for the 164 patients who completed treatment, all alcohol expectancy and self-efficacy factors of the DEP showed change over time, with DEP scores approximating community norms at the end of treatment. Discriminant analysis indicated that changes in social pressure drinking refusal self-efficacy, sexual enhancement expectancies, and assertion expectancies successfully discriminated those who successfully completed treatment from those who did not. Future research should examine the basis of expectancies related to social functioning as a possible mechanism of treatment response and a means to enhance treatment outcome.