286 results for code source
Abstract:
Long-term measurements of particle number size distribution (PNSD) produce a very large number of observations, and their analysis requires an efficient approach in order to produce results in the least possible time and with maximum accuracy. Clustering techniques are a family of sophisticated methods which have recently been employed to analyse PNSD data; however, very little information is available comparing the performance of different clustering techniques on PNSD data. This study aims to apply several clustering techniques (i.e. K-means, PAM, CLARA and SOM) to PNSD data, in order to identify and apply the optimum technique to PNSD data measured at 25 sites across Brisbane, Australia. A new method, based on the Generalised Additive Model (GAM) with a basis of penalised B-splines, was proposed to parameterise the PNSD data, and the temporal weight of each cluster was also estimated using the GAM. In addition, each cluster was associated with its possible source based on the results of this parameterisation, together with the characteristics of each cluster. The performance of the four clustering techniques was compared using the Dunn index and Silhouette width validation values, and the K-means technique was found to perform best, with five clusters being the optimum; five clusters were therefore identified within the data. The diurnal occurrence of each cluster was used together with other air quality parameters, temporal trends and the physical properties of each cluster, in order to attribute each cluster to its source and origin. The five clusters were attributed to three major sources and origins, including regional background particles, photochemically induced nucleated particles and vehicle generated particles. Overall, clustering was found to be an effective technique for attributing each particle size spectrum to its source, and the GAM was suitable for parameterising the PNSD data. These two techniques can help researchers immensely in analysing PNSD data for characterisation and source apportionment purposes.
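A minimal, illustrative sketch of the kind of comparison described above: K-means solutions for several cluster counts are scored with silhouette width and a simple Dunn index. This is not the study's code; the synthetic array `X` stands in for normalised size spectra, and PAM, CLARA and SOM are omitted for brevity.

```python
# Illustrative only: score K-means solutions with silhouette width and a Dunn index.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from scipy.spatial.distance import cdist

def dunn_index(X, labels):
    """Dunn index: smallest inter-cluster distance / largest intra-cluster diameter."""
    clusters = [X[labels == k] for k in np.unique(labels)]
    max_diam = max(cdist(c, c).max() for c in clusters)          # widest cluster
    min_sep = min(cdist(a, b).min()                              # closest pair of clusters
                  for i, a in enumerate(clusters)
                  for b in clusters[i + 1:])
    return min_sep / max_diam

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))        # placeholder for normalised PNSD spectra

for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels), dunn_index(X, labels))
```

Higher values of both indices indicate better-separated, more compact clusters, which is the basis on which the optimum technique and cluster count were selected.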
Abstract:
This study developed a comprehensive research methodology for the identification and quantification of sources responsible for pollutant build-up and wash-off from urban road surfaces. The study identified soil and asphalt wear, and non-combusted diesel fuel, as the most influential sources of metal and hydrocarbon pollution, respectively. The study also developed mathematical models relating contributions from the identified sources to underlying site-specific factors such as land use and traffic. The developed mathematical models will play a key role in urban planning practices, enabling the implementation of effective water pollution control strategies.
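A purely hypothetical sketch of the kind of relationship such a model might express: a linear fit of one source's contribution against site-specific factors (land use and traffic). The data, units and coefficients below are invented for illustration and are not taken from the study.

```python
# Hypothetical example: ordinary least squares linking a source contribution
# to land-use fraction and traffic volume. All numbers are invented.
import numpy as np

# Columns: fraction of commercial land use, traffic volume (scaled vehicles/day)
site_factors = np.array([[0.2, 0.5],
                         [0.6, 1.2],
                         [0.8, 2.0],
                         [0.4, 0.9]])
source_contribution = np.array([0.12, 0.35, 0.55, 0.22])   # e.g. wash-off load per event

A = np.column_stack([np.ones(len(site_factors)), site_factors])   # add intercept
coeffs, *_ = np.linalg.lstsq(A, source_contribution, rcond=None)
print("intercept, land-use coefficient, traffic coefficient:", coeffs)
```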
Abstract:
This paper presents a framework for synchronising multiple triggered sensors with respect to a local clock using standard computing hardware. Providing sensor measurements with accurate and meaningful timestamps is important for many sensor fusion, state estimation and control applications. Accurate sensor timestamp synchronisation can be performed with specialised hardware; however, performing sensor synchronisation using standard computing hardware and non-real-time operating systems is difficult due to inaccurate and temperature-sensitive clocks, variable communication delays and operating system scheduling delays. Results show the ability of our framework to estimate time offsets to sub-millisecond accuracy. We also demonstrate how synchronising timestamps with our framework results in a tenfold reduction in image stabilisation error for a vehicle driving on rough terrain. The source code will be released as an open source tool for time synchronisation in ROS.
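An illustrative sketch of the basic problem, not the released ROS tool or the paper's algorithm: a fixed offset between a sensor clock and the host clock is estimated from timestamp pairs by taking the minimum observed difference, since the smallest difference is the least contaminated by communication and scheduling delays.

```python
# Illustrative only: minimum-delay estimate of a constant clock offset.
import numpy as np

def estimate_offset(sensor_times, host_receive_times):
    """Return offset such that sensor_time + offset approximates the host-clock event time."""
    deltas = np.asarray(host_receive_times) - np.asarray(sensor_times)
    return deltas.min()   # the smallest delta carries the least transport/scheduling delay

# Example: sensor clock runs 2.5 s behind the host; 1-10 ms of variable delay corrupts receipt times
rng = np.random.default_rng(1)
sensor_t = np.arange(0.0, 10.0, 0.05)
host_t = sensor_t + 2.5 + rng.uniform(0.001, 0.010, size=sensor_t.size)
print("estimated offset:", estimate_offset(sensor_t, host_t))   # close to 2.501
```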
Abstract:
Project overview, promotional poster and how to access and use the checklist (student guide)
Abstract:
In making this submission, we suggest that Australia learn from the experiences of other jurisdictions, and avoid some of the mistakes that have been made. In particular, this involves:
* Ensuring that adequate information is available to evaluate the success of the scheme
* Ensuring that notices sent to consumers provide full and accurate information that helps them understand their rights and options
* Limiting the potential abuse of the system, and particularly attempts to intimidate consumers into paying unfair penalties through ‘speculative invoicing’
* Avoiding the potential for actual or perceived bias in the scheme’s oversight body
Abstract:
The Code of Banking Practice is one of the oldest examples of consumer protection provided through self-regulation in the Australian financial services sector. However, since the Banking Code was first released in 1993, the volume of consumer protection legislation applying to banks has increased exponentially, and parts of the Banking Code that once provided new consumer rights have now been largely superseded by legislation. In light of the increasingly complex set of laws and regulations that govern the relationship between banks and their consumer and small business customers, it could be argued that the Banking Code has a limited future role. However, an analysis of the Banking Code shows that it adds to the consumer protection standards provided by legislation and can continue to facilitate improvements in the standards of subscribing banks and of other institutions in the financial services sector. Self-regulation and industry codes should continue to be part of the regulatory mix. Any regulatory changes that flow from the recent Financial System Inquiry should also facilitate and support the self-regulation role. The government should also consider further changes to encourage improvements in industry codes, and to ensure that the implicit regulatory benefits that are provided, in part, because of the existence of industry codes, are made explicit and made available only to code subscribers.
Abstract:
Background: Depression is a common psychiatric disorder in older people. The study aimed to examine the screening accuracy of the Geriatric Depression Scale (GDS) and the Collateral Source version of the Geriatric Depression Scale (CS-GDS) in the nursing home setting. Methods: Eighty-eight residents from 14 nursing homes were assessed for depression using the GDS and the CS-GDS, and validated against clinician-diagnosed depression using the Semi-structured Clinical Diagnostic Interview for DSM-IV-TR Axis I Disorders (SCID) for residents without dementia and the Provisional Diagnostic Criteria for Depression in Alzheimer Disease (PDCdAD) for those with dementia. The screening performances of five versions of the GDS (30-, 15-, 10-, 8-, and 4-item) and two versions of the CS-GDS (30- and 15-item) were analyzed using receiver operating characteristic (ROC) curves. Results: Among residents without dementia, both the self-rated (AUC = 0.75–0.79) and proxy-rated (AUC = 0.67) GDS variations performed significantly better than chance in screening for depression. However, neither instrument adequately identified depression among residents with dementia (AUC between 0.57 and 0.70). Among the GDS variations, the 4- and 8-item scales had the highest AUC, and the optimal cut-offs were >0 and >3, respectively. Conclusions: The validity of the GDS in detecting depression requires a certain level of cognitive functioning. While the CS-GDS is designed to remedy this issue by using an informant, it did not have adequate validity in detecting depression among residents with dementia. Further research is needed on informant selection and other factors that can potentially influence the validity of proxy-based measures in the nursing home setting.
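For readers unfamiliar with the ROC analysis used here, the following is an illustrative sketch only: it computes an AUC and a cut-off for a screening score against a reference diagnosis, using synthetic data. The abstract does not state how the optimal cut-offs were chosen; Youden's J statistic is used below as a common, assumed criterion.

```python
# Illustrative only: ROC curve, AUC and an assumed Youden's-J cut-off on synthetic scores.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
diagnosis = rng.integers(0, 2, size=88)                 # 0 = not depressed, 1 = depressed
scores = diagnosis * 2 + rng.normal(2, 1.5, size=88)    # stand-in for GDS-style scores

auc = roc_auc_score(diagnosis, scores)
fpr, tpr, thresholds = roc_curve(diagnosis, scores)
best = np.argmax(tpr - fpr)                             # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, cut-off > {thresholds[best]:.1f}")
```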
Abstract:
Travellers are spoilt by holiday choice, and yet will usually only seriously consider a few destinations during the decision process. With thousands of destination marketing organisations (DMOs) competing for attention, places are becoming increasingly substitutable. The study of destination competitiveness is an emerging field, and this thesis contributes to an enhanced understanding by addressing three topics that have received relatively little attention in the tourism literature: destination positioning, the context of short break holidays, and domestic travel in New Zealand. A descriptive model of positioning as a source of competitive advantage is developed, and tested through 12 propositions. The destination of interest is Rotorua, which was arguably New Zealand’s first tourist destination. The market of interest is Auckland, which is Rotorua’s largest visitor market. Rotorua’s history is explored to identify factors that may have contributed to the destination’s current image in the Auckland market. A mix of qualitative and quantitative procedures is then utilised to determine Rotorua’s position, relative to a competing set of destinations. Based on an applied research problem, the thesis attempts to bridge the gap between academia and industry by providing useable results and benchmarks for five regional tourism organisations (RTOs). It is proposed that, in New Zealand, the domestic short break market represents a valuable opportunity not explicitly targeted by the competitive set of destinations. Conceptually, the thesis demonstrates the importance of analysing a destination’s competitive position, from the demand perspective, in a travel context; and then the value of comparing this ‘ideal’ position with that projected by the RTO. The thesis concludes that Rotorua’s market position in the Auckland short break segment represents a source of comparative advantage, but is not congruent with the current promotional theme, which is being used in all markets. The findings also have implications for destinations beyond the context of the thesis. In particular, a new definition for ‘destination attractiveness’ is proposed, which warrants consideration in the design of future destination positioning analyses.
Abstract:
The Source Monitoring Framework is a promising model of constructive memory, yet it fails because it is connectionist and does not allow content tagging. The Dual-Process Signal Detection Model is an improvement because it reduces mnemic qualia to a single memory signal (or degree of belief), but it still commits itself to non-discrete representation. If ‘tagging’ is taken to mean the assignment of propositional attitudes to aggregates of mnemic characteristics, informed inductively, then a discrete model becomes plausible. A Bayesian model of source monitoring accounts for the continuous variation of inputs and the assignment of prior probabilities to memory content. A modified version of the High-Threshold Dual-Process model is recommended to further source monitoring research.
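As a toy illustration of the Bayesian idea mentioned above (not the model recommended in the abstract), the sketch below applies Bayes' rule to a source attribution: a continuous memory signal updates a prior probability that a remembered item came from source A rather than source B. The Gaussian signal distributions and all parameter values are assumptions made for the example.

```python
# Toy illustration of Bayesian source attribution from a continuous memory signal.
from math import exp, sqrt, pi

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def posterior_source_a(signal, prior_a=0.5, mu_a=1.0, mu_b=0.0, sigma=1.0):
    """P(source A | memory signal), assuming Gaussian signal distributions per source."""
    like_a = normal_pdf(signal, mu_a, sigma) * prior_a
    like_b = normal_pdf(signal, mu_b, sigma) * (1 - prior_a)
    return like_a / (like_a + like_b)

print(posterior_source_a(0.8))                 # ~0.57: weak evidence for source A
print(posterior_source_a(0.8, prior_a=0.2))    # a low prior pulls the attribution toward B
```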
Abstract:
On 12 June 2014, Elon Musk, the chief executive officer of the electric car manufacturer Tesla Motors, announced in a blog post that ‘all our patents belong to you.’ He explained that the company would adopt an open source philosophy in respect of its intellectual property in order to encourage the development of the electric car industry, and to address the carbon crisis. Elon Musk made the dramatic, landmark announcement: Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.
Abstract:
This study aimed to take existing anatomical models of pregnant women, currently used for radiation protection and nuclear medicine dose calculations, and adapt them for use in the calculation of fetal dose from external beam radiotherapy (EBRT). The models investigated were ‘KATJA’, which was provided as an MCNPX geometry file, and ‘RPI-P6’, which was provided in a simple, voxelized binary format. In-house code was developed to convert both models into an ‘egsphant’ format, suitable for use with DOSXYZnrc. The geometries and densities of the resulting phantoms were evaluated and found to accurately represent the source data. As an example of the use of the phantoms, the delivery of a cranial EBRT treatment was simulated using the BEAMnrc and DOSXYZnrc Monte Carlo codes, and the likely out-of-field doses to the fetus in each model were calculated. The results of these calculations showed good agreement (within one standard deviation) between the doses calculated in KATJA and RPI-P6, despite substantial anatomical differences between the two models. For a 36 Gy prescription dose to a 233.2 cm3 target in the right brain, the mean doses calculated in a region of interest covering the entire uterus were 1.0 +/- 0.6 mSv for KATJA and 1.3 +/- 0.9 mSv for RPI-P6. This work is expected to lead to more comprehensive studies of EBRT treatment plan design and its effects on fetal dose in the future.
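A hedged sketch of the kind of conversion described: writing a voxelised phantom (a material index and a density per voxel) out as an egsphant-style text file for DOSXYZnrc. This is not the authors' in-house code; the field layout follows the commonly documented egsphant structure and should be checked against the DOSXYZnrc manual, and the media names are illustrative.

```python
# Sketch only: write a voxelised phantom in an egsphant-style layout (verify against DOSXYZnrc docs).
import numpy as np

def write_egsphant(path, media, material_ids, densities, bounds):
    """material_ids, densities: (nx, ny, nz) arrays; bounds: (x, y, z) voxel-edge arrays in cm."""
    nx, ny, nz = material_ids.shape
    with open(path, "w") as f:
        f.write(f"{len(media)}\n")
        for name in media:                                   # one medium name per line
            f.write(f"{name}\n")
        f.write(" ".join("0.0" for _ in media) + "\n")       # dummy ESTEPE values
        f.write(f"{nx} {ny} {nz}\n")
        for edges in bounds:                                  # voxel boundaries in x, y, z
            f.write(" ".join(f"{v:.4f}" for v in edges) + "\n")
        for k in range(nz):                                   # medium index map, slice by slice
            for j in range(ny):
                f.write("".join(str(material_ids[i, j, k]) for i in range(nx)) + "\n")
            f.write("\n")
        for k in range(nz):                                   # mass densities, slice by slice
            for j in range(ny):
                f.write(" ".join(f"{densities[i, j, k]:.4f}" for i in range(nx)) + "\n")
            f.write("\n")

# Minimal usage example: a 2x2x2 water/air phantom with 1 cm voxels
ids = np.array([[[1, 1], [1, 2]], [[1, 1], [1, 1]]])
rho = np.where(ids == 1, 1.0, 0.0012)
edges = [np.linspace(0, 2, 3)] * 3
write_egsphant("toy.egsphant", ["H2O521ICRU", "AIR521ICRU"], ids, rho, edges)
```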
Abstract:
Frequency Domain Spectroscopy (FDS) is one of the major techniques used for determining the condition of the cellulose-based paper and pressboard components in large oil/paper insulated power transformers. This technique typically makes use of a sinusoidal voltage source swept from 0.1 mHz to 1 kHz. The excitation test voltage source used must meet certain characteristics, such as high output voltage, high fidelity, low noise and low harmonic content. The amplifier used in the test voltage source must be able to drive highly capacitive loads. This paper proposes that a switch-mode assisted linear amplifier (SMALA) can be used in the test voltage source to meet these criteria. A three-level SMALA prototype amplifier was built to experimentally demonstrate the effectiveness of this proposal. The developed SMALA prototype shows no discernible harmonic distortion in the output voltage waveform and requires no output filters, and is therefore seen as a preferable option to pulse width modulated digital amplifiers. The lack of harmonic distortion and high frequency switching noise in the output voltage of this SMALA prototype demonstrates its feasibility for applications in FDS, particularly on highly capacitive test objects such as transformer insulation systems.
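An illustrative sketch of the excitation signal described above, not of the SMALA amplifier itself: logarithmically spaced test frequencies spanning 0.1 mHz to 1 kHz, with one sinusoidal burst per frequency. The number of sweep points, burst length and peak amplitude are arbitrary values chosen for the example.

```python
# Illustrative only: log-spaced FDS test frequencies and a sinusoidal burst per point.
import numpy as np

f_start, f_stop = 1e-4, 1e3                                        # 0.1 mHz to 1 kHz
freqs = np.logspace(np.log10(f_start), np.log10(f_stop), num=8)    # sweep points (arbitrary count)

def excitation(freq, amplitude=200.0, cycles=2, samples_per_cycle=1000):
    """One sinusoidal test burst at `freq` Hz with an assumed peak voltage."""
    t = np.arange(cycles * samples_per_cycle) / (freq * samples_per_cycle)
    return t, amplitude * np.sin(2 * np.pi * freq * t)

for f in freqs:
    t, v = excitation(f)
    print(f"{f:.4g} Hz burst lasting {t[-1]:.3g} s, peak {v.max():.0f} V")
```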
Abstract:
Law is saturated with stories. People tell their stories to lawyers; lawyers tell their clients’ stories to courts; legislators develop regulation to respond to their constituents’ stories of injustice or inequality. In legal education, professors devise hypothetical scenarios to test student understanding of legal doctrine; in law examinations and assignments, students construct advice to fictional clients. The common law legal system derives many of its foundational principles from case law — in effect, stories with legal solutions — that have accumulated over time. The civil law system, despite a different design centred on legal codes, also relies on judicial story-telling to interpret the code provisions and flesh out the gaps.
Abstract:
The world has experienced a large increase in the amount of available data, which calls for better and more specialized tools for data storage, retrieval and information privacy. Electronic Health Record (EHR) systems have recently emerged to fulfill this need in health systems. They play an important role in medicine by granting access to information that can be used in medical diagnosis. Traditional systems focus on the storage and retrieval of this information, usually leaving issues related to privacy in the background. Doctors and patients may have different objectives when using an EHR system: patients try to restrict sensitive information in their medical records to avoid its misuse, while doctors want to see as much information as possible to ensure a correct diagnosis. One solution to this dilemma is the Accountable e-Health model, an access protocol model based on the Information Accountability Protocol. In this model, patients are warned when doctors access their restricted data, while non-restrictive access is still enabled for authenticated doctors. In this work we use FluxMED, an EHR system, and augment it with aspects of the Information Accountability Protocol to address these issues. The implementation of the Information Accountability Framework (IAF) in FluxMED provides ways for both patients and physicians to have their privacy and access needs met. Issues related to storage and data security are handled by FluxMED, which contains mechanisms to ensure security and data integrity. The effort required to develop a platform for the management of medical information is mitigated by FluxMED's workflow-based architecture: the system is flexible enough to allow the type and amount of information to be altered without the need to change its source code.
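A hypothetical sketch of the accountability idea described above, not FluxMED's actual implementation or the IAF API: an authenticated doctor can read a patient's restricted record, but the access is logged and the patient is notified rather than the access being blocked. All names and structures below are invented for illustration.

```python
# Hypothetical example: accountable (logged and notified) access to restricted EHR fields.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Record:
    patient: str
    content: dict
    restricted_fields: set = field(default_factory=set)

access_log = []   # a real system would use a tamper-evident store

def notify_patient(patient, doctor, fields):
    print(f"notice to {patient}: Dr {doctor} accessed restricted fields {fields}")

def read_record(record, doctor, authenticated):
    if not authenticated:
        raise PermissionError("only authenticated doctors may read records")
    touched = [k for k in record.content if k in record.restricted_fields]
    if touched:                                   # access is allowed but accountable
        access_log.append((datetime.utcnow(), doctor, record.patient, touched))
        notify_patient(record.patient, doctor, touched)
    return record.content

rec = Record("Alice", {"allergies": "none", "psych_history": "restricted"}, {"psych_history"})
read_record(rec, doctor="Bob", authenticated=True)
```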