310 results for Semi-Open
Abstract:
Standards are designed to promote the interoperability of products and systems by enabling different parties to develop technologies that can be used together. There is an increasing expectation in many technical communities, including open source communities, that standards will be ‘open’. However, standards are subject to legal rights which impact not only their development but also their implementation. Of central importance are intellectual property rights: technical standards may incorporate patented technologies, while the specification documents of standards are protected by copyright. This article provides an overview of the processes by which standards are developed and considers the concept of ‘interoperability’, the meaning of the term ‘open standard’ and how open standards contribute to interoperability. It explains how intellectual property rights operate in relation to standards and how they can be managed to create standards that are open, not only during their development but also in implementation.
Abstract:
This book chapter considers recent developments in Australia and other key jurisdictions in relation to both the formation of a national information strategy and the management of legal rights in public sector information.
Abstract:
This paper examines the proposition that an increased ability to have a voice and be listened to, through ‘open ICT4D’ and ‘open content creation’, can be an effective mechanism for development. The paper discusses empirical work strongly indicating that this only happens when voice is appropriately valued in the development process. Having a voice in development processes is less effective when participation is limited. Open ICT allows more and more voices to be heard, but it is open ICT4D that has the obligation to ensure voices are listened to. In the paper I first explore participatory development and the idea of open ICT4D before elaborating on issues of voice, thinking about voice as process and voice as value. Findings are then presented from research that experimented with participatory (or open) content creation, discussed in relation to notions of openness and voice. I then consider the challenges of listening, before drawing some conclusions about opening up ICT4D research.
Abstract:
Over the last decade, system integration has grown in popularity as it allows organisations to streamline business processes. Traditionally, system integration has been conducted through point-to-point solutions: as each new integration requirement arises, a custom solution is built between the relevant systems. Bus-based solutions are now preferred, whereby all systems communicate via an intermediary system such as an enterprise service bus, using a common data exchange model. This research investigates the use of a common data exchange model based on open standards, specifically MIMOSA OSA-EAI, for asset management system integration. A case study is conducted that involves the integration of processes between a SCADA, maintenance decision support and work management system. A diverse range of software platforms are employed in developing the final solution, all tied together through MIMOSA OSA-EAI-based XML web services. The lessons learned from the exercise are presented throughout the paper.
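To make the bus-based pattern concrete, the following minimal sketch (in Python) shows a producer system wrapping a SCADA reading in a common-model XML document and publishing it to a bus endpoint. The element names, endpoint URL and helper functions are illustrative assumptions only; they are not drawn from the actual MIMOSA OSA-EAI schema.

    # Minimal sketch of bus-based integration: translate a native reading
    # into a common XML exchange model and POST it to the intermediary.
    # Element names and the endpoint URL are hypothetical placeholders.
    import urllib.request
    import xml.etree.ElementTree as ET

    def build_measurement_event(asset_id, value, unit):
        """Wrap a SCADA reading in a common-model XML document."""
        event = ET.Element("MeasurementEvent")
        ET.SubElement(event, "AssetID").text = asset_id
        ET.SubElement(event, "Value").text = str(value)
        ET.SubElement(event, "Unit").text = unit
        return ET.tostring(event, encoding="utf-8")

    def publish_to_bus(payload, endpoint="http://esb.example.local/events"):
        """POST the common-model document to the bus endpoint; every
        consumer then reads the same exchange model, not a custom format."""
        req = urllib.request.Request(
            endpoint, data=payload, headers={"Content-Type": "application/xml"}
        )
        return urllib.request.urlopen(req)

With this pattern, adding a new system requires only one translation to the common model, rather than a custom point-to-point adapter for every existing system.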
Abstract:
In October 2008, the Australian Learning and Teaching Council (ALTC) released the final report for the commissioned project ePortfolio use by university students in Australia: Informing excellence in policy and practice. The Australian ePortfolio Project represented the first attempt to examine the breadth and depth of ePortfolio practice in the Australian higher education sector. The research activities included surveys of stakeholder groups in learning and teaching, academic management and human resource management, with respondents representing all Australian universities; a series of focus groups and semi-structured interviews which sought to explore key issues in greater depth; and surveys designed to capture students’ pre-course expectations and their post-course experiences of ePortfolio learning. Further qualitative data was collected through interviews with ‘mature users’ of ePortfolios. Project findings revealed that, while there was a high level of interest in the use of ePortfolios in terms of the potential to help students become reflective learners who were conscious of their personal and professional strengths and weaknesses, the state of play in Australian universities was very fragmented. The project investigation identified four individual, yet interrelated, contexts where strategies may be employed to support and foster effective ePortfolio practice in higher education: government policy, technical standards, academic policy, and learning and teaching. Four scenarios for the future were also presented with the goal of stimulating discussion about opportunities for stakeholder engagement. It is argued that the effective use of ePortfolios requires open dialogue and collaboration between the different stakeholders across this range of contexts.
Abstract:
This paper argues for a model of complex system design for sustainable architecture within a framework of entropy evolution. The spectrum of sustainable architecture consists of the efficient use of energy and material resources over the life-cycle of buildings, the active involvement of occupants in micro-climate control within buildings, and the natural environmental context. The interactions of these parameters compose a complex system of sustainable architectural design, for which conventional linear and fragmented design technologies are insufficient to indicate holistic and ongoing environmental performance. The complexity theory of dissipative structures provides a microscopic formulation of open-system evolution, which in turn provides a system design framework for the evolution of building environmental performance towards an optimization of sustainability in architecture.
Abstract:
In this thesis we are interested in financial risk and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently; in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Saikkonen, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
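As an illustration of the semi-parametric approach described above, the sketch below fits an FMKL-type GLD to a moving window of returns by least-squares matching of empirical quantiles and reads VaR off the fitted quantile function. The window length, fitting method and starting values are illustrative assumptions, not the thesis's exact procedure.

    # Moving-window VaR from a generalised lambda distribution (FMKL form).
    # The quantile-matching fit and all tuning values are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    def gld_quantile(u, lam):
        """FMKL GLD quantile function Q(u; l1, l2, l3, l4)."""
        l1, l2, l3, l4 = lam
        return l1 + ((u**l3 - 1.0) / l3 - ((1.0 - u)**l4 - 1.0) / l4) / l2

    def fit_gld(x):
        """Fit the four lambdas by least-squares matching of empirical quantiles."""
        probs = np.linspace(0.01, 0.99, 99)
        emp_q = np.quantile(x, probs)
        def loss(lam):
            # keep the scale positive and the tail indices away from zero
            if lam[1] <= 0 or abs(lam[2]) < 1e-8 or abs(lam[3]) < 1e-8:
                return np.inf
            return np.sum((gld_quantile(probs, lam) - emp_q) ** 2)
        start = np.array([np.median(x), 1.0 / np.std(x), 0.1, 0.1])
        return minimize(loss, start, method="Nelder-Mead").x

    def rolling_var(returns, window=250, alpha=0.01):
        """One-day VaR at level alpha from a GLD refitted on each window."""
        var = np.full(len(returns), np.nan)
        for t in range(window, len(returns)):
            lam = fit_gld(returns[t - window:t])
            var[t] = -gld_quantile(alpha, lam)  # positive number = loss
        return var

    rng = np.random.default_rng(0)
    simulated = rng.standard_t(df=4, size=500) * 0.01  # heavy-tailed returns
    print(rolling_var(simulated)[-5:])

Because each window is refitted, the four lambdas, and hence skewness and kurtosis, are free to drift over time, which is precisely the time variation the thesis investigates.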
Abstract:
The standard Blanchard-Quah (BQ) decomposition forces aggregate demand and supply shocks to be orthogonal. However, this assumption is problematic for a nation with an inflation target. The very notion of inflation targeting means that monetary policy reacts to changes in aggregate supply. This paper employs a modification of the BQ procedure that allows for correlated shifts in aggregate supply and demand. It is found that shocks to Australian aggregate demand and supply are highly correlated. The estimated shifts in the aggregate demand and supply curves are then used to measure the effects of inflation targeting on the Australian inflation rate and level of GDP.
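For reference, a minimal sketch of the standard BQ identification that the paper modifies is given below, as a bivariate VAR in Python. The variable ordering (output growth first), the lag length and the function name are illustrative assumptions.

    # Standard Blanchard-Quah identification: the demand shock (ordered
    # second) is restricted to have no long-run effect on output (ordered
    # first). Lag length and ordering are illustrative.
    import numpy as np
    from statsmodels.tsa.api import VAR

    def blanchard_quah(data, lags=4):
        """data: T x 2 array, column 0 = output growth, column 1 = inflation."""
        res = VAR(data).fit(lags)
        k = data.shape[1]
        # long-run multiplier F = (I - A1 - ... - Ap)^{-1}
        F = np.linalg.inv(np.eye(k) - sum(res.coefs))
        # lower-triangular Cholesky of the long-run covariance: the zero in
        # position (0, 1) imposes no long-run demand effect on output
        C = np.linalg.cholesky(F @ res.sigma_u @ F.T)
        B0 = np.linalg.inv(F) @ C                 # contemporaneous impact matrix
        shocks = res.resid @ np.linalg.inv(B0).T  # orthogonal structural shocks
        return B0, shocks

The modification the paper employs relaxes exactly this orthogonality, allowing the recovered supply and demand shifts to be correlated.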
Abstract:
An algorithm to improve the accuracy and stability of rigid-body contact force calculation is presented. The algorithm uses a combination of analytic solutions and numerical methods to solve a spring-damper differential equation typical of a contact model. The solution method employs the recently proposed patch method, which is especially suited to spring-damper differential equations. The resulting semi-analytic solution reduces the stiffness of the differential equations while performing faster than conventional alternatives.
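The core idea can be illustrated with a minimal sketch: during contact, penetration obeys a linear spring-damper equation m*x'' + c*x' + k*x = 0, which can be advanced exactly over each timestep with a matrix exponential instead of a stiff explicit step. The interface and parameter values below are illustrative assumptions; this is not a reproduction of the paper's patch method.

    # Exact per-step update of a linear spring-damper contact ODE.
    # Parameter values are illustrative; stiff springs (large k) that
    # would destabilise explicit integration are handled exactly here.
    import numpy as np
    from scipy.linalg import expm

    def contact_step(x, v, dt, m=1.0, k=1e6, c=1e3):
        """Advance penetration x and velocity v by dt analytically;
        return the new state and the contact force."""
        A = np.array([[0.0, 1.0],
                      [-k / m, -c / m]])        # state matrix for (x, v)
        x, v = expm(A * dt) @ np.array([x, v])  # exact, stiffness-safe step
        force = -k * x - c * v                  # spring-damper contact force
        return x, v, force

    print(contact_step(x=1e-3, v=-0.1, dt=1e-3))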
Abstract:
The activities of governments, by their very nature, involve interactions with a broad array of public and private sector entities, from other governments to business, academia and individual citizens. In the current era, there is a growing expectation that government programs and services will be delivered in a ‘simple, seamless and connected’ manner, leading to increased efficiency in government operations and improved service delivery. Achieving ‘collaborative, effective and efficient government and the delivery of seamless government services’ requires the implementation of interoperable technologies and procedures. Standards, which aim to enable organisations, platforms and systems to work with each other, are fundamental to interoperability.
Abstract:
This chapter provides an account of the use of Creative Commons (CC) licensing as a legally and operationally effective means by which governments can implement systems to enable open access to and reuse of their public sector information (PSI). It describes the experience of governments in Australia in applying CC licences to PSI in a context where a vast range of material and information produced, collected, commissioned or funded by government is subject to copyright. By applying CC licences, governments can give effect to their open access policies and create a public domain of PSI which is available for reuse by other governmental agencies and the community at large.
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and tumour volume definition based on the PET image data, for radiotherapy for lung cancer patients, and then to test these protocols with respect to levels of accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel levels of accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol.
Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient’s planning CT scans. Conclusions: The developed clinical protocols allow a patient’s whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
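As a small illustration of the threshold-based contouring step, the sketch below marks every voxel whose uptake exceeds a fixed percentage of the maximum within a volume of interest. The 40% figure, array shapes and function name are illustrative assumptions; the thesis adapts the threshold to target-to-background ratio and respiratory motion.

    # Percentage-of-maximum threshold contouring of a PET volume of interest.
    # The 40% threshold is a hypothetical value for illustration.
    import numpy as np

    def threshold_contour(pet_voi, pct_of_max=0.40):
        """Binary tumour mask: voxels at or above pct_of_max * peak uptake."""
        return pet_voi >= pct_of_max * pet_voi.max()

    rng = np.random.default_rng(1)
    voi = rng.random((32, 32, 32))            # synthetic background uptake
    voi[12:20, 12:20, 12:20] += 2.0           # hot 'tumour' region
    mask = threshold_contour(voi)
    print(mask.sum(), 'voxels in the contoured volume')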