856 results for complexity
Abstract:
Health Information Systems (HIS) make extensive use of Information and Communication Technologies (ICT). The use of ICT aids in improving the quality and efficiency of healthcare services by making healthcare information available at the point of care (Goldstein, Groen, Ponkshe, and Wine, 2007). The increasing availability of healthcare data presents security and privacy issues which have not yet been fully addressed (Liu, Caelli, May, and Croll, 2008a). Healthcare organisations have to comply with the security and privacy requirements stated in laws, regulations and ethical standards while managing healthcare information. Protecting the security and privacy of healthcare information is a very complex task (Liu, May, Caelli, and Croll, 2008b). In order to reduce the complexity of providing security and privacy in HIS, appropriate information security services and mechanisms have to be implemented. Solutions at the application layer have already been implemented in HIS, such as those existing in healthcare web services (Weaver et al., 2003). In addition, Discretionary Access Control (DAC) is the most commonly implemented access control model for restricting access to resources at the Operating System (OS) layer (Liu, Caelli, May, Croll, and Henricksen, 2007a). Nevertheless, the combination of application security mechanisms and DAC at the OS layer has been shown to be insufficient for satisfying security requirements in computer systems (Loscocco et al., 1998). This thesis investigates the feasibility of implementing Security Enhanced Linux (SELinux) to enforce a Role-Based Access Control (RBAC) policy that helps protect resources at the OS layer. SELinux provides Mandatory Access Control (MAC) mechanisms at the OS layer; these mechanisms can contain the damage from compromised applications and restrict access to resources according to the security policy implemented. The main contribution of this research is to provide a modern framework for implementing and managing SELinux in HIS. The proposed framework introduces SELinux Profiles to restrict access permissions over system resources to authorised users. The feasibility of using SELinux Profiles in HIS was demonstrated through the creation of a prototype, which was subjected to various attack scenarios. The prototype was also tested under emergency scenarios, where changes to the security policies had to be made on the spot. Attack scenarios were based on vulnerabilities common at the application layer. SELinux demonstrated that it could effectively contain attacks at the application layer and provide adequate flexibility during emergency situations. However, even with the use of current tools, the development of SELinux policies can be very complex. Further research is needed to simplify the management of SELinux policies and access permissions. In addition, SELinux-related technologies, such as the Policy Management Server by Tresys Technologies, need to be investigated in order to provide solutions at different layers of protection.
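SELinux policy itself is written in its own policy language rather than in a general-purpose one; purely as a conceptual illustration of the RBAC idea behind the proposed SELinux Profiles (users mapped to roles, roles to permitted object types and operations), the following Python sketch is a hypothetical model, not the thesis's implementation:

```python
# Conceptual RBAC illustration only: roles, types and permissions are invented.
ROLE_TYPES = {
    "nurse_r":  {"patient_record_t": {"read"}},
    "doctor_r": {"patient_record_t": {"read", "write"},
                 "prescription_t":   {"read", "write"}},
}
USER_ROLES = {"alice": "doctor_r", "bob": "nurse_r"}

def allowed(user: str, obj_type: str, perm: str) -> bool:
    """Mandatory check: deny unless the user's role grants perm on the type."""
    role = USER_ROLES.get(user)
    return perm in ROLE_TYPES.get(role, {}).get(obj_type, set())

assert allowed("alice", "prescription_t", "write")
assert not allowed("bob", "prescription_t", "write")  # contained by policy
```

Under SELinux proper, an analogous effect is achieved by the policy's user, role and type statements and its allow rules, enforced by the kernel rather than by application code.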
Abstract:
Information and communication technology (ICT) curriculum integration is the apparent goal of an extensive array of educational initiatives in all Australian states and territories. However, ICT curriculum integration is neither value-neutral nor universally understood. The literature indicates the complexity of rationales and terminology that underwrite various initiatives; various dimensions and stages of integration; inherent methodological difficulties; obstacles to integration; and significant issues relating to teacher professional development and ICT competencies (Jamieson-Proctor, Watson, & Finger, 2003). This paper investigates the overarching question: are ICT integration initiatives making a significant impact on teaching and learning in Queensland state schools? It reports the results of a teacher survey that measures the quantity and quality of student use of ICT. Results from 929 teachers across all year levels in 38 Queensland state schools indicate that female teachers (73% of the full-time teachers in Queensland state schools in 2005) are significantly less confident than their male counterparts in using ICT with students for teaching and learning, and there is evidence of significant resistance to using ICT to align the curriculum with new times and new technologies. This result supports the hypothesis that current ICT initiatives are having uneven and less than the desired results system-wide. These results warrant urgent further investigation to address the factors that currently constrain the use of ICT for teaching and learning.
Abstract:
Principal Topic: In this paper we seek to highlight the important intermediate role that the gestation process plays in entrepreneurship by examining its key antecedents and its consequences for new venture emergence. In doing so we take a behavioural perspective and argue that it is not only what a nascent venture is, but what it does (Katz & Gartner, 1988; Shane & Delmar, 2004; Reynolds, 2007) and when it does it during start-up (Reynolds & Miller, 1992; Lichtenstein, Carter, Dooley & Gartner, 2007) that matter. To extend an analogy from biological development, we suggest that the way a new venture is nurtured is just as fundamental as its nature. Much prior research has focused on the nature of new ventures and attempted to attribute variations in outcomes directly to the impact of resource endowments and investments. While there is little doubt that venture resource attributes such as human capital, and specifically prior entrepreneurial experience (Alsos & Kolvereid, 1998), and access to social (Davidsson & Honig, 2003) and financial capital have an influence, resource attributes themselves are distal from successful start-up endeavours and remain inanimate if not for the actions of the nascent venture. The key contribution we make is to shift focus from whether or not actions are taken to when those actions happen and how they are situated in the overall gestation process. Thus, we suggest that it is gestation process dynamics, or when gestation actions occur, that are more proximal to venture outcomes, and we focus on these. Recently scholars have highlighted the complexity that exists in the start-up or gestation process, be it temporal or contextual (Liao, Welsch & Tan, 2005; Lichtenstein et al., 2007). There is great variation in how long a start-up process might take (Reynolds & Miller, 1992), some processes require less action than others (Carter, Gartner & Reynolds, 1996), and the overall intensity of the start-up effort is also deemed important (Reynolds, 2007). And, despite some evidence that particular activities are more influential than others (Delmar & Shane, 2003), the order in which events happen has, until now, been largely indeterminate as regards its influence on success (Liao & Welsch, 2008). We suggest that it is this complexity of the intervening gestation process that attenuates the effect of resource endowments and has resulted in mixed findings in previous research. Thus, in order to reduce complexity, we take a holistic view of the gestation process and argue that it is its dynamic properties that determine the outcomes of nascent venture attempts. Importantly, we acknowledge that particular gestation processes of themselves do not guarantee successful start-up; it is more correctly the fit between the process dynamics and the venture's attributes (Davidsson, 2005) that is influential. So we aim to examine process dynamics by comparing sub-groups of venture types by resource attributes. Thus, as an initial step toward unpacking the complexity of the gestation process, this paper aims to establish the importance of its role as an intermediary between the attributes of the nascent venture and the emergence of that venture. Here, we make a contribution by empirically examining gestation process dynamics and their fit with venture attributes.
We do this by, firstly, examining the nature of the influence that venture attributes such as human and social capital have on the dynamics of the gestation process, and secondly, by investigating the effect that gestation process dynamics have on venture creation outcomes. Methodology and Propositions: In order to explore the importance of gestation process dynamics in nascent entrepreneurship, we conduct an empirical study of venture start-ups. Data are drawn from a screened random sample of 625 Australian nascent business ventures prior to their achieving consistent outcomes in the market. These data were collected during 2007/8 and 2008/9 as part of the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE) project (Davidsson et al., 2008). CAUSEE is a longitudinal panel study conducted over four years, sourcing information from annually administered telephone surveys. Importantly for our study, this methodology allows for the capture and tracking of active nascent venture creation as it happens, thus reducing hindsight and selection biases. In addition, improved tests of causality can be made, given that outcome measures are temporally removed from preceding events. The data analysed in this paper represent the first two of these four years and, for the first time, include follow-up outcome measures for these venture attempts: 260 were successful, 126 were abandoned, and 191 are still in progress. With regard to venture attributes as gestation process antecedents, we examine specific human capital, measured as successful prior experience in entrepreneurship, and the direct social capital of the venture as 'team start-ups'. In assessing gestation process dynamics we follow Lichtenstein et al. (2007) in suggesting that the rate, concentration and timing of gestation activities may be used to summarise the complexity dynamics of that process. In addition, we extend this set of measures to include the interaction of discovery and exploitation by way of changes made to the venture idea. Those ventures with successful prior experience, or those who conduct symbiotic parallel start-up attempts, may be able to, or be forced to, leave their gestation action until later and still derive a successful outcome. In addition, access to direct social capital may provide the support on which the venture may draw in order to persevere in the face of adversity, turning a seemingly futile start-up attempt into a success. On the other hand, prior experience may engender the foresight to terminate a venture attempt early should it be seen to be going nowhere. The temporal nature of these conjectures highlights the importance of process dynamics, and this will be examined in this research. Statistical models are developed to examine gestation process dynamics: we use multivariate general linear modelling to analyse how human and social capital factors influence gestation process dynamics, and, in turn, event history models and stratified Cox regression to assess the influence that gestation process dynamics have on venture outcomes. Results and Implications: What entrepreneurs do is of interest to scholars and practitioners alike. Thus the results of this research are important, since they focus on nascent behaviour and its outcomes. While venture attributes themselves may be influential, this is of little actionable assistance to practitioners.
For example, it is unhelpful to say to the prospective first-time entrepreneur, "you'll be more successful if you have lots of prior experience in firm start-ups". This research attempts to close this relevance gap by addressing what gestation behaviours might be appropriate, when actions are best focused, and, most importantly, in what circumstances. Further, we make a contribution to the entrepreneurship literature by examining the role that gestation process dynamics play in outcomes, specifically relating these to the nature of the venture itself. This extension is, to the best of our knowledge, new to the research field.
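The abstract names stratified Cox regression as its event-history tool; as a minimal sketch of that kind of model (using the lifelines library, with invented variable names standing in for the CAUSEE measures), one might write:

```python
# Illustrative sketch only: the paper's models are described, not published.
# Column names (duration, completed, rate, team_start, prior_experience)
# are hypothetical stand-ins for the CAUSEE variables.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 577  # e.g. 260 successful + 126 abandoned + 191 still in progress
df = pd.DataFrame({
    "duration":   rng.exponential(18, n),   # months in gestation
    "completed":  rng.integers(0, 2, n),    # 1 = venture emerged
    "rate":       rng.poisson(6, n),        # gestation activities per year
    "team_start": rng.integers(0, 2, n),    # direct social capital
    "prior_experience": rng.integers(0, 2, n),
})

# Stratify by prior entrepreneurial experience so each stratum keeps its
# own baseline hazard, as in a stratified Cox model.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="completed",
        strata=["prior_experience"])
cph.print_summary()
```

A fuller treatment of abandonment as a distinct outcome would call for competing-risks extensions, which this sketch omits.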
Abstract:
Background: In public health, as well as other health education contexts, there is increasing recognition of the transformation in public health practice and of the need for educational providers to keep pace. Traditionally, public health education has been at the postgraduate level; however, over the past decade an upsurge in the growth of undergraduate public health degrees has taken place. Discussion: This article explores the impact of these changes on the traditional sphere of Master of Public Health programs, the range of competencies required at undergraduate and postgraduate levels, and the relevance of these changes to the public health workforce. It raises questions about the complexity of the educational issues facing tertiary institutions and discusses the implications of these issues for undergraduate and postgraduate programs in public health. Conclusion: The planning and provision of education in public health must differentiate between the requirements of undergraduate and postgraduate students, while also addressing the changing needs of the health workforce. Within Australia, although significant research has been undertaken regarding the competencies required by postgraduate public health students, the approach is still somewhat piecemeal and does not address undergraduate public health. This paper argues for a consistent approach to competencies that describe and differentiate entry-level and advanced practice.
Abstract:
We introduce K-tree in an information retrieval context. K-tree is an efficient approximation of the k-means clustering algorithm that, unlike k-means, forms a hierarchy of clusters. It has been extended to address issues with sparse representations. We compare its performance and cluster quality to CLUTO using document collections. K-tree has a low time complexity, making it suitable for large document collections, and its tree structure allows for efficient disk-based implementations where space requirements exceed the capacity of main memory.
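K-tree itself builds its hierarchy incrementally, in the manner of a B+-tree with k-means applied at its internal nodes; the sketch below is not that algorithm, only a minimal recursive k-means illustration of how a cluster hierarchy over document vectors can arise:

```python
# Not the actual K-tree algorithm; a minimal recursive k-means sketch of
# the general idea of a cluster hierarchy over document vectors.
import numpy as np
from sklearn.cluster import KMeans

def cluster_tree(vectors, k=2, min_size=4, depth=0, max_depth=5):
    """Recursively split `vectors` with k-means, returning a nested dict."""
    if len(vectors) <= min_size or depth >= max_depth:
        return {"leaf": vectors}
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
    children = []
    for label in range(k):
        members = vectors[km.labels_ == label]
        children.append(cluster_tree(members, k, min_size, depth + 1, max_depth))
    return {"centroids": km.cluster_centers_, "children": children}

docs = np.random.rand(1000, 64)   # stand-in for (dense) document vectors
tree = cluster_tree(docs)
```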
Abstract:
Introduction: For the past decade, three-dimensional (3D) culture has served as a foundation for regenerative medicine research. With increasing awareness of the importance of the cell-cell and cell-extracellular matrix interactions that are lacking in 2D culture systems, 3D culture systems have been employed for many other applications, notably cancer research. Through the development of various biomaterials and the utilisation of tissue engineering technology, many in vivo physiological responses are now better understood. The cellular and molecular communication between cancer cells and their microenvironment, for instance, can be studied in vitro in a 3D culture system without relying on animal models alone. The predilection of prostate cancer (CaP) for bone remains obscure due to the complexity of the mechanisms and the lack of a proper model for such studies. In this study, we aim to investigate the interaction between CaP cells and osteoblasts, simulating natural bone metastasis. We also further investigate the invasiveness of CaP cells and the response of androgen-sensitive CaP cells (LNCaP) to synthetic androgen.

Method: Human osteoblast (hOB) scaffolds were prepared by seeding hOBs on medical-grade polycaprolactone-tricalcium phosphate (mPCL-TCP) scaffolds and inducing them to produce bone matrix. CaP cell lines, namely wild-type PC3 (PC3-N), prostate specific antigen-overexpressing PC3 (PC3k3s5) and LNCaP, were seeded on hOB scaffolds as co-cultures. Cell morphology was examined by Phalloidin-DAPI staining and SEM imaging. Gelatin zymography was performed on the 48-hour conditioned media (CM) from co-cultures to determine matrix metalloproteinase (MMP) activity. Gene expression in hOB/LNCaP co-cultures treated for 48 hours with 1 nM synthetic androgen R1881 was analysed by quantitative real-time PCR (qRT-PCR).

Results: Co-culture of PCCs and hOBs revealed that the morphology of PCCs on the tissue-engineered bone matrix varied from homogeneous to heterogeneous clusters. Enzymatically inactive pro-MMP2 was detected in CM from hOBs and PCCs cultured on scaffolds. An elevation in MMP9 activity was found only in the hOB/PC3-N co-culture. The hOB/LNCaP co-culture showed increased expression of key enzymes associated with steroid production, which also corresponded to an increase in prostate specific antigen (PSA) and MMP9.

Conclusions: Upregulation of MMP9 indicates the involvement of ECM degradation in cancer invasion and bone metastasis. Expression of enzymes involved in CaP progression, and of PSA, which is not expressed in osteoblasts, demonstrates that crosstalk between PCCs and osteoblasts may play a part in the aggressiveness of CaP. The presence of steroidogenic enzymes, particularly RDH5, in osteoblasts, and their stimulated expression in co-culture, may indicate osteoblast production of potent androgens, fuelling cancer cell proliferation. Based on these results, this practical 3D culture system may provide greater understanding of CaP-mediated bone metastasis, allowing the role of the CaP/hOB interaction in invasion and steroidogenesis to be explored further.
Abstract:
“Hardware in the Loop” (HIL) testing is widely used in the automotive industry: the sophisticated electronic control units used for vehicle control are usually tested and evaluated using HIL simulations. HIL increases the degree of realism in testing any system, and it helps in designing the structure and control of the system under test so that it works effectively in the situations it will encounter in service. Due to the size and complexity of interactions within a power network, most research is based on pure simulation, and to validate the performance of a physical generator or protection system, most testing is constrained to very simple power networks. This research, however, examines a method to test power system hardware within a complex virtual environment using the HIL concept. HIL testing of electronic control units and power system protection devices can easily be performed at the signal level, but the performance of power system equipment, such as distributed generation systems, cannot be evaluated at the signal level. HIL testing for power system equipment is termed here ‘Power Network in the Loop’ (PNIL). PNIL testing can only be performed at the power level and requires a power amplifier that can amplify the simulation signal to that level. A power network is divided into two parts: one part represents the Power Network Under Test (PNUT), while the other represents the rest of the complex network. The complex network is simulated in a real-time simulator (RTS) while the PNUT is connected to a Voltage Source Converter (VSC) based power amplifier. Two-way interaction between the simulator and the amplifier is achieved using analog-to-digital (A/D) and digital-to-analog (D/A) converters. The power amplifier amplifies the current or voltage signal of the simulator to the power level, establishing power-level interaction between the RTS and the PNUT. The first part of this thesis presents the design and control of a VSC-based power amplifier that can amplify a broadband voltage signal, and proposes a new Hybrid Discontinuous Control method for the amplifier. This amplifier can be used in several power system applications; its use in DSTATCOM and UPS applications is presented in the first part of the thesis. The later part of the thesis reports the solution of network-in-the-loop testing with the help of this amplifier. The experimental setup for PNIL testing was built in the laboratory of Queensland University of Technology, and the feasibility of PNIL testing was evaluated through experimental studies. In the last section of the thesis a universal load with power regenerative capability is designed; this universal load is used to test a DG system using PNIL concepts. This thesis is composed of published/submitted papers that form its chapters, each published or submitted during the period of candidature. Chapter 1 integrates all the papers to provide a coherent view of the wide-bandwidth switching amplifier and its use in different power system applications, especially for power system testing using PNIL.
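As a conceptual illustration of the PNIL signal exchange described above (not the QUT implementation; the time step, amplifier gain and test load below are invented for the sketch), a single co-simulation loop might look like:

```python
# Conceptual PNIL sketch: rts_step stands in for the real-time simulator,
# GAIN for the VSC-based power amplifier, and the 8-ohm load for the PNUT.
import numpy as np

DT = 50e-6      # simulation time step in seconds (assumed)
GAIN = 100.0    # amplifier voltage gain (assumed)

def rts_step(i_feedback, t):
    """One step of the simulated 'rest of network': returns the low-level
    voltage signal at the point of connection given the PNUT current."""
    return 1.0 * np.sin(2 * np.pi * 50 * t) - 0.01 * i_feedback

i_pnut = 0.0
for n in range(20000):
    t = n * DT
    v_signal = rts_step(i_pnut, t)   # D/A: simulator output at signal level
    v_power = GAIN * v_signal        # VSC amplifier: signal -> power level
    i_pnut = v_power / 8.0           # PNUT response (8-ohm test load, assumed)
    # A/D: the measured PNUT current is fed back into the simulator next step
```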
Abstract:
Introduction: Many bilinguals will have had the experience of unintentionally reading something in a language other than the intended one (e.g. MUG to mean mosquito in Dutch rather than a receptacle for a hot drink, as one of the possible intended English meanings), of finding themselves blocked on a word for which many alternatives suggest themselves (but, somewhat annoyingly, not in the right language), of their accent changing when stressed or tired and, occasionally, of starting to speak in a language that is not understood by those around them. These instances where lexical access appears compromised and control over language behaviour is reduced hint at the intricate structure of the bilingual lexical architecture and the complexity of the processes by which knowledge is accessed and retrieved. While bilinguals might tend to blame word finding and other language problems on their bilinguality, these difficulties per se are not unique to the bilingual population. However, what is unique, and yet far more common than is appreciated by monolinguals, is the cognitive architecture that subserves bilingual language processing. With bilingualism (and multilingualism) the rule rather than the exception (Grosjean, 1982), this architecture may well be the default structure of the language processing system. As such, it is critical that we understand more fully not only how the processing of more than one language is subserved by the brain, but also how this understanding furthers our knowledge of the cognitive architecture that encapsulates the bilingual mental lexicon. The neurolinguistic approach to bilingualism focuses on determining the manner in which the two (or more) languages are stored in the brain and how they are differentially (or similarly) processed. The underlying assumption is that the acquisition of more than one language requires at the very least a change to or expansion of the existing lexicon, if not the formation of language-specific components, and this is likely to manifest in some way at the physiological level. There are many sources of information, ranging from data on bilingual aphasic patients (Paradis, 1977, 1985, 1997) to lateralization (Vaid, 1983; see Hull & Vaid, 2006, for a review), recordings of event-related potentials (ERPs) (e.g. Ardal et al., 1990; Phillips et al., 2006), and positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies of neurologically intact bilinguals (see Indefrey, 2006; Vaid & Hull, 2002, for reviews). Following consideration of the methodological issues and interpretative limitations that characterize these approaches, the chapter focuses on how their application has furthered our understanding of (1) the selectivity of bilingual lexical access, (2) distinctions between word types in the bilingual lexicon and (3) the control processes that enable language selection.
Abstract:
An important aspect of designing any product is validation. The virtual design process (VDP) is an alternative to hardware prototyping in which designs can be analysed without manufacturing physical samples. In recent years, VDPs have been generated mainly for animation and filming applications. This paper proposes a virtual reality design process model for one such application: use as a validation tool. The technique is used to generate a complete design guideline and validation tool for product design. To support the design process of a product, a virtual environment and a VDP method were developed that support validation and an initial design cycle performed by a designer. A car carrier product model is used as the illustration for which the virtual design was generated. The loading and unloading sequence of the prototype model was generated using automated reasoning techniques and completed by interactively animating the product in the virtual environment before the complete design was built. Using the VDP, critical issues such as loading, unloading, Australian Design Rules (ADR) compliance and clearance analysis were addressed. The process saves time and money in physical sampling and, to a large extent, in complete math data generation; since only schematic models are required, it also saves time in math modelling and in handling larger assemblies with complex models. This extension of the VDP to design evaluation is unique, and it was developed and implemented successfully. In this paper, a Toll Logistics and J Smith and Sons car carrier developed under the author's responsibility is used to illustrate our approach to generating design validation via the VDP.
Abstract:
This document provides a review of international and national practices in investment decision support tools in road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in road asset management, and the complexity of their applications differs significantly across international practices. There is continuing discussion amongst practitioners and researchers as to which is the more appropriate for supporting decision-making; it is suggested that the two approaches be regarded as complementary rather than competitive. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning, while Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective, and an extension of the approach that includes social and environmental externalities is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues deserve attention in these applications. First, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating best practices; at different decision-making levels the detail in which the externalities are considered should differ, and it is intended to develop a generic framework to coordinate the range of existing practices. A standard framework would also help reduce the double counting that appears in some current practices. Caution is also needed with the methods used to determine the value of social and environmental externalities. A number of methods, such as market price, resource costs and willingness to pay, were found in the review; the use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practice owing to the lack of information and credible models; it may be appropriate to consider these externalities in qualitative form in a Multiple Criteria Analysis. Consensus has been reached internationally on considering noise and air pollution, although Australian practices have generally omitted these externalities. Equity is an important consideration in road asset management, whether between regions or between social groups defined by income, age, gender or disability. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this issue. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and the treatment of the various externalities; the result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practice, some favour methods that are able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process.
Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses; however, the processes of assessing weights and scores have been criticised as highly arbitrary and subjective, so it is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. An interactive approach has the advantage of helping decision-makers elaborate their preferences, but the computational burden may cause decision-makers to lose interest during the solution process of a large-scale problem, say a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities, and distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 by a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1; it is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses; the situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practice. This report suggests establishing a common analytic framework to deal with these issues.
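To make the weighting and scaling pitfalls concrete, here is a minimal weighted-sum Multiple Criteria Analysis sketch with invented projects, criteria and weights; the min-max normalisation step is exactly the kind of linear rescaling where the decibel-style distortion noted above can creep in:

```python
# Minimal weighted-sum MCA sketch; all data are invented for illustration.
import numpy as np

criteria = ["BCR", "noise", "air_quality", "equity"]
weights = np.array([0.4, 0.2, 0.2, 0.2])   # criterion weights, sum to 1

# Raw performance of three candidate projects (rows) on each criterion.
raw = np.array([
    [2.1, -3.0, 1.0, 0.5],
    [1.6, -1.0, 2.5, 1.5],
    [2.8, -4.0, 0.5, 0.0],
])

# Min-max normalise each criterion to [0, 1]; a linear rescaling like this
# assumes equal increments carry equal value, which noise scales violate.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
scores = norm @ weights            # weighted-sum aggregate per project
ranking = np.argsort(-scores)      # best project first
print(scores, ranking)
```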
Abstract:
This document provides the findings of an international review of investment decision-making practices in road asset management. Efforts were concentrated on identifying the strategic objectives of agencies in road asset management, establishing and understanding the criteria different organisations adopt, and ascertaining the exact methodologies used by different countries and international organisations. Road assets are powerful drivers of economic development and social equity, and they also have significant impacts on the natural and man-made environment. The traditional definition of asset management is "a systematic process of maintaining, upgrading and operating physical assets cost effectively. It combines engineering principles with sound business practices and economic theory, and it provides tools to facilitate a more organised, logical approach to decision-making" (US Dept. of Transportation, 1999). In recent years the concept has been broadened to cover the complexity of decision-making based on a wider variety of policy considerations, as well as social and environmental issues, rather than only what is covered by Benefit-Cost Analysis and purely technical considerations. Current international practices are summarised in Table 2. It is evident that engineering-economic analysis methods are well advanced in supporting decision-making: a range of available tools supports the prediction of road asset performance and the associated costs/benefits in a technical context. The need to consider the triple plus one bottom line of social, environmental and economic as well as political factors in decision-making is well understood by road agencies around the world; the techniques used to incorporate these, however, are limited. Most countries adopt a scoring method, a goal achievement matrix or information collected from surveys, and the greater uncertainty associated with these non-quantitative factors has generally not been taken into consideration. There is a gap between the capacities of decision-making support systems and the requirement on decision-makers to make more rational and transparent decisions. The challenges faced in developing an integrated decision-making framework are both procedural and conceptual: in operational terms, the framework should be easy to understand and employ; in philosophical terms, it should be able to deal with challenging issues such as uncertainty, time frames, network effects and model changes, while integrating cost and non-cost values into the evaluation. The choice of evaluation techniques depends on the features of the problem at hand, on the aims of the analysis, and on the underlying information base. At different management levels, the complexity of considering social, environmental, economic and political factors in decision-making differs: at the higher, strategic planning level, more non-cost factors are involved. The complexity also varies with the scope of the investment proposals. Road agencies traditionally place less emphasis on the evaluation of maintenance works; in some cases social equity, safety and environmental issues have been used in maintenance project selection, but there is no common basis for these applications.
Abstract:
Risks and uncertainties are inevitable in engineering projects and infrastructure investments. Decisions about investment in infrastructure, such as for maintenance, rehabilitation and construction works, can pose risks and may generate significant impacts on social, cultural, environmental and other related issues. This report presents the results of a literature review of current practice in identifying, quantifying and managing risks, and in predicting impacts, as part of the planning and assessment process for infrastructure investment proposals. In assessing proposals for investment in infrastructure, it is necessary to consider the social, cultural and environmental risks and impacts on the overall community, as well as the financial risks to the investor. The report defines and explains the concepts of risk and uncertainty, and describes the three main methodological approaches to the analysis of risk and uncertainty in investment planning for infrastructure, viz. examining a range of scenarios or options, sensitivity analysis, and a statistical probability approach, listed here in order of increasing merit and complexity. Forecasts of the costs, benefits and community impacts of infrastructure are recognised as central to developing and assessing investment proposals, and increasingly complex modelling techniques are being used for investment evaluation. The literature review identified forecasting errors as the major cause of risk. The report contains a summary of the broad nature of the decision-making tools used by governments and other organisations in Australia, New Zealand, Europe and North America, and shows their overall approach to risk assessment in assessing public infrastructure proposals. While there are established techniques to quantify financial and economic risks, quantification is far less developed for political, social and environmental risks and impacts. For risks that cannot be readily quantified, assessment techniques commonly include classification or rating systems for likelihood and consequence; the report outlines the systems used by the Australian Defence Organisation and in the Australian Standard on risk management. After each risk is identified and quantified or rated, consideration can be given to reducing the risk and to managing any remaining risk as part of the scope of the project. The literature review identified the use of risk mapping techniques by a North American chemical company and by the Australian Defence Organisation. This literature review has enabled a risk assessment strategy to be developed, and will underpin an examination of the feasibility of developing a risk assessment capability using a probability approach.
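As an illustration of the statistical probability approach (the most meritorious of the three listed), a Monte Carlo sketch of cost and benefit risk, with all distributions and parameters invented, might look like:

```python
# Monte Carlo sketch of the statistical probability approach to cost risk.
# The lognormal cost spread and normal benefit forecast are assumptions
# chosen purely for illustration, not values from the report.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

base_cost = 50e6                                    # $, point estimate
cost = base_cost * rng.lognormal(mean=0.0, sigma=0.15, size=N)
benefit = rng.normal(loc=80e6, scale=20e6, size=N)  # forecast benefits, $

npv = benefit - cost
print(f"P(NPV < 0)  = {np.mean(npv < 0):.1%}")
print(f"P10/P50/P90 = {np.percentile(npv, [10, 50, 90]).round(-5)}")
```

Unlike a single-scenario or sensitivity analysis, this yields a full distribution of outcomes, from which the probability of a negative result can be read directly.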
Abstract:
This technical report describes the approach and algorithm used to detect marine mammals in aerial imagery taken from manned and unmanned platforms. The aim is to automate the process of counting populations of dugongs and other mammals. We have developed an algorithm that automatically presents a user with a number of candidate detections of these mammals. We tested the algorithm on two distinct datasets taken from different altitudes, and present analysis and discussion of the complexity of the input datasets and the detection performance.
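The report's actual detector is not specified in this abstract; as one plausible candidate-generation step of the kind such a system might use (thresholding plus contour filtering with OpenCV; the file name, contrast assumption and size gates below are invented), consider:

```python
# Hypothetical candidate-generation sketch, not the report's algorithm.
import cv2

img = cv2.imread("aerial_frame.jpg")            # hypothetical input frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9, 9), 0)

# Assumes the animals appear darker than the surrounding water.
_, mask = cv2.threshold(blur, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for c in contours:
    area = cv2.contourArea(c)
    if 50 < area < 5000:                        # size gate tuned to altitude
        candidates.append(cv2.boundingRect(c))  # (x, y, w, h) per candidate
print(f"{len(candidates)} candidate regions for operator review")
```

The final decision is left to a human operator, matching the abstract's description of presenting candidates to a user rather than making fully automatic counts.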
Abstract:
Digital forensics relates to the investigation of a crime or other suspect behaviour using digital evidence. Previous work has dealt with the forensic reconstruction of computer-based activity on single hosts, but with the additional complexity involved in a distributed environment, a Web services-centric approach is required. A framework for this type of forensic examination needs to allow for the reconstruction of transactions spanning multiple hosts, platforms and applications. A tool implementing such an approach could be used by an investigator to identify scenarios of Web services being misused, exploited, or otherwise compromised; this information could then be used to redesign Web services in order to mitigate the identified risks. This paper explores the requirements of a framework for performing effective forensic examinations in a Web services environment. Such a framework will be necessary in order to develop forensic tools and techniques for use in service-oriented architectures.
Abstract:
Policy instruments of education, regulation, fines and inspection have all been utilised by Australian jurisdictions as they attempt to improve the poor occupational health and safety (OH&S) performance of the construction industry. However, such policy frameworks have been largely uncoordinated across Australia, resulting in differing policy systems with differing requirements and compliance regimes. This complexity, particularly for construction firms operating across jurisdictional borders, led to various attempts to improve the consistency of OH&S regulation across Australia, four of which are reviewed in this report.
1. The Occupational Health and Safety Act 1991 (Commonwealth), which enabled certain organisations to opt out of state-based regulatory regimes.
2. The development of national standards, codes of practice and guidance documents by the National Occupational Health and Safety Commission (NOHSC). The intent was that the OH&S requirements, principles and practices contained in these documents would be adopted by state and territory governments into their legislation and policy, thereby promoting regulatory consistency across Australia.
3. The attachment of conditions to special purpose payments from the Commonwealth to the States, in the form of OH&S accreditation with the Office of the Federal Safety Commissioner.
4. The development of national voluntary codes of OH&S practice for the construction industry.
It is interesting to note that the tempo of change has increased significantly since 2003, with the release of the findings of the Cole Royal Commission. This paper examines and evaluates each of these attempts to promote consistency across Australia. It concludes that, while there is a high level of information sharing between jurisdictions, particularly of the NOHSC standards, a fragmented OH&S policy framework remains in place across Australia. The utility of emergent industry initiatives, such as voluntary codes and guidelines for safer construction practices, in enhancing consistency is discussed.