893 results for Stand-alone
Abstract:
The health system is one sector dealing with a deluge of complex data. Many healthcare organisations struggle to use these volumes of health data effectively and efficiently, and many still operate stand-alone systems that are not integrated for information management and decision-making. There is therefore a need for an effective system to capture, collate and distribute health data, and implementing the data warehouse concept in healthcare is potentially one solution for integrating it. Data warehousing has been used to support business intelligence and decision-making in many other sectors, such as engineering, defence and retail. The research problem addressed is: "How can data warehousing assist the decision-making process in healthcare?" To address this problem, the investigation was narrowed to focus on a cardiac surgery unit, using the cardiac surgery unit at The Prince Charles Hospital (TPCH) as the case study. The cardiac surgery unit at TPCH uses a stand-alone database of patient clinical data, which supports clinical audit, service management and research functions. However, interaction between the cardiac surgery unit's information system and other units is generally minimal, with only limited and basic two-way exchange with the other clinical and administrative databases at TPCH that support decision-making processes. The aims of this research are to investigate the decision-making issues healthcare professionals face with the current information systems, and how decision-making might be improved within this healthcare setting by implementing an aligned data warehouse model or models. As part of the research, a suitable data warehouse prototype was proposed and developed, based on the cardiac surgery unit's needs and integrating the Intensive Care Unit database, the Clinical Costing unit database (Transition II) and the Quality and Safety unit database [electronic discharge summary (e-DS)], with the goal of improving current decision-making processes. The main objective is to improve access to integrated clinical and financial data, potentially providing better information for decision-making. From the questionnaire and by referring to the literature, the results indicate a centralised data warehouse model for the cardiac surgery unit at this stage. A centralised data warehouse model addresses current needs and can also be upgraded to an enterprise-wide or federated data warehouse model, as discussed in many of the consulted publications. The data warehouse prototype was developed using SAS Enterprise Data Integration Studio 4.2, and the data were analysed using SAS Enterprise Edition 4.3. In the final stage, the prototype was evaluated by collecting feedback from end users, using output created from the prototype as examples of the data desired and possible in a data warehouse environment. According to this feedback, implementation of a data warehouse was seen as a useful tool to inform management options, provide a more complete representation of the factors related to a decision scenario and potentially reduce information product development time. However, many constraints applied to this research.
These included technical issues such as data incompatibilities and the integration of the cardiac surgery and e-DS database servers, as well as Queensland Health information restrictions (information-related policies, patient data confidentiality and ethics requirements), limited availability of support from IT technical staff, and time restrictions. These factors influenced the warehouse model development process, necessitating an incremental approach, and they highlight the many practical barriers to data warehousing and integration at the clinical service level. Limitations included the use of a small convenience sample of survey respondents and a single-site case study design. As mentioned previously, the proposed data warehouse is a prototype and was developed using only four database repositories. Despite this constraint, the research demonstrates that implementing a data warehouse at the service level supports decision-making and can reduce data quality issues related to access and availability, providing many benefits. Output reports produced from the prototype demonstrated its usefulness for improving decision-making in the management of clinical services, and for quality and safety monitoring in support of better clinical care. In the future, the selected centralised model can be upgraded to an enterprise-wide architecture by integrating additional hospital units' databases.
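To make the integration concept above concrete, here is a minimal sketch in Python with an in-memory SQLite database; it is not the SAS Enterprise Data Integration Studio implementation used in the study, and every table name, column and value is invented for illustration. It simply joins hypothetical extracts of the cardiac surgery, ICU, clinical costing and e-DS repositories on a shared patient identifier into one queryable view, which is the essence of the centralised warehouse model.

```python
# Illustrative only: a hypothetical, simplified centralised-warehouse join.
# The real prototype was built with SAS tools; names and values here are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical source extracts from the four unit databases.
cur.executescript("""
CREATE TABLE cardiac_surgery  (patient_id TEXT, procedure_name TEXT, surgery_date TEXT);
CREATE TABLE icu              (patient_id TEXT, icu_hours REAL);
CREATE TABLE clinical_costing (patient_id TEXT, episode_cost REAL);
CREATE TABLE e_ds             (patient_id TEXT, discharge_summary_sent INTEGER);

INSERT INTO cardiac_surgery  VALUES ('P001', 'CABG', '2011-03-14');
INSERT INTO icu              VALUES ('P001', 36.5);
INSERT INTO clinical_costing VALUES ('P001', 41250.0);
INSERT INTO e_ds             VALUES ('P001', 1);
""")

# A single integrated view joining clinical and financial data on patient_id,
# so one query can serve audit, costing and quality/safety questions together.
cur.execute("""
CREATE VIEW warehouse_fact AS
SELECT cs.patient_id, cs.procedure_name, cs.surgery_date,
       icu.icu_hours, cc.episode_cost, eds.discharge_summary_sent
FROM cardiac_surgery cs
LEFT JOIN icu                  ON icu.patient_id = cs.patient_id
LEFT JOIN clinical_costing cc  ON cc.patient_id  = cs.patient_id
LEFT JOIN e_ds             eds ON eds.patient_id = cs.patient_id
""")

for row in cur.execute("SELECT * FROM warehouse_fact"):
    print(row)
```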
Abstract:
Virtual prototyping is emerging as a new technology to replace existing physical prototypes for product evaluation, which are costly and time-consuming to manufacture. Virtualization technology allows engineers and ergonomists to perform virtual builds and different ergonomic analyses on a product. Digital Human Modelling (DHM) software packages such as Siemens Jack often integrate with CAD systems to provide a virtual environment for investigating operator and product compatibility. Although the integration between DHM and CAD systems allows ergonomic analysis of anthropometric design, human musculoskeletal multi-body modelling software packages such as the AnyBody Modelling System (AMS) are required to support physiologic design. They provide muscular force analysis, estimate human musculoskeletal strain and help address human comfort assessment. However, the independent characteristics of the modelling systems Jack and AMS constrain engineers and ergonomists from conducting a complete ergonomic analysis. AMS is a stand-alone programming system with no capability to integrate into CAD environments, while Jack provides a CAD-integrated human-in-the-loop capability but does not consider musculoskeletal activity. Consequently, engineers and ergonomists need to perform many redundant tasks during product and process design. Moreover, the existing biomechanical model in AMS uses a simplified estimation of body proportions, based on a segment mass ratio derived scaling approach, which is insufficient to represent user populations in an anthropometrically correct way in AMS. In addition, sub-models are derived from different sources of morphologic data and are therefore anthropometrically inconsistent. Therefore, an interface between the biomechanical AMS and the virtual human model Jack was developed to integrate a musculoskeletal simulation with Jack posture modelling. This interface provides direct data exchange between the two man-models, based on a consistent data structure and a common body model. The study assesses the kinematic and biomechanical model characteristics of Jack and AMS, and defines an appropriate biomechanical model. The information content for interfacing the two systems is defined and a protocol is identified. The interface program is developed and implemented through Tcl and Jack-script (Python), and interacts with the AMS console application to operate AMS procedures.
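The data-exchange idea behind such an interface can be sketched as follows. This is only a hypothetical illustration: the interface described above is implemented in Tcl and Jack-script (Python) and drives the AMS console application directly, whereas the export format, joint names and parameter mapping below are invented.

```python
# Hypothetical sketch of posture data exchange between a DHM tool and a
# musculoskeletal model. File format, joint names and mapping are invented.
import csv
import io

# Hypothetical posture export (joint name, angle in degrees) from the DHM side.
jack_export = io.StringIO(
    "joint,angle_deg\n"
    "right_shoulder_flexion,45.0\n"
    "right_elbow_flexion,90.0\n"
    "trunk_flexion,10.0\n"
)

# Hypothetical mapping from DHM joint names to musculoskeletal model parameters.
joint_map = {
    "right_shoulder_flexion": "Posture.Right.ShoulderFlexion",
    "right_elbow_flexion": "Posture.Right.ElbowFlexion",
    "trunk_flexion": "Posture.TrunkFlexion",
}

lines = []
for row in csv.DictReader(jack_export):
    target = joint_map.get(row["joint"])
    if target is not None:
        lines.append(f'{target} = {float(row["angle_deg"]):.1f};')

# The resulting text could be handed to the simulation side as an include file.
print("\n".join(lines))
```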
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code. The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
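As a hedged illustration of the style of design-level measurement described, and not the thesis's actual metric definitions, the sketch below scores a design by the fraction of high-security ("classified") attributes that are not kept private, so that two functionally equivalent designs can be compared directly.

```python
# Invented, simplified security metric for illustration only: the proportion of
# classified (high-security) attributes that are not private (lower is better).
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    classified: bool   # carries high-security data
    private: bool      # encapsulated within its class

def classified_attribute_exposure(attributes):
    """Fraction of classified attributes readable from outside their class."""
    classified = [a for a in attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if not a.private]
    return len(exposed) / len(classified)

design_a = [Attribute("pin", True, True), Attribute("balance", True, True),
            Attribute("label", False, False)]
design_b = [Attribute("pin", True, False), Attribute("balance", True, True),
            Attribute("label", False, False)]

# Functionally equivalent designs can now be ranked by relative security.
print(classified_attribute_exposure(design_a))  # 0.0
print(classified_attribute_exposure(design_b))  # 0.5
```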
Abstract:
It is nearly 10 years since the introduction of s 299(1)(f) of the Corporations Act, which requires the disclosure of information regarding a company's environmental performance within its annual report. This provision has generated considerable debate in the years since its introduction, fundamentally between proponents of either a voluntary or a mandatory environmental reporting framework. This study examines the adequacy of the current regulatory framework. The environmental reporting practices of 24 listed companies in the resources industries are assessed relative to a standard set by the Global Reporting Initiative (GRI) Sustainability Reporting Guidelines. These Guidelines are argued to represent "international best practice" in environmental reporting, and a "scorecard" approach is used to score the quality of disclosure against this voluntary benchmark. Larger companies in the sample tend to report environmental information over and above the level required by legislation. Some, but not all, companies present a stand-alone environmental/sustainability report. However, smaller companies provide minimal information in compliance with s 299(1)(f). The findings indicate that "international best practice" environmental reporting is unlikely to be achieved by Australian companies under the current regulatory framework. In the current regulatory environment that scrutinises s 299(1)(f), this article provides some preliminary evidence of the quality of disclosures generated in the Australian market.
Abstract:
This work focuses on the development of a stand-alone gas nanosensor node, powered by solar energy, to track the concentration of polluting gases such as NO2, N2O and NH3. Gas sensor networks have been widely developed over recent years, but the rise of nanotechnology is allowing the creation of a new range of gas sensors [1] with higher performance, smaller size and an inexpensive manufacturing process. This work has created a gas nanosensor node prototype to evaluate the future field performance of this new generation of sensors. The sensor node has four main parts: (i) solar cells; (ii) control electronics; (iii) gas sensor and sensor board interface [2-4]; and (iv) data transmission. The station is remotely monitored through a wired (Ethernet cable) or wireless (radio transmitter) connection [5, 6] in order to evaluate, in real time, the performance of the solar cells and sensor node under different weather conditions. The energy source of the node is a module of polycrystalline silicon solar cells with 410 cm2 of active surface. The prototype is equipped with a Resistance-To-Period circuit [2-4] to measure the wide range of resistances (kΩ to GΩ) from the sensor in a simple and accurate way. The system shows high performance in (i) managing the energy from the solar panel, (ii) powering the system load and (iii) recharging the battery. The results show that the prototype is suitable for working with any kind of resistive gas nanosensor and can provide useful data for future nanosensor networks.
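As an illustration of the resistance-to-period principle, the following sketch assumes the oscillation period grows linearly with the sensor resistance (T = k·R·C); the constant, the timing capacitance and the example readings are invented rather than taken from the cited circuit [2-4].

```python
# Minimal sketch assuming a linear resistance-to-period relation T = K * R * C.
# K, C and the example periods are invented values for illustration only.
K = 1.1            # dimensionless oscillator constant (assumed)
C = 100e-12        # timing capacitance in farads (assumed)

def resistance_from_period(period_s: float) -> float:
    """Infer sensor resistance (ohms) from a measured oscillation period (seconds)."""
    return period_s / (K * C)

# Example periods spanning the wide dynamic range mentioned (kilo-ohms to giga-ohms).
for period in (1.1e-6, 1.1e-3, 0.11):
    print(f"period = {period:9.3e} s  ->  R = {resistance_from_period(period):9.3e} ohm")
```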
Abstract:
Australia requires decisive action on climate change and issues of sustainability. The Urban Informatics Research Lab has been funded by the Queensland State Government to conduct a three-year study (2009–2011) exploring ways to support Queensland residents in making more sustainable consumer and lifestyle choices. We conduct user-centred design research that informs the development of real-time, mobile, locational, networked information interfaces, feedback mechanisms, and persuasive and motivational approaches that in turn assist in-situ decision-making and environmental awareness in everyday settings. The study aims to deliver usable and useful prototypes offering individual and collective visualisations of ecological impact and opportunities for engagement and collaboration in order to foster a participatory and sustainable culture of life in Australia. Raising people’s awareness with environmental data and educational information does not necessarily trigger sufficient motivation to change their habits towards a more environmentally friendly and sustainable lifestyle. Our research seeks to develop a better understanding of how to go beyond merely informing and into motivating and encouraging action and change. Drawing on participatory culture, ubiquitous computing and real-time information, the study delivers research that leads to viable new design approaches and information interfaces which will strengthen Australia’s position to meet the targets of the Clean Energy Future strategy, and contribute to the sustainability of a low-carbon future in Australia. As part of this program of research, the Urban Informatics Research Lab has been invited to partner with GV Community Energy Pty Ltd on a project funded by the Victorian Government Sustainability Fund. This feasibility report specifically looks at the challenges and opportunities of energy monitoring in households in Victoria that include a PV solar installation. The report is structured into two parts. In Part 1, we first review a range of energy monitoring solutions, both stand-alone and internet-enabled; this section primarily focuses on their technical capabilities. However, in order to understand this information and make an informed decision, it is crucial to understand the basic principles and limitations of energy monitoring, as well as the opportunities and challenges of a networked approach towards energy monitoring, which are discussed in Section 2.
Abstract:
It is certain that there will be changes in environmental conditions across the globe as a result of climate change. Such changes will require the building of biological, human and infrastructure resilience. In some instances the building of such resilience will be insufficient to deal with extreme changes in environmental conditions, and legal frameworks will be required to provide recognition and support for people dislocated because of environmental change. Such dislocation may occur internally within the country of origin or externally into another State’s territory. International and national legal frameworks do not currently recognise or assist people displaced as a result of environmental factors, including displacement occurring as a result of climate change. Legal frameworks developed to deal with this issue will need to consider the legal rights of those people displaced and the legal responsibilities of those countries required to respond to such displacement. The objective of this article is to identify the most suitable international institution to host a program addressing climate displacement. There are a number of areas of international law that are relevant to climate displacement, including refugee law, human rights law and international environmental law. These regimes, however, were not designed to protect people relocating as a result of environmental change. As such, while they may be of indirect relevance to climate displacement, they currently do nothing to directly address this complex issue. In order to determine the most appropriate institution to address and regulate climate displacement, it is imperative to consider issues of governance. This paper seeks to examine this issue and determine whether it is preferable to place climate displacement programs into existing international legal frameworks or whether it is necessary to regulate this area in an entirely new institution specifically designed to deal with the complex and cross-cutting issues surrounding the topic. Commentators in this area have proposed three different regulatory models for addressing climate displacement. These models include: (a) expand the definition of refugee under the Refugee Convention to encompass persons displaced by climate change; (b) implement a new stand-alone Climate Displacement Convention; and (c) implement a Climate Displacement Protocol to the UNFCCC. This article will examine each of these proposed models against a number of criteria to determine the model that is most likely to address the needs and requirements of people displaced by climate change. It will also identify the model that is likely to be most politically acceptable and realistic for those countries likely to attract responsibilities through its implementation. In order to assess whether the rights and needs of the people to be displaced are to be met, theories of procedural, distributive and remedial justice will be used to consider the equity of the proposed schemes. In order to consider the most politically palatable and realistic scheme, reference will be made to previous state practice and compliance with existing obligations in the area. It is suggested that the criteria identified by this article should underpin any future climate displacement instrument.
Abstract:
Securing the IT infrastructures of our modern lives is a challenging task because of their increasing complexity, scale and agile nature. Monolithic approaches such as using stand-alone firewalls and IDS devices to protect the perimeter cannot cope with complex malware and multi-step attacks. Collaborative security emerges as a promising approach, but research results in collaborative security are not yet mature and require continuous evaluation and testing. In this work, we present CIDE, a Collaborative Intrusion Detection Extension for the network security simulation platform NeSSi2. Built-in functionalities include dynamic group formation based on node preferences, group-internal communication, group management and an approach for handling the infection process for malware-based attacks. The CIDE simulation environment provides functionalities for easy implementation of collaborating nodes in large-scale setups. We evaluate the group communication mechanism, and we provide a case study evaluating our collaborative security evaluation platform in a signature exchange scenario.
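A hedged sketch of the two built-in functionalities named above, preference-based group formation and group-internal signature exchange, follows; the node attributes, grouping rule and message format are invented and do not reproduce the CIDE/NeSSi2 implementation.

```python
# Invented illustration of preference-based grouping and signature exchange.
from collections import defaultdict

nodes = [
    {"id": "n1", "preference": "web-servers"},
    {"id": "n2", "preference": "web-servers"},
    {"id": "n3", "preference": "databases"},
]

# Dynamic group formation: nodes sharing a preference join the same detection group.
groups = defaultdict(list)
for node in nodes:
    groups[node["preference"]].append(node["id"])

def share_signature(group_members, signature, sender):
    """Group-internal communication: forward a detected signature to all peers."""
    return [(sender, peer, signature) for peer in group_members if peer != sender]

# n1 detects a malware signature and shares it within its group.
for message in share_signature(groups["web-servers"], "sig:worm-x-propagation", "n1"):
    print(message)
```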
Abstract:
Despite the ubiquitous nature of the discourse on human rights, there is currently little research on the emergence of disclosure by multinational corporations of their human rights obligations, or on the regulatory dynamic that may lie behind this trend. In an attempt to begin to explore the extent to which, if any, the language of human rights has entered the discourse of corporate accountability, this paper investigates the adoption of the International Labour Organisation's (ILO) human rights standards by major multinational garment retail companies that source products from developing countries, as disclosed through their reporting media. The paper has three objectives. Firstly, to empirically explore the extent to which a group of multinational garment retailers invoke the language of human rights when disclosing their corporate responsibilities. The paper reviews corporate reporting media, including social responsibility codes of conduct, annual reports and stand-alone social responsibility reports, released by 18 major global clothing and retail companies during the period from 1990 to 2007. We find that the number of companies adopting and disclosing on the ILO's workplace human rights standards has significantly increased since 1998 – the year in which the ILO's standards were endorsed and accepted by the global community (ILO, 1998). Secondly, drawing on a combination of Responsive Regulation theory and neo-institutional theory, we tentatively seek to understand the regulatory space that may have influenced these large corporations to adopt the language of human rights obligations. In particular, we study the role that international governmental organisations (IGOs) such as the ILO may have played in these disclosures. Finally, we provide some critical reflections on the power and potential within the corporate adoption of the language of human rights.
Abstract:
Purpose – The purpose of this paper is to examine the environmental disclosure initiatives of Niko Resources Ltd – a Canada-based multinational oil and gas company – following two major environmental blowouts at a gas field in Bangladesh in 2005. As part of the examination, the authors particularly focus on whether Niko's disclosure strategy was associated with public concern pertaining to the blowouts. Design/methodology/approach – The authors reviewed news articles about Niko's environmental incidents in Bangladesh and Niko's communication media, including annual reports, press releases and a stand-alone social responsibility report, over the period 2004-2007, to understand whether news media attention, as a proxy for public concern, had an impact on Niko's disclosure practices in relation to the affected local community in Bangladesh. Findings – The findings show that Niko did not provide any non-financial environmental information within its annual reports and press releases as a part of its responsibility to the local community affected by the blowouts, but it did produce a stand-alone report to address the issue. However, financial environmental disclosures, such as the environmental contingent liability disclosure, were adequately provided through annual reports to meet the regulatory requirements concerning environmental prosecutions. The findings also suggest that Niko's non-financial disclosure within a stand-alone report was associated with public pressure as measured by negative media coverage of the Niko blowouts. Research limitations/implications – This paper concludes that the motive for Niko's non-financial environmental disclosure, via a stand-alone report, reflected survival considerations: the company's reaction did not suggest any real attempt to hold broader accountability for its activities in a developing country.
Abstract:
Management of groundwater systems requires realistic conceptual hydrogeological models as a framework for numerical simulation modelling, but also for system understanding and for communicating this understanding to stakeholders and the broader community. To help overcome these challenges we developed GVS (Groundwater Visualisation System), a stand-alone desktop software package that uses interactive 3D visualisation and animation techniques. The goal was a user-friendly groundwater management tool that could support a range of existing real-world and pre-processed data, both surface and subsurface, including geology and various types of temporal hydrological information. GVS allows these data to be integrated into a single conceptual hydrogeological model. In addition, 3D geological models produced externally using other software packages can readily be imported into GVS models, as can outputs of simulations (e.g. piezometric surfaces) produced by software such as MODFLOW or FEFLOW. Boreholes can be integrated, showing any down-hole data and properties, including screen information, intersected geology, water level data and water chemistry. Animation is used to display spatial and temporal changes, with time-series data such as rainfall, standing water levels and electrical conductivity illustrating dynamic processes. Time and space variations can be presented using a range of contouring and colour mapping techniques, in addition to interactive plots of time-series parameters. Other types of data, for example demographics and cultural information, can also be readily incorporated. The GVS software can execute on a standard Windows or Linux-based PC with a minimum of 2 GB RAM, and the model output is easy and inexpensive to distribute, by download or via USB/DVD/CD. Example models are described here for three groundwater systems in Queensland, northeastern Australia: two unconfined alluvial groundwater systems with intensive irrigation, the Lockyer Valley and the upper Condamine Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes and visualisation of simulated piezometric surfaces. Both alluvial system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research for coal seam gas groundwater management and for community information and consultation. The “virtual” groundwater systems in these 3D GVS models can be interactively interrogated using standard functions, plus production of 2D cross-sections, data selection from the 3D scene, and back-end database and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support to management and decision-making, and serves as a tool for interpreting groundwater system hydrological processes. A highly effective visualisation output is the production of short videos (e.g. 2–5 min) based on sequences of camera ‘fly-throughs’ and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems. To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted online are included in the references.
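To illustrate the kind of integrated borehole record GVS brings together, the sketch below pairs hypothetical down-hole geology intervals with a time series of standing water levels and pulls out one time slice of the sort that could feed a contoured surface; all field names and values are invented.

```python
# Hypothetical, simplified borehole record combining down-hole and time-series data.
from bisect import bisect_right

borehole = {
    "id": "LV-042",
    "easting": 476250.0, "northing": 6945310.0,                      # metres (assumed grid)
    "geology": [(0.0, 4.5, "clay"), (4.5, 18.0, "alluvial sand")],   # depth intervals (m)
    "water_level": [("2007-01-01", 12.4), ("2008-01-01", 13.1),      # (date, depth to water, m)
                    ("2009-01-01", 14.0)],
}

def level_on_or_before(record, date):
    """Return the most recent standing water level measured on or before `date`."""
    dates = [d for d, _ in record["water_level"]]
    i = bisect_right(dates, date)
    return record["water_level"][i - 1][1] if i else None

# One time slice across many such boreholes could feed a contoured water-table surface.
print(borehole["id"], level_on_or_before(borehole, "2008-06-30"))  # -> LV-042 13.1
```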
Abstract:
Vehicular accidents are one of the deadliest safety hazards and accordingly an immense concern of individuals and governments. Although a wide range of active autonomous safety systems, such as advanced driving assistance and lane-keeping support, has been introduced to facilitate a safer driving experience, these stand-alone systems have limited capabilities in providing safety. Therefore, cooperative vehicular systems were proposed to fulfill more safety requirements. Most cooperative vehicle-to-vehicle safety applications require decimeter-level relative positioning accuracy with an update rate of at least 10 Hz. These requirements cannot be met via direct navigation or differential positioning techniques. This paper studies a cooperative vehicle platform that aims to facilitate real-time relative positioning (RRP) among adjacent vehicles. The developed system is capable of exchanging both GPS position solutions and raw observations using the RTCM-104 format over vehicular dedicated short range communication (DSRC) links. The real-time kinematic (RTK) positioning technique is integrated into the system to enable RRP to serve as an embedded real-time warning system. The 5.9 GHz DSRC technology is adopted as the communication channel among road-side units (RSUs) and on-board units (OBUs), both to distribute GPS correction data received from a nearby reference station via the Internet using cellular technologies, by means of RSUs, and to exchange the vehicles' real-time GPS raw observation data. Ultimately, each receiving vehicle calculates the relative positions of its neighbors to build an RRP map. A series of real-world data collection experiments was conducted to explore the synergies of the DSRC and positioning systems. The results demonstrate a significant enhancement in the precision and availability of relative positioning at mobile vehicles.
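As a hedged illustration of the end product only, the relative-position vector between two vehicles, the sketch below differences two reported positions using a simple flat-earth approximation; it does not implement RTK carrier-phase processing or the DSRC/RTCM-104 exchange, and the coordinates are invented.

```python
# Invented example: approximate east/north offset between two reported positions.
import math

R_EARTH = 6371000.0  # mean Earth radius in metres

def relative_enu(lat1, lon1, lat2, lon2):
    """Approximate east/north offset (metres) of vehicle 2 relative to vehicle 1."""
    lat0 = math.radians((lat1 + lat2) / 2.0)
    d_east = math.radians(lon2 - lon1) * math.cos(lat0) * R_EARTH
    d_north = math.radians(lat2 - lat1) * R_EARTH
    return d_east, d_north

# Ego vehicle and a neighbour broadcasting its position over DSRC (values invented).
ego = (-27.47710, 153.02810)
neighbour = (-27.47702, 153.02835)

east, north = relative_enu(*ego, *neighbour)
print(f"neighbour is {east:.1f} m east and {north:.1f} m north of the ego vehicle")
```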
Abstract:
The Australian legal profession, as well as the content and pedagogy of legal education across Australia, is steeped in tradition and conservatism. Indeed, the legal profession and our institutions of legal education are in a relationship of mutual influence, which leaves the way we teach law resistant to change. There has traditionally been pushback against the notion that dispute resolution should have a place amongst black letter law subjects in the legal curriculum. This paper argues that this position cannot be maintained in the modern legal climate. We challenge legal education orthodoxy and promote NADRAC's position that alternative dispute resolution should be a compulsory, stand-alone subject in the law degree. We put forward ten simple arguments as to why every law student should be exposed to a semester-long course of DR instruction.
Abstract:
The profession of law is deeply steeped in tradition and conservatism. The content and pedagogy employed in law faculties across Australia are similarly steeped in tradition and conservatism. Indeed, the practice of law and our institutions of legal education are in a relationship of mutual influence; a dénouement which preserves the best aspects of our common law legal system, but also leaves the way we educate, practise, and think about the role of law resistant to change. In this article, we lay down a challenge to legal education orthodoxy and a call to arms for legal academic progressivists. It is our simple argument that alternative dispute resolution should be a compulsory, stand-alone subject in the law degree. There has been traditional pushback against the notion that alternative dispute resolution should have a place amongst black letter law subjects in the legal curriculum. This position cannot be maintained in the modern legal climate. We put forward ten simple arguments as to why every law student should be exposed to a semester-long course of ADR instruction. With respect to relationships of mutual influence, whether legal education should assimilate the practice of law or shape the practice of law makes no difference here. Both views necessitate the inclusion of ADR as a compulsory subject in the law degree.
Abstract:
The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies of IND-CCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and IND-CCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of γ-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and IND-CCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
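To make the FO-style idea concrete, the toy sketch below derives the encryption randomness and the session key by hashing a random message and has decapsulation re-encrypt to verify the ciphertext. The underlying "PKE" is an insecure placeholder with identical public and secret keys, and the construction is not the paper's IB-KEM; it only illustrates the transform's control flow.

```python
# Toy illustration of a Fujisaki-Okamoto-style KEM. The placeholder "PKE" below
# offers no security at all (pk == sk, randomness exposed); it exists only to make
# the encapsulate / decapsulate / re-encryption-check control flow runnable.
import hashlib
import os

def H(label: bytes, data: bytes) -> bytes:
    return hashlib.sha256(label + data).digest()

# --- Placeholder PKE (NOT secure): pk and sk are the same symmetric key. ---
def pke_encrypt(key: bytes, m: bytes, r: bytes) -> bytes:
    pad = H(b"pad", key + r)
    return r + bytes(a ^ b for a, b in zip(m, pad))   # deterministic in (key, m, r)

def pke_decrypt(key: bytes, c: bytes) -> bytes:
    r, body = c[:32], c[32:]
    pad = H(b"pad", key + r)
    return bytes(a ^ b for a, b in zip(body, pad))

# --- FO-style KEM built on the placeholder. ---
def encapsulate(pk: bytes):
    m = os.urandom(32)                 # random "message"
    r = H(b"G", m)                     # de-randomisation: r derived from m
    c = pke_encrypt(pk, m, r)
    return c, H(b"K", m + c)           # ciphertext and session key

def decapsulate(sk: bytes, c: bytes):
    m = pke_decrypt(sk, c)
    r = H(b"G", m)
    if pke_encrypt(sk, m, r) != c:     # re-encryption check rejects malformed ciphertexts
        return None
    return H(b"K", m + c)

key = os.urandom(32)
ciphertext, k_sender = encapsulate(key)
assert decapsulate(key, ciphertext) == k_sender
print("shared session key established:", k_sender.hex()[:16], "...")
```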