48 results for ISM: clouds
at Queensland University of Technology - ePrints Archive
Abstract:
Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by the need for faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
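As a rough illustration of the idea (not the authors' implementation), the sketch below assumes an organized point cloud stored as an H x W x 3 array and a pinhole camera with intrinsics fx, fy, cx, cy; the candidate neighbours of a point are simply the entries in a fixed pixel window around its projection, so the lookup cost does not depend on the number of points.

```python
import numpy as np

def projected_neighbours(cloud, p, fx, fy, cx, cy, win=3):
    """Return candidate neighbours of 3D point p from its projected pixel window.

    cloud is an organized H x W x 3 array from a depth camera; fx, fy, cx, cy
    are assumed pinhole intrinsics (illustrative, not the paper's code).
    """
    x, y, z = p
    u = int(round(fx * x / z + cx))          # project onto the image plane
    v = int(round(fy * y / z + cy))
    h, w, _ = cloud.shape
    u0, u1 = max(u - win, 0), min(u + win + 1, w)
    v0, v1 = max(v - win, 0), min(v + win + 1, h)
    window = cloud[v0:v1, u0:u1].reshape(-1, 3)
    window = window[np.isfinite(window).all(axis=1)]   # drop invalid depth readings
    return window

# Points whose depth differs sharply from their window neighbours could be
# flagged as lying near a depth discontinuity to reinforce object boundaries.
```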
Abstract:
BreastScreen Queensland (BSQ) is a government-based health service that provides free breast cancer screening services to eligible women using digital mammography technology. In 2007, BSQ launched its first social marketing campaign, aimed at achieving a 30 per cent increase in women's programme participation by addressing the barriers to regular screening and by dispelling myths about breast cancer (Tornabene 2010). 'The Facts' mass media social marketing campaign used a credible spokesperson, Australian journalist Jana Wendt, to deliver the call to action 'Don't make excuses. Make an appointment'.
Abstract:
Reconstructing 3D motion data is highly under-constrained due to several common sources of data loss during measurement, such as projection, occlusion, or miscorrespondence. We present a statistical model of 3D motion data, based on the Kronecker structure of the spatiotemporal covariance of natural motion, as a prior on 3D motion. This prior is expressed as a matrix normal distribution, composed of separable and compact row and column covariances. We relate the marginals of the distribution to the shape, trajectory, and shape-trajectory models of prior art. When the marginal shape distribution is not available from training data, we show how placing a hierarchical prior over shapes results in a convex MAP solution in terms of the trace-norm. The matrix normal distribution, fit to a single sequence, outperforms state-of-the-art methods at reconstructing 3D motion data in the presence of significant data loss, while providing covariance estimates of the imputed points.
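The separable covariance structure can be illustrated with a small hedged sketch: a matrix normal distribution with vec(X) ~ N(vec(M), Sigma_col ⊗ Sigma_row) can be sampled as M + A Z B^T, where A and B are Cholesky factors of the row and column covariances. The kernels and sizes below are illustrative only, not the paper's fitted model.

```python
import numpy as np

def sample_matrix_normal(M, Sigma_row, Sigma_col, rng=None):
    """Draw one sample from a matrix normal with separable covariance."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.linalg.cholesky(Sigma_row)        # row covariance factor
    B = np.linalg.cholesky(Sigma_col)        # column covariance factor
    Z = rng.standard_normal(M.shape)         # i.i.d. standard normal matrix
    return M + A @ Z @ B.T

# Example: a 6 x 4 motion block with smooth row/column correlations
idx_r = np.arange(6)[:, None]
idx_c = np.arange(4)[:, None]
Sigma_row = np.exp(-0.5 * (idx_r - idx_r.T) ** 2)   # Gaussian-kernel covariance
Sigma_col = np.exp(-0.5 * (idx_c - idx_c.T) ** 2)
X = sample_matrix_normal(np.zeros((6, 4)), Sigma_row, Sigma_col)
print(X.shape)
```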
Abstract:
This paper reports on the early stages of a design experiment in educational assessment that challenges the dichotomous legacy evident in many assessment activities. Combining social networking technologies with the sociology of education, the paper proposes that assessment activities are best understood as a negotiable field of exchange. In this design experiment, students, peers and experts engage in explicit, "front-end" assessment (Wyatt-Smith, 2008) to translate holistic judgments into institutional, and potentially economic, capital without adhering to long lists of pre-set criteria. This approach invites participants to use social networking technologies to judge creative works using scatter graphs, keywords and tag clouds. In doing so, assessors will refine their evaluative expertise and negotiate the characteristics of creative works from which criteria will emerge (Sadler, 2008). The real-time advantages of web-based technologies will aggregate, externalise and democratise this transparent method of assessment for most, if not all, creative works that can be represented in a digital format.
Abstract:
Clearly the world is a different place to what it was 40 years ago, and much of that difference can be characterised as disturbances to the local on the basis of globalism, particularly due to changes in communication and information technology. Like it did to modernism before it, this societal change calls for, or more aptly calls to, designers to reformulate their practices to reflect this significant new paradigm. This is a rationale that has driven much avant-garde activity in the 20th century, and in this case, 'landscape urbanism' in the 21st. In the case of this discussion, it is important to recognise the avant-garde cycle at work in the development of the discipline, not only to contextualise its production, but so that its greatest values can be welcomed: despite the propaganda and arrogance, important revisions occurred to the canons after all the -isms. That said, I do find myself asking: do we need another -ism?
Abstract:
An examination of Information Security (IS) and Information Security Management (ISM) research in Saudi Arabia has shown the need for more rigorous studies focusing on the implementation and adoption processes involved with IS culture and practices. Overall, there is a lack of academic and professional literature about ISM, and more specifically IS culture, in Saudi Arabia. Therefore, the overall aim of this paper is to identify issues and factors that assist the implementation and adoption of IS culture and practices within the Saudi environment, and its goal is to identify the important conditions for creating an information security culture in Saudi Arabian organizations. We plan to use this framework to investigate whether a security culture has emerged in practice in Saudi Arabian organizations.
Abstract:
Professional practice guidelines for endoscope reprocessing recommend reprocessing endoscopes between each case and proper storage following reprocessing after the last case of the list. There is limited empirical evidence to support the efficacy of endoscope reprocessing prior to use in the first case of the day; however, internationally, many guidelines continue to recommend this practice. The aim of this study is to estimate a safe shelf life for flexible endoscopes in a high-turnover gastroenterology unit.
Materials and methods: In a prospective observational study, all flexible endoscopes in active service during the 3-week study period were microbiologically sampled prior to reprocessing before the first case of the day (n = 200). The main outcome variables were culture status, organism cultured, and shelf life.
Results: Among the total number of useable samples (n = 194), the overall contamination rate was 15.5%, with a pathogenic contamination rate of 0.5%. Mean time between the last case one day and reprocessing before the first case on the next day (that is, shelf life) was 37.62 h (SD 36.47). Median shelf life was 18.8 h (range 5.27-165.35 h). The most frequently identified organism was coagulase-negative Staphylococcus, an environmental nonpathogenic organism.
Conclusions: When processed according to established guidelines, flexible endoscopes remain free from pathogenic organisms between last-case and next-day first-case use. Significant reductions in the expenditure of time and resources on reprocessing endoscopes have the potential to reduce the restraints experienced by high-turnover endoscopy units and improve service delivery.
Abstract:
One of the major challenges facing a present-day game development company is the removal of bugs from such complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by the rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution of the objects present in a sequence of bug-free frames. This is done by converting the positions that the pixels take over time into the equivalent 3D points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft® XNA®. Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional-quality game engine.
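A minimal sketch of the comparison step, under assumed data layouts (N x 6 rows of [x, y, z, r, g, b]) and an illustrative colour weighting, might look as follows; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def visual_inconsistency(reference, candidate, colour_weight=0.5):
    """Mean nearest-neighbour distance from candidate samples to the reference database.

    reference, candidate: arrays of shape (N, 6) holding [x, y, z, r, g, b] rows
    (assumed layout); colour_weight balances spatial vs. colour distance.
    """
    ref = np.hstack([reference[:, :3], colour_weight * reference[:, 3:]])
    cand = np.hstack([candidate[:, :3], colour_weight * candidate[:, 3:]])
    tree = cKDTree(ref)                 # index the labelled bug-free database once
    dists, _ = tree.query(cand)         # nearest reference sample for each candidate point
    return dists.mean()                 # higher values suggest a rendering artifact
```

In this sketch, a candidate frame is back-projected into coloured 3D points first, and a large inconsistency score flags the frame for inspection.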
Abstract:
Purpose – Effective flow of data and communication at every stage of a construction project is essential for achieving the required coordination and collaboration between the project participants, leading to successful management of the projects. In the present scenario, where project participants are geographically separated, adoption of information communication technology (ICT) enables such effective communication. Thus, the purpose of this paper is to focus on ICT adoption for building project management.
Design/methodology/approach – It is difficult to quantitatively evaluate the benefits of ICT adoption in the multiple-enterprise scenario of building project management; it requires qualitative analysis based on the perceptions of construction professionals. The paper utilizes the interpretive structural modeling (ISM) technique to assess the importance of the perceived benefits and their driving power and dependence on other benefits.
Findings – The developed ISM model shows that all the categories of benefits, i.e. benefits related to projects, team management, technology, and organization, are inter-related and cannot be achieved in isolation. However, organization- and technology-related benefits have high driving power and are "strategic benefits" for the project team organizations. Thus, organizations need to pay more attention to strategically increasing these benefits from the application of ICT.
Originality/value – This analysis provides a road map for managers and project management organizations: if they are planning ICT adoption to achieve certain benefits, it shows which other driving benefits should be achieved beforehand and which dependent benefits would be achieved by default.
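To make the ISM terminology concrete, the following hedged sketch computes driving power (row sums) and dependence (column sums) from an assumed binary reachability matrix over illustrative benefit categories; the matrix values are placeholders, not the paper's data.

```python
import numpy as np

# Illustrative benefit categories and an assumed reachability matrix:
# R[i, j] = 1 means benefit i drives (helps achieve) benefit j.
benefits = ["project", "team", "technology", "organization"]
R = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]])

driving_power = R.sum(axis=1)   # how many benefits each one drives
dependence = R.sum(axis=0)      # how many benefits each one depends on
for name, drive, dep in zip(benefits, driving_power, dependence):
    print(f"{name:12s} driving power = {drive}, dependence = {dep}")
```

Benefits with high driving power and low dependence sit at the bottom of the ISM hierarchy and are the "strategic" ones to pursue first.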
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud where there may be some free resources available from private clouds but also some fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points of time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
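The random-key encoding can be illustrated with a small sketch (not the paper's exact encoding): each task carries one key that determines its scheduling priority and a second key that is scaled to select a resource. All names and sizes below are illustrative.

```python
import numpy as np

def decode(chromosome, n_tasks, n_resources):
    """Decode a random-key chromosome into a task order and a resource assignment."""
    order_keys = chromosome[:n_tasks]
    resource_keys = chromosome[n_tasks:]
    task_order = np.argsort(order_keys)                     # priority order of tasks
    assignment = (resource_keys * n_resources).astype(int)  # resource index per task
    return task_order, assignment

rng = np.random.default_rng(0)
chromosome = rng.random(2 * 5)      # 5 tasks: one ordering key and one resource key each
print(decode(chromosome, n_tasks=5, n_resources=3))
```

Because every key vector in [0, 1) decodes to a feasible order and assignment, standard crossover and mutation operators can be applied without repair steps.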
Abstract:
This work investigates the computer modelling of the photochemical formation of smog products such as ozone and aerosol, in a system containing toluene, NOx and water vapour. In particular, the problem of modelling this process in the Commonwealth Scientific and Industrial Research Organization (CSIRO) smog chambers, which utilize outdoor exposure, is addressed. The primary requirement for such modelling is a knowledge of the photolytic rate coefficients. Photolytic rate coefficients of species other than NO2 are often related to JNO2 (the rate coefficient for the photolysis of NO2) by a simple factor, but for outdoor chambers this method is prone to error as the diurnal profiles may not be similar in shape. Three methods for the calculation of diurnal JNO2 are investigated. The most suitable method for incorporation into a general model is found to be one which determines the photolytic rate coefficients for NO2, as well as several other species, from actinic flux, absorption cross section and quantum yields. A computer model was developed, based on this method, to calculate in-chamber photolysis rate coefficients for the CSIRO smog chambers, in which ex-chamber rate coefficients are adjusted by accounting for variation in light intensity by transmittance through the Teflon walls, albedo from the chamber floor and radiation attenuation due to clouds.
The photochemical formation of secondary aerosol is investigated in a series of toluene-NOx experiments, which were performed in the CSIRO smog chambers. Three stages of aerosol formation, in plots of total particulate volume versus time, are identified: a delay period in which no significant mass of aerosol is formed, a regime of rapid aerosol formation (regime 1) and a second regime of slowed aerosol formation (regime 2). Two models are presented which were developed from the experimental data. One model is empirically based on observations of discrete stages of aerosol formation and readily allows aerosol growth profiles to be calculated. The second model is based on an adaptation of published toluene photooxidation mechanisms and provides some chemical information about the oxidation products. Both models compare favorably against the experimental data.
The gross effects of precursor concentrations (toluene, NOx and H2O) and ambient conditions (temperature, photolysis rate) on the formation of secondary aerosol are also investigated, primarily using the mechanism model. An increase in [NOx]0 results in an increased delay time, rate of aerosol formation in regime 1 and volume of aerosol formed in regime 1. This is due to increased formation of dinitrocresol and furanone products. An increase in toluene results in a decrease in the delay time and an increase in the rate of aerosol formation in regime 1, due to enhanced reactivity from the toluene products, such as the radicals from the photolysis of benzaldehyde. Water vapour has very little effect on the formation of aerosol volume, except that rates are slightly increased due to more OH radicals from reaction with O(1D) from ozone photolysis. Increased temperature results in an increased volume of aerosol formed in regime 1 (increased dinitrocresol formation), while an increased photolysis rate results in an increased rate of aerosol formation in regime 1. Both the rate and volume of aerosol formed in regime 2 are increased by increased temperature or photolysis rate.
Both models indicate that the yield of secondary particulates from hydrocarbons (mass concentration of aerosol formed / mass concentration of hydrocarbon precursor) is proportional to the ratio [NOx]0/[hydrocarbon]0.
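As a toy illustration of the photolysis-rate method described above, the sketch below integrates actinic flux, absorption cross section and quantum yield over wavelength to obtain a rate coefficient J; all tabulated values are placeholders, not the thesis data.

```python
import numpy as np

# J = integral over wavelength of actinic flux * absorption cross section * quantum yield.
# The values below are placeholders chosen only to make the units and shapes clear.
wavelength_nm = np.linspace(300.0, 420.0, 25)              # nm
actinic_flux = np.full_like(wavelength_nm, 1.0e14)         # photons cm^-2 s^-1 nm^-1 (placeholder)
cross_section = np.full_like(wavelength_nm, 2.0e-19)       # cm^2 (placeholder)
quantum_yield = np.where(wavelength_nm < 398.0, 1.0, 0.0)  # step cut-off (placeholder)

d_lambda = wavelength_nm[1] - wavelength_nm[0]
J = np.sum(actinic_flux * cross_section * quantum_yield) * d_lambda   # s^-1, rectangle rule
print(f"J(NO2) ~ {J:.2e} s^-1")
```

In-chamber values would then be obtained by scaling such ex-chamber coefficients for wall transmittance, floor albedo and cloud attenuation, as the abstract describes.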
Abstract:
In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
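One of the listed special cases, manifold regularization, can be sketched as Laplacian-regularized least squares; the kernel, graph construction and hyperparameters below are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # pairwise squared distances, then a Gaussian kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lap_rls(X, y_labeled, n_labeled, gamma_a=1e-2, gamma_i=1e-1, gamma=1.0):
    """Fit f(x) = sum_i alpha_i k(x, x_i) over labeled + unlabeled points (labeled first)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)                      # Gram matrix over all points
    W = rbf_kernel(X, X, gamma)                      # dense similarity graph (for brevity)
    L = np.diag(W.sum(axis=1)) - W                   # unnormalised graph Laplacian
    J = np.zeros((n, n))
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)    # selects the labeled block
    y = np.zeros(n)
    y[:n_labeled] = y_labeled
    A = J @ K + gamma_a * n_labeled * np.eye(n) + gamma_i * L @ K
    alpha = np.linalg.solve(A, y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# toy usage: 4 labeled and 16 unlabeled 2-D points
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
f = lap_rls(X, y_labeled=np.array([1.0, 1.0, -1.0, -1.0]), n_labeled=4)
print(f(X[:4]))
```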
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
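A hedged sketch of the cooperative coevolutionary structure (not the paper's CCGA): one subpopulation evolves resource assignments, another evolves task orderings, and individuals are evaluated together with the best collaborator from the other subpopulation; the fitness function here is a placeholder rather than a QoS or deadline model.

```python
import random

N_TASKS, N_RESOURCES, POP, GENERATIONS = 6, 3, 20, 10

def placeholder_fitness(assignment, ordering):
    # stand-in for the simulated execution time / cost of a combined solution
    return sum(res * (pos + 1) for pos, res in zip(ordering, assignment))

def random_assignment():
    return [random.randrange(N_RESOURCES) for _ in range(N_TASKS)]

def random_ordering():
    order = list(range(N_TASKS))
    random.shuffle(order)
    return order

assignments = [random_assignment() for _ in range(POP)]
orderings = [random_ordering() for _ in range(POP)]

for _ in range(GENERATIONS):
    best_assign = min(assignments, key=lambda a: placeholder_fitness(a, orderings[0]))
    best_order = min(orderings, key=lambda o: placeholder_fitness(best_assign, o))
    # rank each subpopulation against the other's best collaborator,
    # then keep the better half and refill with random immigrants
    assignments.sort(key=lambda a: placeholder_fitness(a, best_order))
    orderings.sort(key=lambda o: placeholder_fitness(best_assign, o))
    assignments = assignments[:POP // 2] + [random_assignment() for _ in range(POP - POP // 2)]
    orderings = orderings[:POP // 2] + [random_ordering() for _ in range(POP - POP // 2)]

print(placeholder_fitness(assignments[0], orderings[0]))
```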
Abstract:
All organisations, irrespective of size and type, need effective information security management (ISM) practices to protect vital organisational information assets. However, little is known about the information security management practices of nonprofit organisations. Australian nonprofit organisations (NPOs) employed 889,900 people, managed 4.6 million volunteers and contributed $40,959 million to the economy during 2006-2007 (Australian Bureau of Statistics, 2009). This thesis describes the perceptions of information security management in two Australian NPOs and examines the appropriateness of the ISO 27002 information security management standard in an NPO context. The overall approach to the research is interpretive. A collective case study has been performed, consisting of two instrumental case studies with the researcher embedded within two NPOs for extended periods of time. Data gathering and analysis were informed by grounded theory and action research, and the Technology Acceptance Model was utilised as a lens to explore the findings and provide limited generalisability to other contexts. The major findings include a distinct lack of information security management best practice in both organisations. ISM governance and risk management were lacking, and ISM policy was either outdated or non-existent. While some user-focused ISM practices were evident, references to standards, such as ISO 27002, were absent. The main factor that negatively impacted ISM practices was the lack of resources available for ISM in the NPOs studied. Two novel aspects of information security discovered in this research were the importance of accuracy and consistency of information. The contribution of this research is a preliminary understanding of ISM practices and perceptions in NPOs. Recommendations for a new approach to managing information security in nonprofit organisations have been proposed.