882 results for On-site Systems
Abstract:
Major food adulteration and contamination events occur with alarming regularity and are known to be episodic; the question is not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can make them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This makes the task of deciding which analytical methods are most suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability, that much more challenging. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, as well as rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, and ideally handheld and/or remote sensor devices, that can be taken to or positioned on- or at-line at points of vulnerability along complex food supply networks, and should require a minimum of background training to acquire information-rich data rapidly (i.e. point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We offer a future perspective on how this range of detection methods in the growing sensor portfolio, together with developments in computational and information sciences such as predictive computing and the Internet of Things, will form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains: food fraud is a problem of systems, and therefore requires systems-level solutions and thinking.
Abstract:
Jordan is adopting Enterprise Resource Planning (ERP) systems in both its public and private sectors. Jordan's emerging private sector has historically close ties to the public sector, though a global market orientation requires a shift in its organisational culture. ERP systems, however, embed business processes which do not necessarily fit with traditional cultural practices, and implementation success is not assured. This study looks at perceptions of both public and private sector ERP implementations in Jordan and assesses these on various measures of success. There were few differences between the public and private sectors, but the benefits actually realised in Jordanian ERP implementations fell short of claims made for the technology in other cultures.
Abstract:
We present a novel approach for preprocessing systems of polynomial equations via graph partitioning. The variable-sharing graph of a system of polynomial equations is defined. If such a graph is disconnected, then the corresponding system of equations can be split into smaller ones that can be solved individually. This can provide a tremendous speed-up in computing the solution to the system, but is unlikely to occur either randomly or in applications. However, by deleting certain vertices of the graph, the variable-sharing graph can be disconnected in a balanced fashion, and in turn the system of polynomial equations can be separated into smaller systems of near-equal sizes. In graph theory terms, this process is equivalent to finding balanced vertex partitions with minimum-weight vertex separators. Techniques for finding these vertex partitions are discussed, and experiments are performed to evaluate their practicality for general graphs and systems of polynomial equations. Applications of this approach in algebraic cryptanalysis of symmetric ciphers are presented: for the QUAD family of stream ciphers, we show how a malicious party can manufacture conforming systems that can be easily broken. For the stream ciphers Bivium and Trivium, we achieve significant speedups in algebraic attacks against them, mainly in a partial key guess scenario. In each of these cases, the systems of polynomial equations involved are well-suited to our graph partitioning method. These results may open a new avenue for evaluating the security of symmetric ciphers against algebraic attacks.
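The splitting step described above can be illustrated with a minimal sketch (not the paper's implementation): each equation is represented simply as the set of variables it contains, and a union-find structure recovers the connected components of the variable-sharing graph, i.e. the independent subsystems.

```python
from collections import defaultdict

def variable_sharing_components(equations):
    """Split a polynomial system into independent subsystems.

    Each equation is given as a set of the variables it contains.
    Two variables are adjacent in the variable-sharing graph when they
    appear in the same equation; connected components of that graph
    correspond to subsystems that can be solved separately.
    Returns a list of lists of equation indices, one per component.
    """
    parent = {}  # union-find over variables

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for vars_ in equations:
        vs = list(vars_)
        for v in vs[1:]:
            union(vs[0], v)  # all variables of one equation share a root

    groups = defaultdict(list)
    for i, vars_ in enumerate(equations):
        groups[find(next(iter(vars_)))].append(i)
    return list(groups.values())

# Toy system: equations over {x,y} and {y,z} share y; {u,v} is independent.
system = [{"x", "y"}, {"y", "z"}, {"u", "v"}]
print(variable_sharing_components(system))  # → [[0, 1], [2]]
```

When the graph is connected, the same routine returns a single component; the paper's contribution is then to delete a small vertex separator first so that the remaining graph splits into balanced pieces.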
Abstract:
Office building retrofit activity is intensifying as existing buildings age. At the same time, building owners and occupants are looking for environmentally sustainable products. These retrofit projects usually take place in central business districts (CBDs), with on-site waste becoming one of the critical issues. Small and Medium Enterprises (SMEs) carry out most of the work in retrofit projects as subcontractors. Despite their large involvement, they often do not have adequate resources to deal with the specific technical challenges and project risks related to waste, and little research has been done on their performance in waste management operations. This paper identifies characteristics of on-site waste in office building retrofit projects. It examines the specific requirements for contractors to manage waste in these projects before exploring the existing performance of SMEs. By comparing the requirements for SMEs with their potential areas for improvement, a framework is established for promoting the performance of SMEs in on-site waste management of office building retrofit projects. The paper aims to raise the awareness and commitment of SMEs, as subcontractors, to waste management. It also explores ways of supporting SMEs in experience accumulation, performance promotion and project culture establishment towards effective and efficient on-site waste management in the growing sector of office building retrofit and upgrade.
Abstract:
The implementation of the National Professional Standards for Teachers (Australian Institute for Teaching and School Leadership (AITSL), 2011) will require all teachers to undertake 30 hours per year of professional development (PD) to maintain their registration. However, defining what constitutes effective PD is complex. This article discusses an approach used by Narangba Valley State High School (SHS) in Queensland which involves effective on-site PD, resulting in improved student outcomes. In addition to the school-administered growth and learning (GAL) plans for each teacher, the school worked collaboratively with an external person (a university lecturer) and implemented an effective, sustainable, whole-school approach to PD which was ongoing, on time, on task, on the mark, and on-the-spot (Jetnikoff & Smeed, 2012). The article unpacks an interview with Ross Mackay, the Narangba Valley SHS executive principal and one of the authors of this paper, and provides practical advice for other school leaders wishing to implement a similar approach to PD.
Abstract:
This paper presents research findings and design strategies that illustrate how digital technology can be applied as a tool for hybrid placemaking in ways that would not be possible in purely digital or physical space. Digital technology has revolutionised the way people learn and gather new information. This trend has challenged the role of the library as a physical place, as well as the interplay of the digital and physical aspects of the library. The paper provides an overview of how the penetration of digital technology into everyday life has affected the library as a place, both as designed by placemakers and as perceived by library users. It then identifies a gap in current library research about the use of digital technology as a tool for placemaking, and reports results from a study of Gelatine, a custom-built user check-in system that displays real-time user information on a set of public screens. Gelatine and its evaluation at The Edge, at the State Library of Queensland, illustrate how combining the affordances of social, spatial and digital space can improve the connected learning experience among on-site visitors. Future design strategies involving gamifying the user experience in libraries are described, based on Gelatine's infrastructure. The presented design ideas and concepts are relevant for managers and designers of libraries as well as other informal, social learning environments.
Abstract:
Objectives: To determine: (1) the accuracy of cytology scientists at assessing specimen adequacy by rapid on-site evaluation (ROSE) at fine needle aspiration (FNA) cytology collections; and (2) whether thyroid FNA with ROSE has lower inadequacy rates than non-attended FNAs. Methods: The ROSE of adequacy for 3032 specimens from 17 anatomical sites collected over a 20-month period was compared with the final report assessment of adequacy. ROSE was performed by 19 cytology scientists. The report profile for 1545 thyroid nodules with ROSE was compared with that for 1536 consecutive non-ROSE thyroid FNAs reported by the same cytopathologists during the study period. Results: ROSE was adequate in 75% (2276/3032), inadequate in 12% (366/3032) and in 13% (390/3032) no opinion was rendered. Of the 2276 cases assessed as adequate by ROSE, 2268 (99.6%) were finally reported as adequate for assessment; eight specimens had adequacy downgraded on the final report. Fifty-eight per cent of cases with a ROSE assessment of inadequate were reported as adequate (212/366), whereas 93% (363/390) with no opinion rendered were reported as adequate. The overall final report adequacy rate for the 3032 specimens was 94% (2843/3032). Confirmation of a ROSE of adequacy at reporting was uniformly high amongst the 19 scientists, ranging from 98% to 100%. The inadequacy rate for thyroid FNAs with ROSE (6%) was significantly (P < 0.0001) lower than for non-ROSE thyroid FNAs (17%). A significantly (P = 0.02) higher proportion of adequate ROSE thyroid specimens was reported with abnormalities, compared with non-ROSE thyroid collections. Conclusions: Cytology scientists are highly accurate at determining specimen adequacy at ROSE for a wide range of body sites. ROSE of thyroid FNAs can significantly reduce inadequate reports.
Abstract:
Power system disturbances are often caused by faults on transmission lines. When faults occur in a power system, the protective relays detect the fault and initiate tripping of the appropriate circuit breakers, which isolate the affected part from the rest of the power system. Generally, Extra High Voltage (EHV) transmission substations in power systems are connected by multiple transmission lines to neighboring substations. In some cases, relays can mal-operate under varying operating conditions because of inappropriate coordination of relay settings, which reduces the power system's margins for contingencies. Hence, the reliability of power system protective relaying becomes increasingly important. In this paper an approach is presented that uses a Support Vector Machine (SVM) as an intelligent tool for identifying the faulted line emanating from a substation and finding the distance of the fault from the substation. Results on a 24-bus equivalent EHV system, part of the Indian southern grid, are presented for illustration. This approach is particularly important for avoiding mal-operation of relays following a disturbance on a neighboring line connected to the same substation, and for assuring secure operation of the power system.
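The classification step can be sketched in miniature. The following is a hedged toy example, not the paper's 24-bus setup: it trains a linear SVM (primal hinge loss, sub-gradient descent) on hypothetical two-feature "fault signatures", where in practice one would use measured voltage/current features and a library implementation.

```python
import random

def train_linear_svm(X, y, epochs=200, lam=0.01, lr=0.1):
    """Train a linear SVM by sub-gradient descent on the primal
    objective lam*|w|^2/2 + mean hinge loss.

    X: list of feature vectors; y: labels in {-1, +1}.
    Returns (w, b) defining the decision function sign(w.x + b).
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    rng = random.Random(0)  # fixed seed for reproducibility
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # hinge active: regularizer + loss sub-gradient
                w = [wj - lr * (lam * wj - y[i] * xj)
                     for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:           # hinge inactive: regularizer only
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical fault signatures (two features per sample), label +1 for
# "fault on line A", -1 for "fault on line B".
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, x) for x in X])  # → [1, 1, -1, -1]
```

Real fault-location work would typically use a kernel SVM from an established library and separate regression for the distance estimate; the sketch only shows the shape of the classifier.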
Abstract:
GPUs have been used for parallel execution of DOALL loops. However, loops with indirect array references can potentially cause cross iteration dependences which are hard to detect using existing compilation techniques. Applications with such loops cannot easily use the GPU and hence do not benefit from the tremendous compute capabilities of GPUs. In this paper, we present an algorithm to compute at runtime the cross iteration dependences in such loops. The algorithm uses both the CPU and the GPU to compute the dependences. Specifically, it effectively uses the compute capabilities of the GPU to quickly collect the memory accesses performed by the iterations by executing the slice functions generated for the indirect array accesses. Using the dependence information, the loop iterations are levelized such that each level contains independent iterations which can be executed in parallel. Another interesting aspect of the proposed solution is that it pipelines the dependence computation of the future level with the actual computation of the current level to effectively utilize the resources available in the GPU. We use NVIDIA Tesla C2070 to evaluate our implementation using benchmarks from Polybench suite and some synthetic benchmarks. Our experiments show that the proposed technique can achieve an average speedup of 6.4x on loops with a reasonable number of cross iteration dependences.
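The levelization idea can be shown with a small CPU-only sketch (the paper's runtime splits this work between CPU and GPU; the function and data names here are illustrative). Each iteration records the indirect array indices it reads and writes; an iteration's level is one more than the deepest earlier iteration it conflicts with, so all iterations in one level are independent and can run in parallel.

```python
def levelize(n_iters, reads, writes):
    """Group loop iterations into levels of independent iterations.

    reads[i] / writes[i] are sets of (indirect) array indices that
    iteration i reads and writes. Iteration i depends on any earlier
    iteration j it conflicts with (RAW, WAR or WAW on a shared index).
    Returns a list of levels, each a list of iteration indices.
    """
    level = [0] * n_iters
    for i in range(n_iters):
        for j in range(i):
            conflict = (writes[j] & reads[i]      # read-after-write
                        or writes[j] & writes[i]  # write-after-write
                        or reads[j] & writes[i])  # write-after-read
            if conflict:
                level[i] = max(level[i], level[j] + 1)
    n_levels = max(level) + 1 if n_iters else 0
    return [[i for i in range(n_iters) if level[i] == k]
            for k in range(n_levels)]

# Loop body A[idx[i]] += B[i] with idx = [0, 1, 0, 2]: iterations 0 and 2
# touch the same element A[0], so iteration 2 must wait for iteration 0.
idx = [0, 1, 0, 2]
writes = [{k} for k in idx]
reads = [{k} for k in idx]   # += both reads and writes A[idx[i]]
print(levelize(4, reads, writes))  # → [[0, 1, 3], [2]]
```

The O(n^2) pairwise check is the naive formulation; the point of the paper's GPU slice functions is precisely to gather these access sets and resolve conflicts much faster than this.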
Abstract:
A block-structured adaptive mesh refinement (AMR) technique has been used to obtain numerical solutions for many scientific applications. Some block-structured AMR approaches have focused on forming patches of non-uniform sizes where the size of a patch can be tuned to the geometry of a region of interest. In this paper, we develop strategies for adaptive execution of block-structured AMR applications on GPUs, for hyperbolic directionally split solvers. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include bin-packing work units to load balance GPU computations, adaptive asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling communications for multi-GPU executions. Our experiments with synthetic and real data, for single-GPU and multi-GPU executions, on Tesla S1070 and Fermi C2070 clusters, show that our strategies result in up to a 3.23x speedup in performance over existing strategies.
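The bin-packing step can be illustrated with a minimal sketch, assuming per-patch costs are known ahead of scheduling (the paper's actual scheduler also handles CPU/GPU asynchronism via a knapsack formulation, which is not shown here). A common greedy heuristic, longest-processing-time-first, sorts work units by cost and always assigns the next unit to the lightest bin:

```python
def pack_work_units(costs, n_bins):
    """Balance non-uniform work units (e.g. AMR patches) across n_bins.

    Longest-processing-time-first heuristic: visit units in order of
    decreasing cost and place each on the currently lightest bin.
    Returns (assignment, loads): assignment[b] lists the unit indices
    given to bin b, loads[b] is that bin's total cost.
    """
    assignment = [[] for _ in range(n_bins)]
    loads = [0.0] * n_bins
    order = sorted(range(len(costs)), key=lambda i: costs[i], reverse=True)
    for i in order:
        b = min(range(n_bins), key=loads.__getitem__)  # lightest bin
        assignment[b].append(i)
        loads[b] += costs[i]
    return assignment, loads

# Hypothetical per-patch costs (e.g. cell counts of non-uniform patches)
# split across two GPUs.
costs = [7, 5, 4, 3, 2, 2, 1]
assignment, loads = pack_work_units(costs, 2)
print(loads)  # → [12.0, 12.0]
```

For this toy input the heuristic happens to find a perfect split; in general it guarantees loads within a constant factor of optimal, which is usually sufficient for GPU kernel batching.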