198 results for PARALLEL COMPUTING
Abstract:
The research reported here addresses the problem of detecting and tracking independently moving objects from a moving observer in real time, using corners as object tokens. Corners are detected using the Harris corner detector, and local image-plane constraints are employed to solve the correspondence problem. The approach relaxes the restrictive static-world assumption conventionally made, and is therefore capable of tracking independently moving and deformable objects. Tracking is performed without the use of any 3-dimensional motion model. The technique is novel in that, unlike traditional feature-tracking algorithms where feature detection and tracking are carried out over the entire image-plane, here they are restricted to those areas most likely to contain meaningful image structure. Two distinct types of instantiation regions are identified, these being the “focus-of-expansion” region and “border” regions of the image-plane. The size and location of these regions are defined from a combination of odometry information and a limited knowledge of the operating scenario. The algorithms developed have been tested on real image sequences taken from typical driving scenarios. Implementation of the algorithm using T800 Transputers has shown that near-linear speedups are achievable, and that real-time operation is possible (half-video rate has been achieved using 30 processing elements).
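For illustration, a minimal Harris-detector sketch follows (NumPy/SciPy). The mask argument, which emulates the abstract's restriction of detection to instantiation regions, and all parameter values are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_corners(img, mask=None, sigma=1.5, k=0.04, thresh=1e-2):
    """Return (row, col) corner candidates where the Harris response is strong.

    If `mask` is given, detection is restricted to True pixels, mimicking
    detection limited to instantiation regions of the image plane.
    """
    img = img.astype(np.float64)
    Ix = sobel(img, axis=1)                 # horizontal gradient
    Iy = sobel(img, axis=0)                 # vertical gradient
    # Smoothed gradient products form the local structure tensor.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    response = det - k * trace ** 2         # Harris corner response
    if mask is not None:                    # keep only instantiation regions
        response = np.where(mask, response, -np.inf)
    return np.argwhere(response > thresh * response.max())
```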
Abstract:
Fast restoration of critical loads and non-black-start generators can significantly reduce the economic losses caused by power system blackouts. In a parallel power system restoration scenario, the sectionalization of restoration subsystems plays a very important role in determining the pickup of critical loads before synchronization. Most existing research focuses on the startup of non-black-start generators. The restoration of critical loads, especially loads with cold-load characteristics, has not yet been addressed in optimizing the subsystem divisions. As a result, sectionalized restoration subsystems cannot achieve the best coordination between the pickup of loads and the ramping of generators. In order to generate sectionalizing strategies that consider the pickup of critical loads in parallel power system restoration scenarios, this paper proposes an optimization model that accounts for power system constraints, the characteristics of cold load pickup and the features of generator startup. A bi-level programming approach is employed to solve the proposed sectionalizing model: the upper level addresses the optimal sectionalizing problem for the restoration subsystems, while the lower level minimizes the outage durations of critical loads. The proposed sectionalizing model has been validated on the New England 39-bus system and the IEEE 118-bus system, and comparisons with existing methods are carried out as well.
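In generic form, the bi-level structure described in this abstract can be written as follows; the symbols (x for sectionalizing decisions, y for load-pickup schedules, F, f, G, g for objectives and constraints) are illustrative placeholders, not the paper's notation:

```latex
\begin{align*}
\min_{x}\quad & F\bigl(x,\, y^{*}(x)\bigr)
  && \text{upper level: optimal sectionalizing of restoration subsystems}\\
\text{s.t.}\quad & G(x) \le 0
  && \text{network, generator-startup and synchronization constraints}\\
& y^{*}(x) \in \arg\min_{y}\ \bigl\{\, f(x, y) : g(x, y) \le 0 \,\bigr\}
  && \text{lower level: minimize outage durations of critical loads}
\end{align*}
```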
Abstract:
Cloud computing has significantly impacted a broad range of industries, but these technologies and services have been absorbed throughout the marketplace unevenly. Some industries have moved aggressively towards cloud computing, while others have moved much more slowly. For the most part, the energy sector has approached cloud computing in a measured and cautious way, with progress often in the form of private cloud solutions rather than public ones, or hybridized information technology systems that combine cloud and existing non-cloud architectures. By moving towards cloud computing in a very slow and tentative way, however, the energy industry may prevent itself from reaping the full benefit that a more complete migration to the public cloud has brought about in several other industries. This short communication is accordingly intended to offer a high-level overview of cloud computing, and to put forward the argument that the energy sector should make a more complete migration to the public cloud in order to unlock the major system-wide efficiencies that cloud computing can provide. Also, assets within the energy sector should be designed with as much modularity and flexibility as possible so that they are not locked out of cloud-friendly options in the future.
Abstract:
The efficient computation of matrix function vector products has become an important area of research in recent times, driven in particular by two important applications: the numerical solution of fractional partial differential equations and the integration of large systems of ordinary differential equations. In this work we consider a problem that combines these two applications, in the form of a numerical solution algorithm for fractional reaction-diffusion equations that, after spatial discretisation, is advanced in time using the exponential Euler method. We focus on the efficient implementation of the algorithm on Graphics Processing Units (GPUs), as we wish to make use of the increased computational power available with this hardware. We compute the matrix function vector products using the contour integration method in [N. Hale, N. Higham, and L. Trefethen. Computing A^α, log(A), and related matrix functions by contour integrals. SIAM J. Numer. Anal., 46(5):2505–2523, 2008]. Multiple levels of preconditioning are applied to reduce the GPU memory footprint and to further accelerate convergence. We also derive an error bound for the convergence of the contour integral method that allows us to pre-determine the appropriate number of quadrature points. Results are presented that demonstrate the effectiveness of the method for large two-dimensional problems, showing a speedup of more than an order of magnitude compared to a CPU-only implementation.
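As a point of reference, here is a minimal CPU sketch of the contour-integration idea: f(A)b is approximated by trapezoidal quadrature of the Cauchy integral over a circular contour enclosing the spectrum of A. The circular contour, the dense shifted solves and the test values are assumptions made for illustration; the cited method uses carefully chosen conformal maps and, in the paper, GPU-accelerated preconditioned solves.

```python
import numpy as np

def contour_fAb(A, b, f, centre, radius, n_quad=32):
    """Approximate f(A) @ b via the Cauchy integral
    f(A)b = (1/(2*pi*i)) * oint f(z) (zI - A)^{-1} b dz,
    discretised with the trapezoidal rule on a circle of the given
    centre and radius (the circle must enclose the spectrum of A)."""
    n = A.shape[0]
    theta = 2.0 * np.pi * (np.arange(n_quad) + 0.5) / n_quad
    acc = np.zeros(n, dtype=complex)
    for t in theta:
        zk = centre + radius * np.exp(1j * t)
        xk = np.linalg.solve(zk * np.eye(n) - A, b)    # one shifted solve per node
        acc += f(zk) * radius * np.exp(1j * t) * xk    # dz = i*r*e^{i*theta} dtheta
    return (acc / n_quad).real  # real for real A, b with f real on the real axis

# Small check against a dense eigendecomposition (eigenvalues of A are
# roughly 1.38 and 3.62, enclosed by the circle of centre 2.5, radius 3).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 0.0])
w, V = np.linalg.eigh(A)
exact = V @ (np.exp(w) * (V.T @ b))
approx = contour_fAb(A, b, np.exp, centre=2.5, radius=3.0)
assert np.allclose(approx, exact)
```

The trapezoidal rule converges geometrically on this periodic analytic integrand, which is why a modest, pre-determined number of quadrature points suffices, each costing one shifted linear solve.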
Abstract:
Asset management has broadened from a focus on maintenance management to whole-of-life-cycle asset management, requiring a suite of new competencies from asset procurement to management and disposal. Well-developed skills and competencies, as well as practical experience, are a prerequisite to maintain capability, to manage demand, and to plan, set priorities and ensure on-going asset sustainability. The focus of this paper is to establish critical understandings of data, information and knowledge for asset management, along with the way in which benchmarking these attributes through computer-aided design may aid a strategic approach to asset management. The paper provides suggestions to improve the sharing, integration and creation of asset-related knowledge through the application of codification and personalization approaches.
Abstract:
Fair Use Week has celebrated the evolution and development of the defence of fair use under copyright law in the United States. As Krista Cox noted, ‘As a flexible doctrine, fair use can adapt to evolving technologies and new situations that may arise, and its long history demonstrates its importance in promoting access to information, future innovation, and creativity.’ While the defence of fair use has flourished in the United States, the adoption of the defence of fair use in other jurisdictions has often been stymied. Professor Peter Jaszi has reflected: ‘We can only wonder (with some bemusement) why some of our most important foreign competitors, like the European Union, haven’t figured out that fair use is, to a great extent, the “secret sauce” of U.S. cultural competitiveness.’ Jurisdictions such as Australia have been at a dismal disadvantage, because they lack the freedoms and flexibilities of the defence of fair use.
Abstract:
The requirement of distributed computing of all-to-all comparison (ATAC) problems in heterogeneous systems is increasingly important in various domains. Though Hadoop-based solutions are widely used, they are inefficient for the ATAC pattern, which is fundamentally different from the MapReduce pattern for which Hadoop is designed. They exhibit poor data locality and unbalanced allocation of comparison tasks, particularly in heterogeneous systems. This results in massive data movement at runtime and ineffective utilization of computing resources, significantly affecting the overall computing performance. To address these problems, a scalable and efficient data and task distribution strategy is presented in this paper for processing large-scale ATAC problems in heterogeneous systems. It not only saves storage space but also achieves load balancing and good data locality for all comparison tasks. Experiments with bioinformatics examples show that about 89% of the ideal performance capacity of the multiple machines has been achieved using the approach presented in this paper.
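To make the ATAC pattern concrete, here is a simple group-based distribution sketch: items are split into g groups, each worker stores at most two groups and runs every comparison between them, so all pairs are covered while each item is replicated to only g workers instead of all of them. This is a classic illustration of co-locating data with comparison tasks, not the paper's strategy, and all names are invented for the sketch.

```python
from itertools import combinations_with_replacement

def atac_plan(items, n_groups):
    """Map each (unordered) pair of groups to one worker; return
    worker -> {stored data, comparison tasks local to that data}."""
    groups = [items[i::n_groups] for i in range(n_groups)]
    plan = {}
    for w, (a, b) in enumerate(combinations_with_replacement(range(n_groups), 2)):
        if a == b:
            stored = groups[a]
            # All pairs within one group, each pair compared exactly once.
            tasks = [(x, y) for i, x in enumerate(groups[a])
                     for y in groups[a][i + 1:]]
        else:
            stored = groups[a] + groups[b]
            # All cross-group pairs between the two stored groups.
            tasks = [(x, y) for x in groups[a] for y in groups[b]]
        plan[w] = {"stores": stored, "compares": tasks}
    return plan

# 10 items, 3 groups -> 6 workers jointly cover all 45 pairwise comparisons.
plan = atac_plan(list(range(10)), n_groups=3)
assert sum(len(v["compares"]) for v in plan.values()) == 10 * 9 // 2
```

In a heterogeneous system, a scheduler would additionally size the groups so that faster machines receive proportionally more comparison tasks; the sketch keeps groups uniform for clarity.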
Abstract:
Purpose – While many studies have predominantly looked at the benefits and risks of cloud computing, little is known about whether and to what extent institutional forces play a role in cloud computing adoption. The purpose of this paper is to explore the role of institutional factors in the top management team’s (TMT’s) decision to adopt cloud computing services.
Design/methodology/approach – A model is developed and tested with data from an Australian survey using the partial least squares modeling technique.
Findings – The results suggest that mimetic and coercive pressures influence the TMT’s beliefs in the benefits of cloud computing. The results also show that the TMT’s beliefs drive TMT participation, which in turn affects the intention to increase the adoption of cloud computing solutions.
Research limitations/implications – Future studies could incorporate the influences of local actors who might also press for innovation.
Practical implications – Given the influence of institutional forces and the plethora of cloud-based solutions on the market, it is recommended that TMTs exercise a high degree of caution when deciding on the types of applications to be outsourced, as organizational requirements for performance and security will differ.
Originality/value – The paper contributes to the growing empirical literature on cloud computing adoption and offers the institutional framework as an alternative lens with which to interpret cloud-based information technology outsourcing.
Abstract:
Adopting a multi-theoretical approach, I examine external auditors’ perceptions of the reasons why organizations do or do not adopt cloud computing. I interview forensic accountants and IT experts about the adoption, acceptance, institutional motives, and risks of cloud computing. Although the medium to large accounting firms where the external auditors worked almost exclusively used private clouds, both private and public cloud services were gaining a foothold among many of their clients. Despite the advantages of cloud computing, data confidentiality and the involvement of foreign jurisdictions remain a concern, particularly if the data are moved outside Australia. Additionally, some organizations seem to understand neither the technology itself nor their own requirements, which may lead to poorly negotiated contracts and service agreements. To minimize the risks associated with cloud computing, many organizations turn to hybrid solutions or private clouds that include national or dedicated data centers. To the best of my knowledge, this is the first empirical study that reports on cloud computing adoption from the perspectives of external auditors.
Abstract:
This research studied distributed computing of all-to-all comparison problems with big data sets. The thesis formalised the problem, and developed a high-performance and scalable computing framework with a programming model, data distribution strategies and task scheduling policies to solve the problem. The study considered storage usage, data locality and load balancing for performance improvement in solving the problem. The research outcomes can be applied in bioinformatics, biometrics, data mining and other domains in which all-to-all comparisons are a typical computing pattern.
Abstract:
Quality of Service (QoS) is a new issue in cloud-based MapReduce, which is a popular computation model for parallel and distributed processing of big data. QoS guarantee is challenging in a dynamic computation environment because a fixed resource allocation may become under-provisioning, which leads to QoS violation, or over-provisioning, which increases unnecessary resource cost. This requires runtime resource scaling to adapt to environmental changes for QoS guarantee. Aiming to guarantee the QoS, which is treated as a hard deadline in this work, this paper develops a theory to determine how and when resources are scaled up or down for cloud-based MapReduce. The theory employs a nonlinear transformation to define the problem in a reverse resource space, simplifying the theoretical analysis significantly. Theoretical results are then presented in three theorems giving sufficient conditions for guaranteeing the QoS of cloud-based MapReduce. The superiority and applications of the theory are demonstrated through case studies.
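The scale-up/scale-down decision can be illustrated with a deliberately simplified sketch: under a linear work/rate model, resize the cluster to the smallest machine count whose aggregate rate still meets the hard deadline. The linear model and all names are assumptions of this illustration; the paper's theory is instead developed in a transformed (reverse resource) space with formal sufficient conditions.

```python
import math

def machines_needed(remaining_tasks, rate_per_machine, time_left):
    """Smallest machine count whose aggregate rate finishes the
    remaining work before the deadline (assumes a linear rate model)."""
    if time_left <= 0:
        raise ValueError("deadline already passed; QoS cannot be met")
    return math.ceil(remaining_tasks / (rate_per_machine * time_left))

def rescale(current, remaining_tasks, rate_per_machine, time_left):
    """Return the new cluster size at a monitoring checkpoint."""
    needed = machines_needed(remaining_tasks, rate_per_machine, time_left)
    if needed > current:
        return needed           # scale up: avoid QoS (deadline) violation
    return max(needed, 1)       # scale down: trim over-provisioned cost

# e.g. 1200 tasks left, 2 tasks/s per machine, 100 s to deadline -> 6 machines
assert rescale(4, 1200, 2.0, 100.0) == 6
```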
Abstract:
There is an increase in the uptake of cloud computing services (CCS). CCS is adopted in the form of a utility, and it incorporates business risks of the service providers and intermediaries. Thus, the adoption of CCS will change the risk profile of an organization. In this situation, organizations need to develop competencies by reconsidering their IT governance structures to achieve a desired level of IT-business alignment and maintain their risk appetite to source business value from CCS. We use resource-based theories to suggest that collaborative board oversight of CCS, competencies relating to CCS information and financial management, and a CCS-related continuous audit program can contribute to business process performance improvements and overall firm performance. Using survey data, we find evidence of a positive association between these IT governance considerations and business process performance. We also find evidence of a positive association between business process performance improvements and overall firm performance. The results suggest that the suggested IT governance considerations can contribute to CCS-related IT-business alignment and lead to anticipated business value from CCS. This study provides guidance to organizations on the competencies required to secure business value from CCS.
Abstract:
The concept of cloud computing services (CCS) is appealing to small and medium enterprises (SMEs). However, while there is a significant push by various authorities for SMEs to adopt CCS, knowledge of the key considerations in adopting CCS is very limited. We use the technology-organization-environment (TOE) framework to suggest that a strategic and incremental intent, understanding the organizational structure and culture, understanding the external factors, and consideration of the human resource capacity can contribute to sustainable business value from CCS. Using survey data, we find evidence of a positive association between these considerations and the CCS-related business objectives. We also find evidence of a positive association between the CCS-related business objectives and CCS-related financial objectives. The results suggest that the proposed considerations can ensure sustainable business value from CCS. This study provides guidance to SMEs on a path to adopting CCS with the intention of a long-term commitment and achieving sustainable business value from these services.
Abstract:
Background: Ankylosing spondylitis (AS) is an immune-mediated arthritis particularly targeting the spine and pelvis and is characterised by inflammation, osteoproliferation and frequently ankylosis. Current treatments that predominantly target inflammatory pathways have disappointing efficacy in slowing disease progression. Thus, a better understanding of the causal association and pathological progression from inflammation to bone formation, particularly whether inflammation directly initiates osteoproliferation, is required.
Methods: The proteoglycan-induced spondylitis (PGISp) mouse model of AS was used to histopathologically map the progressive axial disease events, assess molecular changes during disease progression and define disease progression using unbiased clustering of semi-quantitative histology. PGISp mice were followed over a 24-week time course. Spinal disease was assessed using a novel semi-quantitative histological scoring system that independently evaluated the breadth of pathological features associated with PGISp axial disease, including inflammation, joint destruction and excessive tissue formation (osteoproliferation). Matrix components were identified using immunohistochemistry.
Results: Disease initiated with inflammation at the periphery of the intervertebral disc (IVD) adjacent to the longitudinal ligament, reminiscent of enthesitis, and was associated with upregulated tumor necrosis factor and metalloproteinases. After a lag phase, established inflammation was temporospatially associated with destruction of IVDs, cartilage and bone. At later time points, advanced disease was characterised by substantially reduced inflammation, excessive tissue formation and ectopic chondrocyte expansion. These distinct features differentiated affected mice into early, intermediate and advanced disease stages. Excessive tissue formation was observed in vertebral joints only if the IVD had been destroyed as a consequence of the early inflammation. Ectopic excessive tissue was predominantly chondroidal, with chondrocyte-like cells embedded within collagen type II- and X-rich matrix. This corresponded with upregulation of mRNA for the cartilage markers Col2a1, Sox9 and Comp. Osteophytes, though infrequent, were more prevalent in later disease.
Conclusions: The inflammation-driven IVD destruction was shown to be a prerequisite for axial disease progression to osteoproliferation in the PGISp mouse. Osteoproliferation led to vertebral body deformity and fusion but was never seen concurrent with persistent inflammation, suggesting a sequential process. The findings support that early intervention with anti-inflammatory therapies will be needed to limit destructive processes and consequently prevent progression of AS.