801 results for Response time (computer systems)
Abstract:
The construction industry is categorised as an information-intensive industry and described as one of the most important industries in any developed country, facing a period of rapid and unparalleled change (Industry Science Resources 1999; Love P.E.D., Tucker S.N. et al. 1996). Project communications are becoming increasingly complex, with a growing need and fundamental drive to collaborate electronically at project level and beyond (Olesen K. and Myers M.D. 1999; Thorpe T. and Mead S. 2001; CITE 2003). Yet the industry is also identified as having a considerable lack of knowledge and awareness of innovative information and communication technology (ICT) and web-based communication processes, systems and solutions that may prove beneficial in the procurement, delivery and life cycle of projects (NSW Government 1998; Kajewski S. and Weippert A. 2000). The Internet has arguably revolutionised the way in which information is stored, exchanged and viewed, opening new avenues for business that only a decade ago were deemed almost inconceivable (DCITA 1998; IIB 2002). In an attempt to put these ‘new avenues of business’ into perspective, this report provides an overall ‘snapshot’ of current public and private construction industry sector opportunities and practices in the implementation and application of web-based ICT tools, systems and processes (e-Uptake). Research found that, even with a reserved uptake, the construction industry and its participating organisations are making concerted efforts (fortunately with positive results) to take up innovative forms of doing business via the Internet, including e-Tendering, which makes it possible to manage the entire tender letting process electronically and online (Anumba C.J. and Ruikar K. 2002; ITCBP 2003). Furthermore, Government, often a key client within the construction industry, with its increased tendency to transact its business electronically, undoubtedly affects how private industry consultants, contractors, suppliers and others do business (Murray M. 2003) by offering a wide range of current and anticipated e-facilities and services, including e-Tendering (Ecommerce 2002). Overall, doing business electronically is found to have a profound impact on the way today’s construction businesses operate, streamlining existing processes, with the growth of innovative tools such as e-Tendering offering the construction industry new responsibilities and opportunities for all parties involved (ITCBP 2003). It is therefore important that these opportunities be accessible to as many construction industry businesses as possible (The Construction Confederation 2001). Historically, there is a considerable exchange of information between the various parties during a tendering process, where accuracy and efficiency of documentation are critical. Traditionally this process is either paper-based (involving large volumes of supporting tender documentation) or conducted via a number of stand-alone, incompatible computer systems, usually costly to both the client and the contractor. As such, a standard electronic exchange format that allows all parties involved in an electronic tender process to access a single system via the Internet saves both time and money, eliminates transcription errors and increases the speed of bid analysis (The Construction Confederation 2001).
Supporting this research project’s aims and objectives, researchers set out to determine the construction industry’s current ‘state of play’ in relation to e-Tendering opportunities. The report also provides brief introductions to several Australian and international e-Tender systems identified during this investigation. e-Tendering, in its simplest form, is described as the electronic publishing, communicating, accessing, receiving and submitting of all tender-related information and documentation via the Internet, thereby replacing the traditional paper-based tender process and achieving a more efficient and effective business process for all parties involved (NT Government 2000; NT Government 2000; NSW Department of Commerce 2003; NSW Government 2003). Although most of the e-Tender websites investigated at the time maintain that their tendering processes and capabilities are ‘electronic’, research shows these e-Tendering systems vary from reasonably advanced to more ‘basic’ electronic tender notification and archiving services for various industry sectors. Research also indicates an e-Tender system should have a number of basic features and capabilities, including:
• All tender documentation is distributed via a secure web-based tender system, thereby avoiding the need for collating paperwork and couriers.
• The client/purchaser should be able to upload a notice and/or invitation to tender onto the system.
• Notification is sent out electronically (usually via email) for suppliers to download the information and return their responses electronically (online).
• During the tender period, updates and queries are exchanged through the same e-Tender system.
• The client/purchaser should only be able to access the tenders after the deadline has passed (a minimal sketch of this sealed-bid behaviour follows below).
• All tender-related information is held in a central database, which should be easily searchable and fully audited, with all activities recorded.
• It is essential that tender documents cannot be read or submitted by unauthorised parties.
• Users of the e-Tender system are properly identified and registered via controlled access. In simple terms, security has to be as good as, if not better than, a manual tender process. Data is to be encrypted and users authenticated by means such as digital signatures, electronic certificates or smartcards.
• All parties must be assured that no 'undetected' alterations can be made to any tender.
• The tenderer should be able to amend the bid right up to the deadline, whilst the client/purchaser cannot obtain access until the submission deadline has passed.
• The e-Tender system may also include features such as a database of service providers with spreadsheet-based pricing schedules, which can make it easier for a potential tenderer to electronically prepare and analyse a tender.
Research indicates the efficiency of an e-Tender process is well supported internationally, with a significant number of similar e-Tender benefits identified during this investigation. Both construction industry and Government participants generally agree that the implementation of an automated e-Tendering process or system enhances the overall quality, timeliness and cost-effectiveness of a tender process, and provides a more streamlined method of receiving, managing and submitting tender documents than the traditional paper-based process.
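The 'sealed until deadline' requirement above is the heart of e-Tender security. A minimal sketch of that behaviour follows, assuming hypothetical names (`TenderBox`) and using the Python `cryptography` package's Fernet cipher as a simple stand-in for the digital signatures, certificates and smartcards a production system would use:

```python
# Minimal sketch of a sealed-bid store: bids are encrypted on submission and
# cannot be opened by the client/purchaser until the deadline has passed.
# Illustrative only; names are hypothetical, and a production system would
# rely on PKI, digital signatures and audited key escrow.
from datetime import datetime, timezone
from cryptography.fernet import Fernet

class TenderBox:
    def __init__(self, deadline: datetime):
        self.deadline = deadline
        self._key = Fernet.generate_key()   # held in escrow, not by the purchaser
        self._bids = {}                     # tenderer id -> encrypted bid

    def submit(self, tenderer: str, bid: bytes, now: datetime) -> None:
        if now > self.deadline:
            raise ValueError("tender period closed")
        # Re-submission before the deadline amends the bid, as the report requires.
        self._bids[tenderer] = Fernet(self._key).encrypt(bid)

    def open_bids(self, now: datetime) -> dict:
        if now <= self.deadline:
            raise PermissionError("bids are sealed until the deadline")
        f = Fernet(self._key)
        return {t: f.decrypt(c) for t, c in self._bids.items()}

box = TenderBox(deadline=datetime(2024, 7, 1, tzinfo=timezone.utc))
box.submit("builder_a", b"AUD 1,250,000", now=datetime(2024, 6, 1, tzinfo=timezone.utc))
print(box.open_bids(now=datetime(2024, 7, 2, tzinfo=timezone.utc)))
```

The design point illustrated is that a decryption key, not just an access-control flag, gates bid opening; in a real system that key would sit in audited escrow rather than inside the same object.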
On the other hand, whilst there are undoubtedly many more barriers than those documented here, researchers also identified a range of challenges and perceptions that hinder the uptake of this innovative approach to tendering electronically. A central concern is security when industry organisations have to use the Internet for electronic information transfer. As a result, when it comes to e-Tendering, industry participants insist that these innovative tendering systems be developed to ensure the utmost security and integrity. Finally, if Australian organisations continue to explore the competitive ‘dynamics’ of the construction industry without recognising the current and future trends and benefits of adopting innovative processes such as e-Tendering, they will limit their opportunities to expand into overseas markets and allow international firms to continue entering local markets successfully. As such, researchers believe that increased knowledge, awareness and successful implementation of innovative systems and processes raise great expectations regarding their contribution towards ‘stimulating’ the globalisation of electronic procurement activities and improving overall business and project performance throughout the construction industry sectors and the wider marketplace (NSW Government 2002; Harty C. 2003; Murray M. 2003; Pietroforte R. 2003). Achieving the successful integration of an innovative e-Tender solution with an existing/traditional process can be complex and, if not done correctly, could lead to failure (Bourn J. 2002).
Abstract:
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and TCAS). This paper describes the development and evaluation of a real-time vision-based collision detection system suitable for fixed-wing aerial robotics. Using two fixed-wing UAVs to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 m to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advanced warning of between 8 and 10 seconds ahead of impact, which approaches the 12.5-second response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units found on commercial off-the-shelf graphics devices. Our chosen GPU device, suitable for integration onto UAV platforms, can be expected to handle real-time processing of 1024 by 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft, where all processing is performed onboard, will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms.
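The advanced-warning figures quoted above follow directly from detection distance divided by closing speed. A small sketch makes the arithmetic explicit; the closing speeds below are assumptions chosen to reproduce the quoted 8-10 second range, not values taken from the paper:

```python
# Warning time = detection distance / closing speed. The speeds below are
# illustrative assumptions consistent with the 8-10 s warning quoted above;
# the paper's actual closing-speed assumptions may differ.
def warning_time(detection_distance_m: float, closing_speed_mps: float) -> float:
    return detection_distance_m / closing_speed_mps

for distance, speed in [(400.0, 50.0), (900.0, 90.0)]:
    t = warning_time(distance, speed)
    print(f"{distance:.0f} m at {speed:.0f} m/s -> {t:.1f} s warning")
# Both cases fall short of the 12.5 s response time recommended for human pilots.
```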
Abstract:
This paper presents a case study of a design for a complete micro air vehicle thruster. Fixed-pitch small-scale rotors, brushless motors, lithium-polymer cells, and embedded control are combined to produce a mechanically simple, high-performance thruster with potentially high reliability. The custom rotor design requires balancing the manufacturing simplicity and rigidity of a blade against its aerodynamic performance. An iterative steady-state aeroelastic simulator is used for holistic blade design. The aerodynamic load disturbances of the rotor-motor system in normal conditions are experimentally characterized. The motors require fast dynamic response for authoritative vehicle flight control. We detail a dynamic compensator that achieves a satisfactory closed-loop response time. The experimental rotor-motor plant displayed satisfactory thrust performance and dynamic response.
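As an informal illustration of what fast closed-loop thrust response involves, the sketch below simulates a first-order motor/rotor lag under a discrete PI compensator. All parameters (time constant, gains, sample rate) are invented for illustration and do not reflect the paper's plant model or compensator design:

```python
# First-order plant y' = (u - y)/tau under a discrete PI compensator.
# All parameters are illustrative assumptions, not the paper's values.
tau, dt = 0.15, 0.001          # plant time constant (s), simulation step (s)
kp, ki = 4.0, 30.0             # PI gains (assumed)
y, integ, setpoint = 0.0, 0.0, 1.0

for step in range(1000):       # simulate 1 s of closed-loop response
    err = setpoint - y
    integ += err * dt
    u = kp * err + ki * integ  # compensator output (e.g. normalised motor command)
    y += (u - y) * dt / tau    # forward-Euler plant update
    if step % 200 == 0:
        print(f"t={step*dt:.2f}s  thrust={y:.3f}")
```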
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
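For concreteness, here is a minimal sketch of the standard (exact) uniformization iteration; the paper's contribution, relaxing the matrix-vector products to inexact ones, is deliberately not reproduced. Function and variable names, and the truncation rule, are illustrative:

```python
# Standard uniformization: for a CTMC generator Q (rows sum to 0), choose
# Lam >= max_i |Q[i,i]|, set P = I + Q/Lam, and approximate
#   pi(t) = sum_{k>=0} exp(-Lam*t) * (Lam*t)^k / k! * pi0 @ P^k.
import numpy as np

def uniformization(Q: np.ndarray, pi0: np.ndarray, t: float, tol: float = 1e-10):
    Lam = max(-Q.diagonal().min(), 1e-12)
    P = np.eye(Q.shape[0]) + Q / Lam
    v = pi0.copy()                    # pi0 @ P^k, updated in place
    weight = np.exp(-Lam * t)         # Poisson weight for k = 0
    result = weight * v
    k, acc = 0, weight
    while 1.0 - acc > tol:            # stop once the Poisson mass is exhausted
        k += 1
        v = v @ P                     # the dominant cost: one vector-matrix product
        weight *= Lam * t / k
        acc += weight
        result += weight * v
    return result

# Two-state birth-death example: rates 1.0 (0 -> 1) and 2.0 (1 -> 0).
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(uniformization(Q, np.array([1.0, 0.0]), t=5.0))  # ~ stationary [2/3, 1/3]
```

Each loop iteration performs one vector-matrix product, which is exactly the cost the relaxation strategy targets when `Lam * t` is large.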
Abstract:
In the medical and healthcare arena, patients' data are not just personal histories but also a valuable large dataset for finding solutions to diseases. While electronic medical records are becoming popular and are used in healthcare workplaces such as hospitals, as well as by insurance companies and major stakeholders such as physicians and their patients, the accessibility of such information should be handled in a way that preserves privacy and security. Thus, finding the best way to keep the data secure has become an important issue in the area of database security. Sensitive medical data should be encrypted in databases. There are many encryption/decryption techniques and algorithms for preserving privacy and security, and their performance is an important factor while medical data are managed in databases. Another important factor is that stakeholders should decide on more cost-effective ways to reduce the total cost of ownership. As an alternative, DAS (Data as a Service) is a popular outsourcing model that satisfies cost-effectiveness, but it requires that the encryption/decryption modules be handled by trustworthy stakeholders. This research project focuses on query response times in a DAS model (AES-DAS) and analyses the comparison between the outsourcing model and an in-house model that incorporates Microsoft's built-in encryption scheme in SQL Server. The project includes building a prototype of medical database schemas, and the simulations are carried out in two stages. The first stage uses six databases to measure the performance of plain-text storage, Microsoft built-in encryption and AES-DAS. In particular, AES-DAS incorporates implementations of symmetric key encryption such as AES (Advanced Encryption Standard) and a Bucket indexing processor using a Bloom filter. The results are categorised by character type, numeric type, range queries, range queries using the Bucket index and aggregate queries. The second stage is a scalability test from 5K to 2560K records. The main result of these simulations is that, as an outsourcing model, AES-DAS using the Bucket index is around 3.32 times faster than plain AES-DAS with 70 partitions and 10K-record databases. Retrieving numeric data takes less time than character data in AES-DAS. The aggregate query response time in AES-DAS is not as consistent as that of the MS built-in encryption scheme. The scalability test shows that once the DBMS reaches a certain threshold, query response time degrades rapidly. However, further investigation is needed to build on these outcomes and to construct a secure EMR (Electronic Medical Record) more efficiently.
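The Bucket index idea can be sketched briefly: each attribute value is mapped to a coarse bucket label stored beside the ciphertext, so the untrusted server can prune a range query and the client decrypts and filters only the candidates. The sketch below is a minimal illustration under stated assumptions (hypothetical names, the `cryptography` package's Fernet in place of the project's AES configuration, fixed-width buckets and no Bloom filter), not the project's implementation:

```python
# Bucketised encrypted range query, in the style of DAS outsourcing models.
# Illustrative only: Fernet stands in for the project's AES, the bucket rule
# is a simple fixed-width partition, and the Bloom-filter index is omitted.
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()
F = Fernet(KEY)
BUCKET_WIDTH = 10                     # assumed partitioning of the value domain

def bucket(value: int) -> int:
    return value // BUCKET_WIDTH      # coarse label safe to reveal to the server

# "Server" table stores only (bucket label, ciphertext of the record).
server_rows = [(bucket(age), F.encrypt(f"patient-{i}:age={age}".encode()))
               for i, age in enumerate([23, 37, 41, 45, 58, 62])]

def range_query(lo: int, hi: int):
    wanted = set(range(bucket(lo), bucket(hi) + 1))
    candidates = [c for b, c in server_rows if b in wanted]   # server-side pruning
    out = []
    for c in candidates:                                      # client-side decrypt + filter
        rec = F.decrypt(c).decode()
        age = int(rec.split("age=")[1])
        if lo <= age <= hi:
            out.append(rec)
    return out

print(range_query(40, 60))   # -> records for ages 41, 45, 58
```

Note how the server returns a candidate for age 62 that the client then discards: coarse buckets trade some extra client-side decryption work for not revealing exact values to the server.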
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failure of such components accounts for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the use of the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of such an approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom tool chains and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the FxLMS algorithm appears to be the best choice as a controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. This paper proposes a new version of the FxLMS algorithm. In many ANC applications, an online secondary path modelling method that uses white noise as a training signal is required to ensure convergence of the system. This paper also proposes a new approach for online secondary path modelling in feedforward ANC systems. The proposed algorithm stops the injection of white noise at the optimum point and reactivates the injection during operation, if needed, to maintain the performance of the system. Benefiting from the new version of the FxLMS algorithm and avoiding continual injection of white noise makes the system more desirable and improves its noise attenuation performance. Comparative simulation results indicate the effectiveness of the proposed approach.
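As background, a minimal baseline FxLMS loop (without the paper's modifications or the online secondary-path modelling) is sketched below. The primary path, secondary path and step size are invented short FIR filters for illustration, and the secondary-path estimate is assumed perfect:

```python
# Baseline FxLMS: controller weights adapt using the reference signal
# filtered through an estimate S_hat of the secondary path S.
# All plant filters and the step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 20000
x = rng.standard_normal(N)            # reference noise signal
P = np.array([0.8, 0.4, 0.2])         # primary path (assumed)
S = np.array([0.6, 0.3])              # secondary path (assumed)
S_hat = S.copy()                      # perfect estimate, for simplicity
d = np.convolve(x, P)[:N]             # disturbance at the error microphone

L, mu = 8, 0.01
w = np.zeros(L)                       # adaptive controller (FIR)
xbuf = np.zeros(L)                    # recent reference samples, newest first
fxbuf = np.zeros(L)                   # recent filtered-reference samples
ybuf = np.zeros(len(S))               # recent controller outputs
e_hist = np.zeros(N)

for n in range(N):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
    y = w @ xbuf                      # anti-noise sent to the loudspeaker
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[n] - S @ ybuf               # residual measured at the error microphone
    fx = S_hat @ xbuf[:len(S_hat)]    # reference filtered through S_hat
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w += mu * e * fxbuf               # FxLMS weight update
    e_hist[n] = e

print("mean |e|, first 1000 samples:", np.abs(e_hist[:1000]).mean())
print("mean |e|, last 1000 samples: ", np.abs(e_hist[-1000:]).mean())
```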
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems that use a combination of computational hardware such as CPUs and GPUs are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
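The residual-only Jacobian action mentioned above is a one-line finite-difference identity, J(u)v ~ (F(u + eps*v) - F(u))/eps. A minimal sketch with a toy residual follows; the epsilon scaling rule shown is one common choice, not necessarily the one used in this work:

```python
# Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps.
# This is what lets an inexact Newton-Krylov solver run with only the residual
# function implemented; F below is a toy stand-in for the DAE residual.
import numpy as np

def F(u: np.ndarray) -> np.ndarray:
    return u**3 - 2.0 * u + 1.0       # illustrative nonlinear residual

def jac_vec(F, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return np.zeros_like(v)
    eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) / nv
    return (F(u + eps * v) - F(u)) / eps

u = np.array([0.5, 1.0, 2.0])
v = np.array([1.0, 0.0, -1.0])
print(jac_vec(F, u, v))               # finite-difference J v
print((3.0 * u**2 - 2.0) * v)         # analytic J v, for comparison
```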
Abstract:
Big Data is a rising IT trend similar to cloud computing, social networking and ubiquitous computing. Big Data can offer beneficial scenarios in the e-health arena. However, in one such scenario Big Data needs to be kept secure over a long period of time in order to deliver benefits such as finding cures for infectious diseases while protecting patient privacy. In this connection, it is beneficial to derive meaningful information from Big Data while the data is stored securely, and so the analysis of various database encryption techniques is essential. In this study, we simulated three types of technical environments, namely plain-text, Microsoft built-in encryption, and a custom Advanced Encryption Standard implementation using a Bucket index in Data-as-a-Service. The results showed that the custom AES-DaaS has a faster range query response time than MS built-in encryption. Furthermore, while carrying out the scalability test, we found that there are performance thresholds depending on physical IT resources. Therefore, for efficient Big Data management in e-health, it is worth examining scalability limits as well, even in a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
Abstract:
Dwell time at the busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimation based on limited observations can be somewhat inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and the driver's workload, and ultimately reduces bus dwell time. AFC systems can record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provide transaction timestamps; however, this information differs from dwell time because passengers may tag on or tag off at times other than when doors open and close. This research effort contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time, namely bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
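One simple way to picture the contended relationship is a calibration curve fitted between surveyed dwell times and measured transaction-time windows. The sketch below does this with synthetic numbers; it illustrates the idea only and is not the models developed in this research, which also account for bus time in queue:

```python
# Illustrative calibration: fit dwell = a + b * transaction_window on synthetic
# survey data. All numbers are invented; the research's actual models are more
# elaborate and handle queuing time separately.
import numpy as np

rng = np.random.default_rng(1)
true_a, true_b = 4.0, 0.9
window = rng.uniform(10, 60, size=200)                          # tag-on/off span per stop (s)
dwell = true_a + true_b * window + rng.normal(0, 2, size=200)   # surveyed dwell (s)

b, a = np.polyfit(window, dwell, 1)    # least-squares slope and intercept
print(f"dwell ~ {a:.1f} + {b:.2f} * transaction_window")
print("predicted dwell for a 30 s window:", round(a + b * 30.0, 1), "s")
```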
Abstract:
Enterprise resource planning (ERP) systems are rapidly being combined with “big data” analytics processes and publicly available “open data sets”, which are usually outside the arena of the enterprise, to expand activity through better service to current clients as well as identifying new opportunities. Moreover, these activities are now largely based around relevant software systems hosted in a “cloud computing” environment. The over 50-year-old phrase expressing mistrust in computer systems, namely “garbage in, garbage out” or “GIGO”, is used to describe problems of unqualified and unquestioning dependency on information systems. However, a more relevant GIGO interpretation arose sometime later, namely “garbage in, gospel out”, signifying that with large-scale information systems based around ERP and open datasets as well as “big data” analytics, particularly in a cloud environment, the ability to verify the authenticity and integrity of the data sets used may be almost impossible. In turn, this may easily result in decision making based upon questionable results that are unverifiable. Illicit “impersonation” of, and modifications to, legitimate data sets may become a reality, while at the same time the ability to audit any derived results of analysis may be an important requirement, particularly in the public sector. The pressing need for enhancement of identity, reliability, authenticity and audit services, including naming and addressing services, in this emerging environment is discussed in this paper. Some relevant technologies currently being offered are also examined. However, severe limitations in addressing the problems identified are found, and the paper proposes further necessary research work for the area. (Note: This paper is based on an earlier unpublished paper/presentation “Identity, Addressing, Authenticity and Audit Requirements for Trust in ERP, Analytics and Big/Open Data in a ‘Cloud’ Computing Environment: A Review and Proposal” presented to the Department of Accounting and IT, College of Management, National Chung Chen University, 20 November 2013.)
Abstract:
Early works on Private Information Retrieval (PIR) focused on minimizing the necessary communication overhead. They seemed to achieve this goal, but at the expense of query response time. To mitigate this weakness, protocols with secure coprocessors were introduced. They achieve optimal communication complexity and better online processing complexity. Unfortunately, all secure coprocessor-based PIR protocols require heavy periodic preprocessing. In this paper, we propose a new protocol, which is free from periodic preprocessing while offering optimal communication complexity and almost optimal online processing complexity. The proposed protocol is proven to be secure.
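For readers unfamiliar with PIR, the classic two-server XOR scheme below illustrates what PIR guarantees and where the online server-side processing cost comes from; it is background only and is not the single-server, coprocessor-based protocol proposed in this paper:

```python
# Classic 2-server information-theoretic PIR over a bit database.
# Each (non-colluding) server sees a uniformly random subset query, so neither
# learns the retrieved index i. Background illustration only; the paper's
# protocol is single-server with a secure coprocessor.
import secrets
from functools import reduce

db = [1, 0, 1, 1, 0, 0, 1, 0]          # toy database of bits
n = len(db)

def server_answer(subset: set) -> int:
    # XOR of the selected bits: O(n) work per query, the online processing cost.
    return reduce(lambda acc, j: acc ^ db[j], subset, 0)

def pir_read(i: int) -> int:
    S1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset
    S2 = S1 ^ {i}                                       # symmetric difference with {i}
    return server_answer(S1) ^ server_answer(S2)        # all bits cancel except db[i]

assert all(pir_read(i) == db[i] for i in range(n))
print("recovered bits:", [pir_read(i) for i in range(n)])
```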
Abstract:
Recently a new human authentication scheme called PAS (predicate-based authentication service) was proposed, which does not require the assistance of any supplementary device. The main security claim of PAS is to resist passive adversaries who can observe the whole authentication session between the human user and the remote server. In this paper we show that PAS is insecure against both a brute force attack and a probabilistic attack. In particular, we show that its security against brute force attacks was strongly overestimated. Furthermore, we introduce a probabilistic attack, which can break part of the password even with a very small number of observed authentication sessions. Although the proposed attack cannot completely break the password, it can downgrade the PAS system to a much weaker system similar to common OTP (one-time password) systems.
Abstract:
Unified Communication (UC) is the integration of two or more real-time communication systems into one platform. Integrating core communication systems into one overall enterprise-level system delivers more than just cost savings. These real-time interactive communication services and applications over Internet Protocol (IP) have become critical in boosting employee accessibility and efficiency, improving customer support and fostering business agility. However, some small and medium-sized businesses (SMBs) are far from implementing this solution due to the high cost of initial deployment and ongoing support. In this paper, we discuss and demonstrate an open source UC solution, viz. “Asterisk”, for use by SMBs, and report on some performance tests using SIPp. The contribution of this research is the provision of technical advice to SMBs on deploying UC in a way that is manageable in terms of cost, ease of deployment and support.
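Performance tests of the kind reported here can be driven from SIPp's built-in UAC scenario. The sketch below launches such a run from Python; the target address, call rate and call counts are illustrative, and an Asterisk server able to answer the default scenario is assumed:

```python
# Launch a basic SIPp load test against an Asterisk box using the built-in
# 'uac' scenario. The target address and load figures are illustrative.
import subprocess

TARGET = "192.0.2.10:5060"          # documentation-range IP; replace with the PBX

cmd = [
    "sipp", TARGET,
    "-sn", "uac",                   # built-in UAC (caller) scenario
    "-r", "10",                     # attempt 10 new calls per second
    "-l", "50",                     # cap simultaneous calls at 50
    "-m", "500",                    # stop after 500 calls
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout[-2000:])        # SIPp reports call rates and response times
```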
Abstract:
Substation Automation Systems have undergone many transformational changes triggered by improvements in technologies. Prior to the digital era, it made sense to confirm that the physical wiring matched the schematic design by meticulous and laborious point-to-point testing. In this way, human errors in either the design or the construction could be identified and fixed prior to entry into service. However, even though modern secondary systems today are largely computerised, we still undertake commissioning testing using the same philosophy, as if each signal were hard-wired. This is slow and tedious and does not do justice to modern computer systems and software automation. One of the major architectural advantages of the IEC 61850 standard is that it “abstracts” the definition of data and services independently of any protocol, allowing them to be mapped to any protocol that can meet the modelling and performance requirements. On this basis, any substation element can be defined using these common building blocks and made available at the design, configuration and operational stages of the system. The primary advantage of accessing data using this methodology, rather than the traditional positional method (such as DNP 3.0), is that generic tools can be created to manipulate data. Self-describing data contains the information that these tools need to manipulate different data types correctly. More importantly, self-describing data makes the interface between programs robust and flexible. This paper proposes that the improved data definitions and methods for dealing with this data within a tightly bound and compliant IEC 61850 Substation Automation System could completely change how systems are tested when compared to traditional point-to-point methods. Using the outcomes of an undergraduate thesis project, we demonstrate with some certainty that it is possible to automatically test the configuration of a protection relay by comparing the IEC 61850 configuration extracted from the relay against its SCL file for multiple relay vendors. The software tool provides a quick and automatic check that the data sets on a particular relay are correct according to its CID file, thus ensuring that no unexpected modifications are made at any stage of the commissioning process. This tool has been implemented in a Java programming environment using an open source IEC 61850 library to facilitate the server-client association with the relay.
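At its simplest, the comparison such a tool performs is: parse the CID/SCL file (plain XML in the standard SCL namespace) for the configured DataSet members, obtain the corresponding list from the relay, and diff the two. The sketch below mocks the relay-side response, whereas the thesis tool retrieves it through IEC 61850 client services in Java; the FCDA rendering is simplified:

```python
# Compare DataSet definitions in a CID (SCL) file against a relay's reported
# data sets. The relay response is mocked here; a real tool would read it via
# an IEC 61850 client library, as the thesis project does.
import xml.etree.ElementTree as ET

SCL_NS = "{http://www.iec.ch/61850/2003/SCL}"

def datasets_in(root):
    """Map DataSet name -> set of member references (simplified FCDA rendering)."""
    out = {}
    for ds in root.iter(SCL_NS + "DataSet"):
        members = set()
        for fcda in ds.findall(SCL_NS + "FCDA"):
            a = fcda.attrib
            members.add(f"{a.get('ldInst', '')}/{a.get('prefix', '')}"
                        f"{a.get('lnClass', '')}{a.get('lnInst', '')}"
                        f".{a.get('doName', '')} [{a.get('fc', '')}]")
        out[ds.get("name")] = members
    return out

CID_SAMPLE = """<SCL xmlns="http://www.iec.ch/61850/2003/SCL">
  <DataSet name="dsMeas">
    <FCDA ldInst="LD0" lnClass="MMXU" lnInst="1" doName="TotW" fc="MX"/>
  </DataSet>
</SCL>"""

expected = datasets_in(ET.fromstring(CID_SAMPLE))
relay = {"dsMeas": set()}           # mocked relay answer: data set exists but is empty
for name, members in expected.items():
    got = relay.get(name)
    if got is None:
        print(f"MISSING on relay: {name}")
    elif got != members:
        print(f"MISMATCH in {name}: absent from relay: {members - got}")
    else:
        print(f"OK: {name}")
```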