290 results for optimal trigger speed
Abstract:
Two different methods to measure binocular longitudinal corneal apex movements were applied synchronously. High-speed videokeratoscopy at a sampling frequency of 15 Hz and a custom-designed ultrasound distance sensor at 100 Hz were used for the left and the right eye, respectively. Four healthy subjects participated in the study. Simultaneously, the cardiac electrical cycle (ECG) was recorded for each subject at 100 Hz. Each measurement took 20 s. Subjects were asked to suppress blinking during the measurements. A rigid headrest and a bite-bar were used to minimize undesirable head movements. Time, frequency and time-frequency representations of the acquired signals were obtained to establish their temporal and spectral contents. Coherence analysis was used to estimate the correlation between the measured signals. The results showed close correlation between both corneal apex movements and the cardiopulmonary system. Unraveling these relationships could lead to a better understanding of the interactions between ocular biomechanics and vision. The advantages and disadvantages of the two methods in the context of measuring longitudinal movements of the corneal apex are outlined.
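A minimal sketch of the coherence-analysis step described above, assuming NumPy/SciPy. The sampling rates and measurement length follow the abstract, but the signals themselves are synthetic stand-ins for the recorded corneal apex and ECG data.

```python
import numpy as np
from scipy.signal import coherence, resample

FS_US, FS_VK, DURATION = 100, 15, 20            # Hz (ultrasound/ECG), Hz (videokeratoscopy), seconds
t_us = np.arange(0, DURATION, 1 / FS_US)

ecg = np.sin(2 * np.pi * 1.2 * t_us)             # stand-in cardiac signal (~72 bpm)
apex_right = 0.5 * ecg + 0.1 * np.random.randn(t_us.size)                        # ultrasound, right eye
t_vk = np.arange(0, DURATION, 1 / FS_VK)
apex_left = np.interp(t_vk, t_us, apex_right) + 0.1 * np.random.randn(t_vk.size)  # videokeratoscopy, left eye

# Bring the 15 Hz videokeratoscopy signal onto the common 100 Hz grid, then
# estimate the magnitude-squared coherence between apex movement and the ECG.
apex_left_100 = resample(apex_left, t_us.size)
freqs, coh = coherence(apex_left_100, ecg, fs=FS_US, nperseg=256)
print("Peak coherence %.2f at %.2f Hz" % (coh.max(), freqs[np.argmax(coh)]))
```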
Abstract:
In this paper, the placement and sizing of Distributed Generators (DGs) in distribution networks are determined optimally. The objective is to minimize the loss and to improve the reliability. The constraints are the bus voltage, the feeder current and the reactive power flowing back to the source side. The placement and size of DGs are optimized using a combination of Discrete Particle Swarm Optimization (DPSO) and a Genetic Algorithm (GA). This increases the diversity of the optimization variables in DPSO, helping the search avoid becoming trapped in local minima. To evaluate the proposed algorithm, the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System (RBTS), which is located at the secondary side of a 33/11 kV distribution substation, is used. The results illustrate the efficiency of the proposed method.
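Purely to illustrate the idea of interleaving GA operators with a discrete PSO update, a minimal sketch follows. The fitness function, swarm parameters and candidate sizes are placeholders rather than the authors' formulation; a real study would evaluate loss and reliability with a power-flow solver.

```python
import random

N_BUSES, N_DG, SIZES_KW = 37, 3, [100, 250, 500, 750, 1000]

def fitness(solution):
    # Placeholder objective: reward spreading DGs out and keeping total size near a target.
    spread = len({bus for bus, _ in solution})
    total_kw = sum(size for _, size in solution)
    return -spread * 10 + abs(total_kw - 1500) / 100.0   # lower is better

def random_solution():
    return [(random.randrange(N_BUSES), random.choice(SIZES_KW)) for _ in range(N_DG)]

def crossover(a, b):
    cut = random.randrange(1, N_DG)
    return a[:cut] + b[cut:]

def mutate(sol, rate=0.2):
    return [(random.randrange(N_BUSES), random.choice(SIZES_KW)) if random.random() < rate else gene
            for gene in sol]

swarm = [random_solution() for _ in range(30)]
personal_best = list(swarm)
global_best = min(swarm, key=fitness)

for _ in range(100):
    for i, particle in enumerate(swarm):
        # Discrete "velocity" step: pull each gene toward the personal or global best.
        new = [random.choice([g, pb, gb]) for g, pb, gb in zip(particle, personal_best[i], global_best)]
        # GA crossover/mutation injects diversity so the swarm does not collapse onto a local minimum.
        new = mutate(crossover(new, random.choice(swarm)))
        if fitness(new) < fitness(personal_best[i]):
            personal_best[i] = new
        swarm[i] = new
    global_best = min(personal_best + [global_best], key=fitness)

print("Best placement (bus, kW):", global_best, "objective:", round(fitness(global_best), 2))
```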
Abstract:
A Split System Approach (SSA) based methodology is presented to assist in making optimal Preventive Maintenance (PM) decisions for serial production lines. The methodology treats a production line as a complex series system with multiple PM actions over multiple intervals. Both risk-related cost and maintenance-related cost are factored into the methodology as either deterministic or random variables. This SSA-based methodology enables Asset Management (AM) decisions to be optimized considering a variety of factors, including failure probability, failure cost, maintenance cost, PM performance, and the type of PM strategy. The application of this new methodology and an evaluation of the effects of these factors on PM decisions are demonstrated using an example. The results of this work show that the performance of a PM strategy can be measured by its Total Expected Cost Index (TECI). The optimal PM interval depends on TECI, PM performance and the type of PM strategy; these factors are interrelated. Generally, it was found that a trade-off between reliability and the number of PM actions needs to be made so that one can minimize the Total Expected Cost (TEC) of asset maintenance.
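The trade-off between reliability and the number of PM actions can be made concrete with a small cost-versus-interval scan. The Weibull failure model, cost figures and horizon below are illustrative assumptions, not the SSA formulation itself.

```python
import math

HORIZON_H = 20_000          # planning horizon (hours)
C_PM = 2_000                # cost of one PM action (maintenance-related cost)
C_FAIL = 50_000             # cost of one unplanned failure (risk-related cost)
BETA, ETA = 2.5, 6_000      # assumed Weibull shape/scale for time to failure

def expected_failures(interval_h):
    """Approximate expected failures over the horizon if the item is renewed every interval_h hours."""
    n_intervals = HORIZON_H / interval_h
    p_fail = 1 - math.exp(-((interval_h / ETA) ** BETA))   # Weibull CDF at the PM interval
    return n_intervals * p_fail

def total_expected_cost(interval_h):
    n_pm = HORIZON_H / interval_h
    return n_pm * C_PM + expected_failures(interval_h) * C_FAIL

candidates = range(1_000, 10_001, 500)
for t in candidates:
    print(f"PM every {t:5d} h -> TEC ~ {total_expected_cost(t):10.0f}")
best = min(candidates, key=total_expected_cost)
print("Lowest total expected cost at a PM interval of", best, "hours")
```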
Abstract:
This paper reports on a study investigating the preferred driving speeds and frequency of speeding of 320 Queensland drivers. Despite growing community concern about speeding and extensive research linking it to road trauma, speeding remains a pervasive and, arguably, socially acceptable behaviour. This presents an apparent paradox regarding the mismatch between beliefs and behaviours, and highlights the need to better understand the factors contributing to speeding. Utilising self-reported behaviour and attitudinal measures, the results of this study support the notion of a speed paradox. Two thirds of participants agreed that exceeding the limit is not worth the risks and that it is not okay to exceed the posted limit. Despite this, more than half (58.4%) of the participants reported a preference to exceed the 100 km/h speed limit, with one third preferring to do so by 10 to 20 km/h. Further, mean preferred driving speeds on both urban and open roads indicate a perceived enforcement tolerance of 10%, suggesting that posted limits have limited direct influence on speed choice. Factors that significantly predicted the frequency of speeding included: exposure to role models who speed; favourable attitudes to speeding; experiences of punishment avoidance; and the perceived certainty of punishment for speeding. These findings have important policy implications, particularly relating to the use of enforcement tolerances.
Abstract:
Purpose: In 1970, Enright observed a distortion of perceived driving speed induced by monocular application of a neutral density (ND) filter. If a driver looks out of the right side of a vehicle with a filter over the right eye, the driver perceives a reduction of the vehicle's apparent velocity, while applying an ND filter over the left eye increases the vehicle's apparent velocity. The purpose of the current study was to provide the first empirical measurements of the Enright phenomenon. Methods: Ten experienced drivers were tested; each drove an automatic sedan on a closed road circuit. Filters (0.9 ND) were placed over the left, right or both eyes during a driving run, in addition to a control condition with no filters in place. Subjects were asked to look out of the right side of the car and adjust their driving speed to either 40 km/h or 60 km/h. Results: Without a filter, or with both eyes filtered, subjects showed good estimation of speed when asked to travel at 60 km/h but travelled a mean of 12 to 14 km/h faster than the requested 40 km/h. Subjects travelled faster than these baselines by a mean of 7 to 9 km/h (p < 0.001) with the filter over their right eye, and 3 to 5 km/h slower with the filter over their left eye (p < 0.05). Conclusions: The Enright phenomenon causes significant and measurable distortions of perceived driving speed under real-world driving conditions.
Abstract:
This research quantitatively examines the determinants of board size and its consequences for the performance of large companies in Australia. In line with international and prevailing United States research, the results suggest that there is no significant relationship between board size and subsequent performance. In examining whether more complex operations require larger boards, it was found that larger firms, or firms with more lines of business, tended to have more directors. Data analysis from the research supports the proposition that blockholders can affect management practices and that they enhance performance as measured by shareholder return.
Abstract:
Decisions made in the earliest stage of architectural design have the greatest impact on the construction, lifecycle cost and environmental footprint of buildings. Yet the building services, one of the largest contributors to cost, complexity, and environmental impact, are rarely considered as an influence on the design at this crucial stage. In order for efficient and environmentally sensitive built environment outcomes to be achieved, a closer collaboration between architects and services engineers is required at the outset of projects. However, in practice, there are a variety of obstacles impeding this transition towards an integrated design approach. This paper firstly presents a critical review of the existing barriers to multidisciplinary design. It then examines current examples of best practice in the building industry to highlight the collaborative strategies being employed and their benefits to the design process. Finally, it discusses a case study project to identify directions for further research.
Abstract:
In this paper, the optimal allocation and sizing of distributed generators (DGs) in a distribution system are studied. To achieve this goal, an optimization problem is solved whose main objective is to minimize the cost of the DGs and to maximize reliability simultaneously. The active power balance between loads and DGs during the isolation time is used as a constraint. Another point considered in this process is load shedding: if the sum of the DGs' active power in a zone isolated by the sectionalizers because of a fault is less than the total active power of the loads located in that zone, the program sheds the loads one by one, following a priority rule, until the active power balance is satisfied. This assumption decreases the reliability index SAIDI compared with the case in which all loads in a zone are shed whenever the total DG power is less than the total load power. To validate the proposed method, a 17-bus distribution system is employed and the results are analysed.
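A minimal sketch of the priority-based, one-by-one load shedding rule described above. The zone loads, priorities and DG capacity are invented for illustration; the paper's actual priority rule and network data are not given in the abstract.

```python
def shed_loads(dg_capacity_kw, loads):
    """Shed lowest-priority loads one by one until DG capacity covers the remaining demand.

    `loads` is a list of (name, demand_kw, priority); higher priority means shed last.
    Returns the loads kept in service and the loads shed.
    """
    served = sorted(loads, key=lambda l: l[2])      # lowest priority first
    shed = []
    while served and sum(d for _, d, _ in served) > dg_capacity_kw:
        shed.append(served.pop(0))                   # drop the lowest-priority load next
    return served, shed

zone_loads = [("residential_A", 400, 1), ("residential_B", 350, 1),
              ("commercial", 600, 2), ("hospital", 500, 3)]
kept, dropped = shed_loads(dg_capacity_kw=1200, loads=zone_loads)
print("Served:", [name for name, _, _ in kept])
print("Shed:  ", [name for name, _, _ in dropped])
```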
Abstract:
This paper presents advanced optimization techniques for Mission Path Planning (MPP) of a UAS fitted with a spore trap to detect and monitor spores and plant pathogens. The UAV MPP aims to optimise the path planning, search and monitoring of spores and plant pathogens, which may allow the agricultural sector to be more competitive and more reliable. The UAV will be fitted with an air sampler or spore trap to detect and monitor spores and plant pathogens in remote areas not accessible to current stationary monitoring methods. The optimal paths are computed using Multi-Objective Evolutionary Algorithms (MOEAs). Two types of multi-objective optimisers are compared: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and a Hybrid Game approach are implemented to produce a set of optimal collision-free trajectories in a three-dimensional environment. The trajectories over three-dimensional terrain, generated off-line, are collision-free and are represented using Bézier spline curves from the start position to the target, and then from the target back to the start position (or a different position), subject to altitude constraints. The efficiency of the two optimization methods is compared in terms of computational cost and design quality. Numerical results show the benefits of coupling a Hybrid-Game strategy to a MOEA for MPP tasks. The reduction in numerical cost is an important point, as the faster the algorithm converges, the better suited it is for off-line design and for future on-line decisions of the UAV.
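A small sketch of representing a 3-D trajectory as a Bézier curve and checking an altitude constraint, assuming NumPy. The waypoints, control points and clearance value are illustrative and not taken from the study's environment.

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bézier curve defined by (k+1) 3-D control points."""
    pts = np.asarray(control_points, dtype=float)
    k = len(pts) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Bernstein-polynomial weighted sum of the control points.
    return sum(comb(k, i) * (1 - t) ** (k - i) * t ** i * pts[i] for i in range(k + 1))

start, target = [0, 0, 120], [800, 600, 150]
controls = [start, [200, 100, 220], [600, 500, 220], target]   # intermediate points shape the path
path = bezier(controls)

MIN_ALTITUDE = 100.0   # hypothetical terrain-clearance constraint
print("Path satisfies altitude constraint:", bool(np.all(path[:, 2] >= MIN_ALTITUDE)))
```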
Abstract:
Web service composition is an important problem in web service based systems. It concerns how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must also be selected; this is the so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the composition; this is the so-called conflict constraint. Thus, optimal web service selection is a typical constrained combinatorial optimization problem from a computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results show that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
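To make the dependency and conflict constraints concrete, here is a sketch of how a candidate selection can be checked and scored. The services, QoS values and constraints are invented, and simple random sampling stands in for the paper's hybrid GA search loop.

```python
import random

# QoS score of each available implementation per abstract service (higher is better).
QOS = {"payment": [0.9, 0.7, 0.8], "shipping": [0.6, 0.95], "notify": [0.8, 0.85, 0.7]}

# Dependency: choosing payment impl 0 forces shipping impl 1.
DEPENDENCIES = [(("payment", 0), ("shipping", 1))]
# Conflict: payment impl 2 cannot be combined with notify impl 0.
CONFLICTS = [(("payment", 2), ("notify", 0))]

def feasible(selection):
    for (svc_a, impl_a), (svc_b, impl_b) in DEPENDENCIES:
        if selection[svc_a] == impl_a and selection[svc_b] != impl_b:
            return False
    for (svc_a, impl_a), (svc_b, impl_b) in CONFLICTS:
        if selection[svc_a] == impl_a and selection[svc_b] == impl_b:
            return False
    return True

def fitness(selection):
    score = sum(QOS[svc][impl] for svc, impl in selection.items())
    return score if feasible(selection) else score - 10.0   # heavy penalty for violated constraints

def random_selection():
    return {svc: random.randrange(len(impls)) for svc, impls in QOS.items()}

best = max((random_selection() for _ in range(500)), key=fitness)
print("Best feasible selection:", best, "score:", round(fitness(best), 2))
```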
Abstract:
TCP is the dominant protocol for reliable communication over the Internet. It provides flow, congestion and error control mechanisms designed for wired, reliable networks. Its congestion control mechanism is not suitable for wireless links, where data corruption and loss rates are higher. The physical links are transparent to TCP, which attributes all packet losses to congestion and responds by reducing the transmission rate. This wastes the already limited bandwidth available on wireless links. There is therefore little point in researching ways to increase the bandwidth of wireless links while the available bandwidth is not optimally utilized. This paper proposes a hybrid scheme called TCP Detection and Recovery (TCP-DR) that distinguishes between congestion-, corruption- and mobility-related losses and then instructs the sending host to take the appropriate action. As a result, link utilization remains near-optimal when losses are due to a high bit error rate or mobility.
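The abstract does not specify TCP-DR's detection rules, so the sketch below only illustrates the general idea of classifying a loss and choosing a sender action; the link-layer signals, thresholds and actions are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LinkState:
    bit_error_rate: float       # estimated wireless BER
    handoff_in_progress: bool   # mobility indicator from the link layer
    queue_delay_rising: bool    # congestion indicator (e.g. growing RTT)

def classify_loss(state: LinkState) -> str:
    if state.handoff_in_progress:
        return "mobility"
    if state.bit_error_rate > 1e-5:
        return "corruption"
    return "congestion"          # conservative default: assume congestion

def sender_action(loss_type: str) -> str:
    # Only congestion losses should trigger the usual window reduction.
    return {"congestion": "reduce congestion window",
            "corruption": "retransmit without reducing the window",
            "mobility": "freeze timers and retransmit after reconnection"}[loss_type]

state = LinkState(bit_error_rate=3e-5, handoff_in_progress=False, queue_delay_rising=False)
loss = classify_loss(state)
print(loss, "->", sender_action(loss))
```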
Abstract:
The analysis of investment in the electric power sector has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans that last a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models that address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences that exist in energy demand and that are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form, which results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and discussed. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems, and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
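A toy sketch of the master/subproblem structure described above: integer commitment decisions are proposed by a "master" step and a continuous dispatch subproblem is evaluated for each trial configuration. The thesis uses a mixed integer master with duals recovered from a nonlinear subproblem and converging bounds; here both levels are reduced to an enumerative merit-order toy with invented unit data, purely to show the interaction.

```python
from itertools import product

# (capacity MW, fixed cost if committed, variable cost $/MWh) for three hypothetical units.
UNITS = [(200, 1_000, 30), (150, 600, 45), (100, 300, 60)]
DEMAND = 320   # MW

def dispatch_cost(committed):
    """Continuous subproblem: serve demand from the committed units in merit order."""
    remaining, cost = DEMAND, 0.0
    for on, (cap, _, var) in sorted(zip(committed, UNITS), key=lambda x: x[1][2]):
        if not on or remaining <= 0:
            continue
        used = min(cap, remaining)
        cost += used * var
        remaining -= used
    return cost if remaining <= 0 else float("inf")   # infeasible if demand is unmet

best_cfg, best_cost = None, float("inf")
for committed in product([0, 1], repeat=len(UNITS)):   # "master": propose trial configurations
    fixed = sum(f for on, (_, f, _) in zip(committed, UNITS) if on)
    total = fixed + dispatch_cost(committed)            # subproblem value for this trial
    if total < best_cost:
        best_cfg, best_cost = committed, total

print("Best commitment:", best_cfg, "total cost:", best_cost)
```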