963 results for dynamic methods


Relevance:

30.00%

Publisher:

Abstract:

Genetic algorithms are commonly used to solve combinatorial optimization problems. The implementation evolves using genetic operators (crossover, mutation, selection, etc.). However, genetic algorithms, like some other methods, have parameters (population size, probabilities of crossover and mutation) that need to be tuned or chosen. In this paper, our project is based on an existing hybrid genetic algorithm for the multiprocessor scheduling problem. We propose a hybrid Fuzzy-Genetic Algorithm (FLGA) approach to solve the multiprocessor scheduling problem. The algorithm consists of adding a fuzzy logic controller that dynamically controls and tunes different parameters (the probabilities of crossover and mutation) in an attempt to improve the algorithm's performance. For this purpose, we design a fuzzy logic controller based on fuzzy rules to control the probabilities of crossover and mutation. Compared with the Standard Genetic Algorithm (SGA), the results clearly demonstrate that the FLGA method performs significantly better.
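The abstract describes a fuzzy logic controller that adapts the crossover and mutation probabilities while the GA runs. The sketch below illustrates the general idea with a zero-order Sugeno-style rule base; the inputs (population diversity and recent fitness improvement), the rule consequents, and the membership functions are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: the paper's actual fuzzy rule base and inputs are not
# given here, so the inputs (population diversity, fitness improvement) and the
# rule consequents below are assumptions chosen for demonstration.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_tune(diversity, improvement):
    """Return (crossover_prob, mutation_prob) from two normalized inputs in [0, 1].

    Zero-order Sugeno inference: each rule fires with the product of its input
    memberships and votes for fixed output values; outputs are weighted averages.
    """
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x,  0.0, 0.5, 1.0)
    high = lambda x: tri(x,  0.5, 1.0, 1.5)

    # (diversity term, improvement term, crossover consequent, mutation consequent)
    rules = [
        (low,  low,  0.95, 0.20),  # converged and stagnating: explore hard
        (low,  high, 0.90, 0.10),
        (med,  low,  0.85, 0.05),
        (med,  high, 0.80, 0.02),
        (high, low,  0.70, 0.02),
        (high, high, 0.60, 0.01),  # diverse and improving: exploit
    ]
    w = [d(diversity) * i(improvement) for d, i, _, _ in rules]
    s = sum(w) or 1.0
    pc = sum(wi * r[2] for wi, r in zip(w, rules)) / s
    pm = sum(wi * r[3] for wi, r in zip(w, rules)) / s
    return pc, pm

# Example: low diversity and little recent improvement -> raise mutation.
print(fuzzy_tune(diversity=0.15, improvement=0.1))
```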

Relevance:

30.00%

Publisher:

Abstract:

Objective: For the evaluation of the energetic performance of combined renewable heating systems that supply space heat and domestic hot water for single family houses, dynamic behaviour, component interactions, and control of the system play a crucial role and should be included in test methods. Methods: New dynamic whole system test methods were developed based on “hardware in the loop” concepts. Three similar approaches are described and their differences are discussed. The methods were applied for testing solar thermal systems in combination with fossil fuel boilers (heating oil and natural gas), biomass boilers, and/or heat pumps. Results: All three methods were able to show the performance of combined heating systems under transient operating conditions. The methods often detected unexpected behaviour of the tested system that cannot be detected based on steady state performance tests that are usually applied to single components. Conclusion: Further work will be needed to harmonize the different test methods in order to reach comparable results between the different laboratories. Practice implications: A harmonized approach for whole system tests may lead to new test standards and improve the accuracy of performance prediction as well as reduce the need for field tests.
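To illustrate the "hardware in the loop" concept mentioned above, the sketch below shows a rig-side loop that simulates a simple one-node building load and emulates the return conditions seen by the unit under test. The rig interface functions, the building parameters, and the time step are hypothetical and do not represent the institutes' actual test software.

```python
# Minimal sketch of the "hardware in the loop" idea: the test-rig software
# simulates the building load and emulates the return conditions that the real
# unit under test (UUT) sees. The rig I/O functions and the one-node building
# model below are hypothetical placeholders, not the actual test-method software.

C_BUILDING = 40e6        # J/K, assumed lumped thermal capacitance of the house
UA = 150.0               # W/K, assumed heat-loss coefficient
T_SET = 20.0             # degC, room set point
CP_WATER = 4186.0        # J/(kg K)
DT = 60.0                # s, simulation/emulation time step

def read_uut_supply():   # hypothetical rig interface: flow (kg/s), supply temp (degC)
    return 0.12, 45.0

def command_return_temp(t_return):  # hypothetical rig interface
    print(f"emulate return temperature: {t_return:.1f} degC")

t_room, t_amb = 20.0, -2.0           # initial room and outdoor temperatures
for _ in range(3):                   # a real test cycle runs for six or twelve days
    m_dot, t_supply = read_uut_supply()
    q_emit = min(m_dot * CP_WATER * (t_supply - t_room),      # heat delivered by UUT
                 max(0.0, UA * (T_SET - t_amb) * 1.5))        # emitter limit (assumed)
    q_loss = UA * (t_room - t_amb)
    t_room += (q_emit - q_loss) * DT / C_BUILDING             # one-node building model
    t_return = t_supply - q_emit / (m_dot * CP_WATER + 1e-9)
    command_return_temp(t_return)
```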

Relevance:

30.00%

Publisher:

Abstract:

Dynamic system test methods for heating systems were developed and applied by the institutes SERC and SP from Sweden, INES from France and SPF from Switzerland already before the MacSheep project started. These test methods followed the same principle: a complete heating system – including heat generators, storage, control etc. – is installed on the test rig; the test rig software and hardware simulates and emulates the heat load for space heating and domestic hot water of a single family house, while the unit under test has to act autonomously to cover the heat demand during a representative test cycle.

Within work package 2 of the MacSheep project these similar – but different – test methods were harmonized and improved. The work undertaken includes:

• Harmonization of the physical boundaries of the unit under test.
• Harmonization of the boundary conditions of climate and load.
• Definition of an approach to reach an identical space heat load in combination with autonomous control of the space heat distribution by the unit under test.
• Derivation and validation of new six-day and twelve-day test profiles for direct extrapolation of test results.

The new harmonized test method combines the advantages of the different methods that existed before the MacSheep project. The new method is a benchmark test, which means that the load for space heating and domestic hot water preparation is identical for all tested systems, and that the result is representative of the performance of the system over a whole year. Thus, no modelling and simulation of the tested system is needed in order to obtain the benchmark results for a yearly cycle, and the method is also applicable to products for which simulation models are not yet available. Some of the advantages of the new whole system test method and performance rating compared to the testing and energy rating of single components are:

• Interactions between the different components of a heating system, e.g. storage, solar collector circuit, heat pump, control, etc., are included and evaluated in this test.
• Dynamic effects are included and influence the result just as they influence the annual performance in the field.
• Heat losses influence the results in a more realistic way, since they are evaluated under "real installed" and representative part-load conditions rather than under single-component steady-state conditions.

The described method is also suited for the development process of new systems, where it replaces time-consuming and costly field testing, with the advantages of higher accuracy of the measured data (compared to the measurement equipment typically used in field tests) and identical, thus comparable, boundary conditions. The method can therefore be used for system optimization on the test bench under realistic operating conditions, i.e. under a relevant operating environment in the lab.

This report describes the physical boundaries of the tested systems, as well as the test procedures and the requirements for both the unit under test and the test facility. The new six-day and twelve-day test profiles are also described, as are the validation results.
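As a rough illustration of what a "direct extrapolation" from a benchmark cycle to annual figures could look like, the sketch below scales hypothetical twelve-day test-cycle totals to a year under the assumption that the cycle days already represent the year. The numbers and the simple 365/12 scaling are invented for illustration and do not reproduce the MacSheep procedure.

```python
# Illustrative only: the measured values and the assumption that the test-cycle
# days already represent the year (so a plain 365/12 scaling applies) are
# invented for this sketch, not taken from the harmonized test method.

DAYS_IN_CYCLE, DAYS_IN_YEAR = 12, 365

measured = {                      # hypothetical 12-day test-cycle totals
    "heat_delivered_kWh": 210.0,  # space heat + domestic hot water
    "electricity_used_kWh": 62.0, # heat pump, pumps, controller
    "fuel_used_kWh": 55.0,        # auxiliary boiler
}

scale = DAYS_IN_YEAR / DAYS_IN_CYCLE
annual = {k: round(v * scale, 1) for k, v in measured.items()}
spf = annual["heat_delivered_kWh"] / (annual["electricity_used_kWh"] + annual["fuel_used_kWh"])
print(annual, f"overall performance factor ~ {spf:.2f}")
```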

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a scalable and fault-tolerant job scheduling framework for grid computing. The proposed framework loosely couples a dynamic job scheduling approach with a hybrid replication approach to schedule jobs efficiently while at the same time providing fault tolerance. The novelty of the proposed framework is that it uses a passive replication approach under high system load and an active replication approach under low system load. The switch between these two replication methods is done dynamically and transparently.
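A minimal sketch of the load-dependent switch between replication modes described above, assuming a utilization threshold and a simple scheduler interface that are not specified in the paper:

```python
# Sketch of the load-dependent switch: passive replication (primary plus a backup
# that only takes over on failure) under high load, active replication (all
# replicas execute the job) under low load. The threshold, load metric, and
# scheduler interface are assumptions for illustration.

from dataclasses import dataclass

HIGH_LOAD = 0.75   # assumed utilization threshold for switching modes

@dataclass
class Job:
    job_id: str
    replicas: int = 2

def schedule(job: Job, system_load: float, nodes: list) -> dict:
    """Pick a replication mode from the current load and assign nodes."""
    if system_load >= HIGH_LOAD:
        # Passive replication: run once, keep state on a backup node.
        return {"mode": "passive",
                "primary": nodes[0],
                "backup": nodes[1 % len(nodes)]}
    # Active replication: run the same job on several nodes in parallel.
    return {"mode": "active",
            "workers": nodes[:job.replicas]}

print(schedule(Job("j-17"), system_load=0.9, nodes=["n1", "n2", "n3"]))
print(schedule(Job("j-18"), system_load=0.3, nodes=["n1", "n2", "n3"]))
```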

Relevance:

30.00%

Publisher:

Abstract:

Real vehicle collision experiments on full-scale road safety barriers are important for determining the outcome of a vehicle-versus-barrier impact accident. However, such experiments require a large investment of time and money. Numerical simulation has therefore become an important alternative method for testing concrete barriers. In this research, spring subgrade models were first developed to simulate the ground boundary of concrete barriers. Both heavy trucks and concrete barriers were modeled using finite element methods (FEM) to simulate dynamic collision performance. Comparison of the results generated from computer simulations and on-site full-scale experiments demonstrated that the developed models could be applied to simulate the collision of heavy trucks with concrete barriers, providing data to design new road safety barriers and analyze existing ones.
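The "spring subgrade model" of the ground boundary can be pictured as a bed of independent springs under the barrier footing. The sketch below computes a per-node spring stiffness from an assumed modulus of subgrade reaction and tributary area; the values and the compression-only rule are illustrative assumptions, not the data used in the paper's FEM models.

```python
# Hedged sketch of a spring subgrade (Winkler-type) ground boundary: the soil
# under the barrier footing is replaced by independent vertical springs whose
# stiffness is the subgrade reaction modulus times the tributary area of each
# node. All numbers are invented for illustration.

subgrade_modulus = 5.0e7   # N/m^3, assumed modulus of subgrade reaction
footing_width = 0.6        # m
node_spacing = 0.1         # m along the barrier footing
n_nodes = 41               # nodes along a 4 m barrier segment

tributary_area = footing_width * node_spacing      # m^2 per interior node
k_node = subgrade_modulus * tributary_area         # N/m spring stiffness per node

def ground_reaction(settlement_m: float) -> float:
    """Compression-only spring: no tension when the footing lifts off."""
    return k_node * settlement_m if settlement_m > 0.0 else 0.0

print(f"per-node stiffness: {k_node:.3e} N/m, total: {n_nodes * k_node:.3e} N/m")
print(f"reaction at 2 mm settlement: {ground_reaction(0.002):.1f} N per node")
```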

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a novel feature reduction approach that groups words hierarchically into clusters which can then be used as new features for document classification. Initially, each word constitutes a cluster. We calculate the mutual confidence between any two different words. The pair of clusters containing the two words with the highest mutual confidence is combined into a new cluster. This process of merging is iterated until all the mutual confidences between unprocessed pairs of words are smaller than a predefined threshold or only one cluster remains. In this way, a hierarchy of word clusters is obtained. The user can decide which clusters, from a certain level, to use as new features for document classification. Experimental results have shown that our method can perform better than other methods.
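A sketch of the greedy agglomeration described above. Since the abstract does not define "mutual confidence", a symmetric co-occurrence score over toy documents stands in for it here, purely for illustration.

```python
# Sketch of the hierarchical word clustering: start with one cluster per word,
# repeatedly merge the two clusters containing the most "mutually confident"
# word pair, and stop at a threshold. The confidence score and toy documents
# below are assumptions for illustration only.

from itertools import combinations

docs = [{"price", "stock", "market"}, {"stock", "market", "trade"},
        {"goal", "match", "team"}, {"team", "match", "league"}]

def mutual_confidence(w1, w2):
    both = sum(1 for d in docs if w1 in d and w2 in d)
    either = sum(1 for d in docs if w1 in d or w2 in d)
    return both / either if either else 0.0

words = sorted(set().union(*docs))
clusters = [{w} for w in words]          # initially one cluster per word
THRESHOLD = 0.3                          # assumed stopping threshold
hierarchy = []                           # record of merges (the cluster tree)

while len(clusters) > 1:
    # Find the pair of clusters whose best word pair has the highest confidence.
    best, best_pair = -1.0, None
    for (i, a), (j, b) in combinations(enumerate(clusters), 2):
        score = max(mutual_confidence(x, y) for x in a for y in b)
        if score > best:
            best, best_pair = score, (i, j)
    if best < THRESHOLD:
        break
    i, j = best_pair
    hierarchy.append((clusters[i], clusters[j], best))
    merged = clusters[i] | clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

print(len(hierarchy), "merges recorded")
print(clusters)       # clusters at the chosen level, usable as new features
```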

Relevance:

30.00%

Publisher:

Abstract:

Introduction: A nonlinear dynamic systems model has previously been proposed to explain pacing strategies employed during exercise.

Purpose: This study was conducted to examine the pacing strategies used under varying conditions during the cycle phase of an Ironman triathlon.

Methods: The bicycles of six well-trained male triathletes were equipped with SRM power meters set to record power output, cadence, speed, and heart rate. The flat, three-lap, out-and-back cycle course, coupled with relatively consistent wind conditions (17–30 km·h⁻¹), enabled comparisons to be made between three consecutive 60-km laps and relative wind direction (headwind vs tailwind).

Results: Participants finished the cycle phase (180 km) with consistently fast performance times (5 h, 11 ± 2 min; top 10% of all finishers). Average power output (239 ± 25 to 203 ± 20 W), cadence (89 ± 6 to 82 ± 8 rpm), and speed (36.5 ± 0.8 to 33.1 ± 0.8 km·h⁻¹) all significantly decreased with increasing number of laps (P < 0.05). These variables, however, were not significantly different between headwind and tailwind sections. The deviation (SD) in power output and cadence did not change with increasing number of laps; however, the deviations in torque (6.8 ± 1.6 and 5.8 ± 1.3 N·m) and speed (2.1 ± 0.5 and 1.6 ± 0.3 km·h⁻¹) were significantly greater under headwind compared with tailwind conditions, respectively. The median power frequency tended to be lower in headwind (0.0480 ± 0.0083) compared with tailwind (0.0531 ± 0.0101) sections.

Conclusion:
These data show evidence that a nonlinear dynamic pacing strategy is used by well-trained triathletes throughout various segments and conditions of the Ironman cycle phase. Moreover, an increased variation in torque and speed was found in the headwind versus the tailwind condition.
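As an illustration of the kind of signal analysis behind the deviation and median power frequency figures above, the sketch below computes the standard deviation of a synthetic power-output trace and the frequency that splits its power spectrum into two equal halves. The sampling rate and the synthetic signal are assumptions; the study's exact spectral method is not given in the abstract.

```python
# Sketch: variability (SD) and median power frequency of a power-output signal.
# The 1 Hz SRM sampling and the synthetic trace are assumptions for illustration.

import numpy as np

fs = 1.0                                   # Hz, assumed 1-s sampling
t = np.arange(0, 3600, 1 / fs)             # one hour of riding
power = 230 + 15 * np.sin(2 * np.pi * 0.05 * t) + np.random.normal(0, 8, t.size)

variability = power.std(ddof=1)            # the "deviation (SD)" in the abstract

spectrum = np.abs(np.fft.rfft(power - power.mean())) ** 2
freqs = np.fft.rfftfreq(power.size, d=1 / fs)
cumulative = np.cumsum(spectrum)
median_power_freq = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

print(f"SD of power: {variability:.1f} W, median power frequency: {median_power_freq:.4f} Hz")
```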

Relevance:

30.00%

Publisher:

Abstract:

This thesis provides a unified and comprehensive treatment of fuzzy neural networks as intelligent controllers. The work has been motivated by the need to develop solid control methodologies capable of coping with the complexity, nonlinearity, interactions, and time variance of the processes under control. In addition, the dynamic behaviour of such processes is strongly influenced by disturbances and noise, and the processes are characterized by a large degree of uncertainty. It is therefore important to integrate an intelligent component that increases the control system's ability to extract functional relationships from the process and to adapt those relationships to improve control precision, that is, to display learning and reasoning abilities. The objective of this thesis was to develop a self-organizing learning controller for such processes using a combination of fuzzy logic and neural networks.

To fulfill this objective, an online, direct fuzzy neural controller was developed that uses process input-output measurement data and a reference model, with both structural and parameter tuning. A number of practical issues were considered, including the dynamic construction of the controller to alleviate the bias/variance dilemma, the universal approximation property, and the requirements of locality and linearity in the parameters. Several important issues in intelligent control were also considered, such as the overall control scheme, the persistency-of-excitation requirement, and the bounded learning rates of the controller needed for overall closed-loop stability. Other issues addressed include the dependence of the generalization ability and the optimization methods on the data distribution, and the requirements for online learning and for the feedback structure of the controller. Fuzzy-inference-specific issues, such as the influence of the choice of defuzzification method, T-norm operator, and membership function on the overall performance of the controller, are also discussed, as are the ε-completeness requirement and the use of a fuzzy similarity measure.

The main emphasis of the thesis is on applications to real-world problems such as industrial process control. The applicability of the proposed method is demonstrated through empirical studies on several real-world control problems of industrial complexity, including temperature and number-average molecular weight control in a continuous stirred tank polymerization reactor, and torsional vibration, eccentricity, hardness, and thickness control in cold rolling mills. Compared with traditional linear controllers and a dynamically constructed neural network, the proposed fuzzy neural controller shows the greatest promise as an effective approach to such nonlinear multivariable control problems in which disturbances and noise strongly influence the dynamic process behaviour.

In addition, the applicability of the proposed method beyond the strictly control area is investigated, in particular for data mining and knowledge elicitation. When compared with a decision tree method and a pruned neural network method for data mining, the proposed fuzzy neural network achieves comparable accuracy with a more compact set of rules. Its performance is also much better than that of the decision tree method for classes with low occurrence in the data set, so the proposed fuzzy neural network may be very useful in situations where the important information is contained in a small fraction of the available data.
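A heavily simplified sketch of an online, direct adaptive controller built from normalized Gaussian fuzzy basis functions, with consequent weights tuned against a reference model. The plant, reference model, rule grid, and learning rate are invented for illustration; this is not the thesis's controller.

```python
# Sketch of a direct fuzzy neural controller of the general kind described above:
# Gaussian membership (basis) functions over the tracking error and its rate,
# with linear consequent weights adapted online against a reference model.
# Everything numeric here is an assumption for demonstration.

import numpy as np

centers = np.array([(e, de) for e in (-1.0, 0.0, 1.0) for de in (-1.0, 0.0, 1.0)])
sigma, eta = 0.7, 0.05          # assumed membership width and learning rate
w = np.zeros(len(centers))      # consequent weights (controller parameters)

def basis(e, de):
    """Normalized Gaussian fuzzy basis functions (product inference)."""
    phi = np.exp(-np.sum((np.array([e, de]) - centers) ** 2, axis=1) / (2 * sigma**2))
    return phi / phi.sum()

y = y_prev = 0.0
for k in range(500):
    r = 1.0                                    # set point
    ym = 1.0 - np.exp(-0.05 * k)               # reference-model trajectory (assumed)
    e, de = r - y, -(y - y_prev)
    phi = basis(e, de)
    u = float(w @ phi)                         # controller output from the fuzzy rules
    y_prev, y = y, 0.9 * y + 0.1 * u           # toy first-order plant (assumed)
    w += eta * (ym - y) * phi                  # parameter tuning from the model error

print(f"final output {y:.3f} vs reference {ym:.3f}")
```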

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a method for conducting dynamic due diligence to evaluate Mergers and Acquisitions; demonstrates its effectiveness in a particular case; and extrapolates its theoretical and practical implications to the general case. It may be called the ‘ECIPP’ method - an acronym for: Establishing mandates; Creating projections; Identifying issues; Prioritizing procedures and Performing them.

Two established alternative due diligence methods are examined. The prevailing finance-theory-based procedure has the virtues of simplicity and elegance; its vice is abstraction. The prevailing practitioner-based regime has the virtues of thoroughness and concreteness but the vices of rigidity and inefficiency. Resolving the tradeoffs inherent in both static prescriptions provides an opportunity for a dynamic, innovative approach derived from grounded theory and from Hindle's (1993) theory of venture renaissance, applied through an enhanced paradigm of Entrepreneurial Business Planning. The ECIPP method retains simplicity, concreteness and thoroughness but eliminates abstraction, rigidity and inefficiency.

This is demonstrated in a case. ChildCo's CEO had only one month to complete his M&A evaluation, no expertise or previous experience, and a severely limited budget for the exercise, and he had been flatly informed by prevailing M&A experts that what he wanted could not be done. Using the ECIPP method, the CEO and the author did it: on time, within budget, and to the satisfaction of a previously skeptical board of one of the world's largest multi-national companies, including arguably the world's most professional corporate M&A division.

The replicability logic of the case research permits two generalisations. (1) ECIPP extends the range and utility of Entrepreneurial Business Planning as a management technology, well beyond the constraints to which it is usually confined. (2) The ECIPP method of dynamic due diligence is an innovation worthy of mature consideration and further investigation by theorists and practitioners in the M&A field, in the disciplines of both Finance and Entrepreneurship and, well beyond, in the realms of general management theory, methodology and practice.

Relevance:

30.00%

Publisher:

Abstract:

Data-based modeling of haptic interaction simulation is a growing trend in research. These techniques offer a quick alternative to parametric modeling of the simulation. So far, data-based techniques have mostly been applied to static simulations. This paper introduces how to use data-based models in dynamic simulations. This ensures realistic behavior and produces results that are very close to those of parametric modeling. The results show that a quick and accurate response can be achieved using the proposed methods.
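A sketch of the data-based idea: the force rendered inside the dynamic loop is interpolated from previously recorded samples rather than computed from a parametric contact model. The recorded data and the inverse-distance interpolation are assumptions for illustration, not the paper's method.

```python
# Sketch only: force is looked up from recorded (penetration, velocity, force)
# samples at each step of the dynamic simulation. The data, the k-nearest
# interpolation, and the velocity weighting are invented for illustration.

import numpy as np

# Hypothetical recordings: penetration depth (mm), velocity (mm/s), force (N)
recorded = np.array([[0.0, 0.0, 0.0], [0.5, 2.0, 0.6], [1.0, 0.0, 1.1],
                     [1.5, -3.0, 1.4], [2.0, 1.0, 2.3], [2.5, 0.0, 2.9]])

def force_from_data(depth, velocity, k=3):
    """Inverse-distance weighted force from the k nearest recorded samples."""
    d = np.hypot(recorded[:, 0] - depth, (recorded[:, 1] - velocity) * 0.1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    return float(np.sum(w * recorded[nearest, 2]) / np.sum(w))

# Inside a dynamic simulation loop the data-based model is simply queried each step:
for depth, velocity in [(0.3, 1.0), (1.2, -2.0), (2.2, 0.5)]:
    print(f"depth {depth:.1f} mm, vel {velocity:+.1f} mm/s -> force {force_from_data(depth, velocity):.2f} N")
```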

Relevance:

30.00%

Publisher:

Abstract:

Surveillance applications in private environments such as smart houses require a privacy management policy if such systems are to be accepted by the occupants of the environment. This is due to the invasive nature of surveillance, and the private nature of the home. In this article, we propose a framework for dynamically altering the privacy policy applied to the monitoring of a smart house based on the situation within the environment. Initially the situation, or context, within the environment is determined; we identify several factors for determining environmental context, and propose methods to quantify the context using audio and binary sensor data. The context is then mapped to an appropriate privacy policy, which is implemented by applying data hiding techniques to control access to data gathered from various information sources. The significance of this work lies in the examination of privacy issues related to assisted-living smart house environments. A single privacy policy in such applications would be either too restrictive for an observer, for example, a carer, or too invasive for the occupants. We address this by proposing a dynamic method, with the aim of decreasing the invasiveness of the technology, while retaining the purpose of the system.
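A sketch of the dynamic-policy idea: quantify the context from audio and binary sensor data, then map it to a privacy policy that controls which data an observer may access. The context rules, policy levels, and thresholds below are assumptions; the article's actual factors and mapping are richer.

```python
# Sketch of context-driven privacy control: coarse context from simple sensor
# observations, mapped to the data an observer (e.g. a carer) may access.
# All rules, levels, and thresholds are invented for illustration.

def infer_context(audio_level_db: float, motion_sensors_active: int,
                  door_open: bool) -> str:
    """Very coarse context from audio and binary sensor data (assumed rules)."""
    if audio_level_db > 80 or (motion_sensors_active == 0 and door_open):
        return "possible emergency"
    if motion_sensors_active > 0:
        return "normal activity"
    return "unoccupied"

# Context -> accessible data; everything else is hidden by data-hiding techniques.
POLICY = {
    "possible emergency": {"live_video", "live_audio", "sensor_log"},
    "normal activity":    {"silhouette_video", "sensor_log"},   # video masked
    "unoccupied":         {"sensor_log"},
}

context = infer_context(audio_level_db=85.0, motion_sensors_active=0, door_open=True)
print(context, "->", POLICY[context])
```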

Relevance:

30.00%

Publisher:

Abstract:

In this paper we consider two methods for automatically determining values for thresholding edge maps. In contrast to most other related work they are based on the figural rather than statistical properties of the edges. The first approach applies a local edge evaluation measure based on edge continuity and edge thinness to determine the threshold on edge magnitude. The second approach is more global and considers complete connected edge curves. The curves are mapped onto an edge curve length/average magnitude feature space, and a robust technique is developed to partition this feature space into true and false edge regions. A quantitative assessment of the results on synthetic data shows that the global method performs better than the local method. Furthermore, a qualitative assessment of its application to a variety of real images shows that it reliably produces good results.
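The global approach can be pictured as mapping each connected edge curve to a (length, average magnitude) point and keeping the curves that fall in the true-edge region of that feature space. In the sketch below, a simple product-threshold decision rule stands in for the paper's robust partitioning technique; the threshold values and the toy edge map are assumptions.

```python
# Sketch of curve-level thresholding: label connected edge curves, compute their
# length and average magnitude, and keep curves whose product exceeds an assumed
# significance value (standing in for the robust feature-space partition).

import numpy as np
from scipy import ndimage

def threshold_edge_curves(edge_magnitude: np.ndarray, seed_thresh: float = 0.05,
                          significance: float = 4.0) -> np.ndarray:
    """Keep connected edge curves whose length x mean magnitude is large enough."""
    binary = edge_magnitude > seed_thresh                 # weak seed threshold
    labels, n = ndimage.label(binary, structure=np.ones((3, 3)))
    keep = np.zeros_like(binary)
    for curve_id in range(1, n + 1):
        mask = labels == curve_id
        length = int(mask.sum())                          # pixels in the curve
        avg_mag = float(edge_magnitude[mask].mean())
        if length * avg_mag > significance:               # assumed decision rule
            keep |= mask
    return keep

# Toy edge-magnitude map: one long strong curve (kept), one short weak curve (rejected).
mag = np.zeros((40, 40))
mag[20, 5:35] = 0.8
mag[5, 10:14] = 0.2
print(threshold_edge_curves(mag).sum(), "edge pixels kept")
```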

Relevance:

30.00%

Publisher:

Abstract:

In this paper we discuss combining incremental learning and incremental recognition to classify patterns consisting of multiple objects, each represented by multiple spatio-temporal features. Importantly, the technique allows for ambiguity in the positions of the start and finish of the pattern. This involves a progressive classification that considers the data at each time instance in the query and thus provides a probable answer before all the query information becomes available. We present two methods that combine incremental learning and incremental recognition: a time instance method and an overall best match method.
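A sketch of the progressive-classification idea: class scores are updated as each time instance of the query arrives, so a probable answer is available before the full query has been seen. The per-instance likelihood model and the classes are invented for illustration; neither of the paper's two methods is reproduced here.

```python
# Sketch of progressive classification over time instances. The class models,
# the Gaussian per-instance likelihood, and the partial query are assumptions.

class_models = {                       # assumed per-class expected feature per time step
    "wave":  [0.1, 0.4, 0.9, 0.4, 0.1],
    "point": [0.1, 0.2, 0.3, 0.4, 0.5],
}

def instance_log_likelihood(observation, expected, sigma=0.2):
    return -((observation - expected) ** 2) / (2 * sigma ** 2)

query = [0.12, 0.38, 0.85]             # only part of the pattern has been seen so far
scores = {c: 0.0 for c in class_models}
for t, obs in enumerate(query):        # incremental recognition, one instance at a time
    for c, model in class_models.items():
        scores[c] += instance_log_likelihood(obs, model[t])
    probable = max(scores, key=scores.get)
    print(f"after {t + 1} instances, probable class: {probable}")
```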

Relevance:

30.00%

Publisher:

Abstract:

In this paper we consider two methods for automatically determining values for thresholding edge maps. Rather than using statistical methods, they are based on the figural properties of the edges. Two approaches are taken. We investigate applying an edge evaluation measure based on edge continuity and edge thinness to determine the threshold on edge strength. However, this technique is not valid when applied to edge detector outputs that are one pixel wide. In this case, we use a measure based on work by Lowe for assessing edges. This measure is based on the length and average strength of complete linked edge lists.

Relevance:

30.00%

Publisher:

Abstract:

Slab-girder bridges are widely used in Australia. The shear connection between the reinforced concrete slab and the steel girder plays an important role in composite action. In order to test the suitability and efficiency of various vibration-based damage identification methods for assessing the integrity of the structure, a scaled composite bridge model was constructed in the laboratory. Removable shear connectors were specially designed and fabricated to link the beam and slab, which were cast separately. In this test, two static loads were applied at the one-third points of the structure. In the first stage, a dynamic test was conducted under different damage scenarios, in which a number of shear connectors were removed step by step. In the second stage, the static load was increased gradually until the concrete slab cracked. Static tests were conducted continuously to monitor the deflection of and loading on the beam. Dynamic tests were carried out before and after concrete cracking. Both static and dynamic results can be used to identify damage in the structure.
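One common way such dynamic test data feed damage identification is by tracking shifts in natural frequencies between the intact and damaged states (for example after shear connectors are removed or the slab cracks). The sketch below estimates the dominant frequency from acceleration spectra before and after damage; the synthetic signals and sampling rate are assumptions, since the abstract does not specify the identification method used.

```python
# Sketch of a frequency-shift damage indicator from acceleration measurements.
# The 200 Hz sampling rate and the synthetic signals are invented for illustration.

import numpy as np

def dominant_frequency(acc: np.ndarray, fs: float) -> float:
    """First natural frequency estimated as the peak of the acceleration spectrum."""
    spectrum = np.abs(np.fft.rfft(acc - acc.mean()))
    freqs = np.fft.rfftfreq(acc.size, d=1 / fs)
    return float(freqs[spectrum.argmax()])

fs = 200.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
acc_intact  = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.standard_normal(t.size)
acc_damaged = np.sin(2 * np.pi * 10.8 * t) + 0.3 * rng.standard_normal(t.size)

f0, f1 = dominant_frequency(acc_intact, fs), dominant_frequency(acc_damaged, fs)
print(f"frequency drop: {f0:.2f} Hz -> {f1:.2f} Hz ({100 * (f0 - f1) / f0:.1f} %)")
```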