968 results for Classifier Combination Systems


Relevance: 30.00%

Publisher:

Abstract:

The present world energy production relies heavily on the combustion of solid fuels such as coal, peat, biomass and municipal solid waste, while the share of renewable fuels is anticipated to increase in the future to mitigate climate change. In Finland, peat and wood are widely used for energy production. In all cases, the combustion of solid fuels results in the generation of several types of thermal conversion residues, such as bottom ash, fly ash and boiler slag. The predominant residue type is determined by the incineration technology applied, while its composition depends primarily on the composition of the fuels combusted. Extensive research has been conducted on the technical suitability of ash for multiple recycling methods. Most attention has been paid to the recycling of coal combustion residues, as coal is the primary solid fuel consumed globally; the recycling methods for coal residues include utilization in the cement industry, in concrete manufacturing and in mine backfilling, to name a few. Biomass combustion residues have also been studied to some extent, with forest fertilization, road construction and road stabilization being the predominant utilization options. Lastly, residues from municipal solid waste incineration have attracted more attention recently, following the growing number of waste incineration plants worldwide. The recycling methods for waste incineration residues are the most limited owing to their hazardous nature and varying composition, and include, among others, landfill construction, road construction and mine backfilling. In this study, the environmental and economic aspects of multiple recycling options for the thermal conversion residues generated within a case-study area, South-East Finland, were examined. The environmental analysis was performed using an internationally recognized methodology, life cycle assessment, and the economic assessment was conducted using a widely applied methodology, cost-benefit analysis. Finally, the results of the two analyses were combined to enable easier comparison of the recycling methods. The recycling methods comprised the use of ash in forest fertilization, road construction, road stabilization and landfill construction, with ash landfilling set as the baseline scenario. Quantitative data on the amounts of ash generated and its composition were obtained from companies, their environmental reports, technical reports and other previously published literature. Overall, the amount of ash generated in the case-study area was 101 700 t. However, only data on 58 400 t of fly ash and 35 100 t of bottom ash and boiler slag were included in the study, owing to a lack of data on heavy-metal leaching in some cases. The recycling methods were modelled according to previously published scientific studies. Overall, the results indicated that ash utilization for the fertilization and neutralization of 17 600 ha of forest was the most economically beneficial method, increasing the net present value by 58% compared with ash landfilling. Regarding the environmental impact, the use of ash in the construction of 11 km of roads was the most attractive method, with a 13% lower environmental impact than ash landfilling. The least preferred method was the use of ash for landfill construction, since it enabled only an 11% increase in net present value while inducing an additional 1% of negative environmental impact. The following recycling route was therefore proposed in the study.
Where possible and legally acceptable, fly and bottom ash should be recycled for forest fertilization, which has the strictest requirements of all the studied methods. If the quality of fly ash is not suitable for forest fertilization, it should be utilized first in paved road construction and second in road stabilization. Bottom ash not suitable for forest fertilization, as well as boiler slag, should be used in landfill construction. Landfilling should only be practiced when recycling by any of these methods is not possible due to legal requirements or when there is not enough demand on the market. The current demand for ash and possible future changes were assessed in the study. Currently, the area of forest fertilized in the case-study area is only 451 ha, whereas about 17 600 ha of forest could be fertilized with the ash generated in the region. Given that average forest fertilization rates in Finland are higher and that the area treated with fellings is about 40 000 ha, the amount of ash utilized in forest fertilization could be increased. Regarding road construction, no new projects launched by the Center of Economic Development, Transport and the Environment in the case-study area were identified. A potential application can be found in the construction of private roads; however, no centralized data about such projects are available. The use of ash in the stabilization of forest roads is not expected to increase in the future, given the current downward trend in the length of forest roads built. Finally, the use of ash in landfill construction is not a promising option because of the decreasing number of landfills in operation in Finland.
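
As a purely illustrative sketch of the kind of net-present-value comparison against a landfilling baseline described above (all cash flows and the discount rate below are hypothetical, not the study's data), in Python:

```python
# Hypothetical annual cash flows (EUR) over a 10-year horizon; negative = net cost.
def npv(cash_flows, rate=0.05):
    """Net present value with a constant discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

landfilling = [-120_000] * 10      # baseline scenario: gate fees, transport
fertilization = [-40_000] * 10     # recycling scenario: spreading costs, avoided fees

change = (npv(fertilization) - npv(landfilling)) / abs(npv(landfilling))
print(f"NPV change vs. landfilling baseline: {change:+.0%}")
```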

Relevance: 30.00%

Publisher:

Abstract:

Software maintenance is a very important phase of the software life cycle. After the development and deployment phases, it is the one that lasts the longest and accounts for the majority of costs in industry. These costs are largely due to the difficulty of making changes in the software and of containing the effects of those changes. In this perspective, numerous works have targeted the analysis and prediction of the impact of changes on software. Existing approaches require a large amount of input information that is difficult to obtain. In this thesis, we use a probabilistic approach. Bayesian classifiers are trained with historical data on changes. They consider the relationships between the elements (inputs) and the dependencies between historical changes (outputs). More specifically, a complex change is divided into elementary changes. For each type of elementary change, we create a Bayesian classifier. To predict the impact of a complex change decomposed into elementary changes, the individual decisions of the classifiers are combined according to various strategies. Our working hypothesis is that our approach can be used in two scenarios. In the first scenario, the training data are extracted from earlier versions of the software for which we want to analyse the impact of changes. In the second scenario, the training data come from other software systems. This second scenario is interesting because it allows our approach to be applied to software that has no change history. We succeeded in correctly predicting the impacts of elementary changes. The results showed that using conceptual classifiers gives the best results. Regarding the prediction of complex changes, the "Voting" and OR combination methods are preferable for predicting the impact when the number of changes to analyse is large. Conversely, when this number is limited, the use of the Noisy-OR method or its modified version is recommended.
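
The combination strategies named above follow standard forms; the sketch below (Python, with hypothetical classifier outputs and an assumed 0.5 decision threshold) illustrates how OR, majority voting and Noisy-OR could merge the per-elementary-change decisions:

```python
def or_rule(decisions):
    # OR: flag an impact if any elementary-change classifier predicts one
    return any(decisions)

def majority_vote(decisions):
    # Voting: flag an impact if more than half of the classifiers agree
    return sum(decisions) > len(decisions) / 2

def noisy_or(probabilities, threshold=0.5):
    # Noisy-OR: classifiers act as independent "causes"; the combined
    # probability of impact is 1 - prod(1 - p_i)
    combined = 1.0
    for p in probabilities:
        combined *= 1.0 - p
    return 1.0 - combined >= threshold

probs = [0.2, 0.4, 0.7]                        # hypothetical per-classifier outputs
decisions = [p >= 0.5 for p in probs]
print(or_rule(decisions), majority_vote(decisions), noisy_or(probs))
# -> True False True   (Noisy-OR probability: 1 - 0.8*0.6*0.3 = 0.856)
```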

Relevance: 30.00%

Publisher:

Abstract:

This thesis is a study of discrete nonlinear systems represented by one-dimensional mappings. As one-dimensional iterative maps represent Poincaré sections of higher-dimensional flows, they offer a convenient means of understanding the dynamical evolution of many physical systems. The thesis highlights the basic ideas of deterministic chaos. Qualitative and quantitative measures for the detection and characterization of chaos in nonlinear systems are discussed, and some simple mathematical models exhibiting chaos are presented. The bifurcation scenario and the possible routes to chaos are explained, and the results of numerical computations of the Lyapunov exponents (λ) of one-dimensional maps are presented. The thesis focuses on the results obtained from our investigations of combination maps, the scaling behaviour of the Lyapunov characteristic exponents of one-dimensional maps and the nature of bifurcations in a discontinuous logistic map. It reviews the major routes to chaos in dissipative systems, namely period doubling, intermittency and crises, and gives a theoretical understanding of the route to chaos in discontinuous systems. A detailed analysis of the dynamics of a discontinuous logistic map is carried out, both analytically and numerically, to understand the route it follows to chaos. The present analysis deals only with the case of the discontinuity parameter applied to the right half of the interval of mapping. A detailed analysis of the n-furcations of various periodicities can be made, and a more general theory for the map with discontinuities applied at different positions can be developed on a similar footing.
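
As an illustration of the quantitative measure mentioned above (this is not the thesis's own code), a minimal Python sketch estimates the Lyapunov exponent of the standard logistic map x → r·x·(1−x) by averaging ln|f'(x_n)| along an orbit:

```python
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1_000, n_iter=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x)."""
    x = x0
    for _ in range(n_transient):                       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))    # ln|f'(x_n)|
    return total / n_iter

print(lyapunov_logistic(3.2))   # negative: stable period-2 orbit
print(lyapunov_logistic(4.0))   # close to ln 2: fully developed chaos
```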

Relevance: 30.00%

Publisher:

Abstract:

The photoacoustic investigations carried out on different photonic materials are presented in this thesis. The photonic materials selected for the investigation are tape-cast ceramics, multilayer dielectric coatings, organic dye doped PVA films and PMMA matrices doped with dye mixtures. The studies are performed by measuring the photoacoustic signal generated as a result of modulated cw laser irradiation of the samples, with the gas-microphone scheme employed for detection of the photoacoustic signal. The different measurements reported here reveal the adaptability and utility of the PA technique for the characterization of photonic materials. Ceramics find applications in the microelectronics industry. Tape-cast ceramics are the building blocks of many electronic components, and certain ceramic tapes are used as thermal barriers. The thermal parameters of these tapes will not be the same as those of thin films of the same materials; the parameters are influenced by the presence of foreign bodies in the matrix and by the sample preparation technique. Measurements are made on ceramic tapes of zirconia, a zirconia-alumina combination, barium titanate, barium tin titanate, silicon carbide, lead zirconate titanate (PZT) and lead magnesium niobate titanate (PMN-PT). Various configurations of the photoacoustic technique, namely the heat-reflection geometry and the heat-transmission geometry, have been used for the evaluation of different thermal parameters of the samples: the heat-reflection geometry of the PA cell has been used for the evaluation of thermal effusivity, and the heat-transmission geometry for the evaluation of thermal diffusivity. From the thermal diffusivity and thermal effusivity values, the thermal conductivity is also calculated. The calculated values are nearly the same as those reported for the pure materials, which demonstrates the feasibility of the photoacoustic technique for the thermal characterization of ceramic tapes. Organic dyes find applications as holographic recording media and as active media for laser operation. Knowledge of the photochemical stability of the material is essential if it is to be used for any of these applications. Mixing one dye with another can change the properties of the resulting system; through careful mixing of the dyes in appropriate proportions and incorporating them in polymer matrices, media of the required stability can be prepared. Investigations are carried out on Rhodamine 6G-Rhodamine B mixture doped PMMA samples. The addition of RhB in small amounts is found to stabilize Rh6G against photodegradation, and the addition of Rh6G into RhB increases the photosensitivity of the latter. The PA technique has been successfully employed for the monitoring of the dye mixture doped PMMA samples, and the same technique has been used for monitoring the photodegradation of a laser dye, cresyl violet, doped in polyvinyl alcohol. Another important application of the photoacoustic technique is in the non-destructive evaluation of layered samples. The depth-profiling capability of the PA technique has been used for the non-destructive testing of multilayer dielectric films, which are highly reflecting in the wavelength range selected for the investigations. Even though calculation of the film thickness is not possible, the number of layers present in the system can be found using the PA technique. The phase plot shows clear step-like discontinuities, the number of which coincides with the number of layers present in the multilayer stack.
This shows the sensitivity of the PA signal phase to boundaries in a layered structure. This aspect of the PA signal can be utilized in the non-destructive depth profiling of reflecting samples and for the identification of defects in layered structures.
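
For reference, the standard relations linking the quantities evaluated in the ceramic-tape measurements above (assuming homogeneous samples; the symbols are the conventional ones, not taken from the thesis) are

e = \sqrt{k \rho c_p}, \qquad \alpha = \frac{k}{\rho c_p}, \qquad \text{so that} \qquad k = e\,\sqrt{\alpha},

where k is the thermal conductivity, ρ the density, c_p the specific heat capacity, e the thermal effusivity and α the thermal diffusivity.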

Relevance: 30.00%

Publisher:

Abstract:

This thesis addresses one of the emerging topics in sonar signal processing, namely the implementation of a target classifier for noise sources in the ocean, since operator-assisted classification turns out to be tedious, laborious and time consuming. In the work reported in this thesis, various judiciously chosen components of the feature vector are used for realizing the newly proposed Hierarchical Target Trimming Model. The performance of the proposed classifier has been compared with the Euclidean distance and Fuzzy K-Nearest Neighbour model classifiers and is found to have better success rates. Procedures for generating the Target Feature Record, or feature vector, from the spectral, cepstral and bispectral features have also been suggested. The feature vector so generated from the noise data waveform is compared with the feature vectors available in the knowledge base, and the most closely matching pattern is identified for the purpose of target classification. In an attempt to improve the success rate of the feature-vector-based classifier, the proposed system has been augmented with an HMM-based classifier. In situations where the two classifier decisions disagree, a contention-resolving mechanism built around the DUET algorithm has been suggested.
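
As a rough sketch of one of the baselines compared above, a minimum-Euclidean-distance matcher against a knowledge base of stored feature records (the labels and feature values below are hypothetical), in Python:

```python
import numpy as np

def classify_min_distance(feature_vector, knowledge_base):
    """Return the target label whose stored feature record lies closest
    (Euclidean distance) to the observed feature vector."""
    return min(knowledge_base,
               key=lambda label: np.linalg.norm(knowledge_base[label] - feature_vector))

# Hypothetical 4-component feature records (e.g. spectral/cepstral summaries)
kb = {"merchant_vessel": np.array([0.9, 0.2, 0.4, 0.1]),
      "biological_noise": np.array([0.1, 0.8, 0.3, 0.6])}
print(classify_min_distance(np.array([0.8, 0.3, 0.5, 0.2]), kb))  # merchant_vessel
```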

Relevance: 30.00%

Publisher:

Abstract:

This paper is a review of the work done on the dynamics of modulated logistic systems. Three different problems are treated, viz. the modulated logistic map, the parametrically perturbed logistic map and the combination map obtained by combining two maps of the quadratic family. Many of the interesting features displayed by these systems are discussed.

Relevance: 30.00%

Publisher:

Abstract:

Light emitting polymers (LEPs) have drawn considerable attention because of their numerous potential applications in the field of optoelectronic devices. To date, a large number of organic molecules and polymers have been designed and devices fabricated based on these materials. Optoelectronic devices such as polymer light emitting diodes (PLEDs) have attracted widespread research attention owing to their superior properties, such as flexibility, lower operational power, colour tunability and the possibility of obtaining large-area coatings. PLEDs can be utilized for the fabrication of flat panel displays and as replacements for incandescent lamps. The internal efficiency of such LEDs depends mainly on the electroluminescent performance of the emissive polymer, including its quantum efficiency, the luminance-voltage profile of the LED and the balanced injection of electrons and holes. Poly(p-phenylenevinylene) (PPV) and regio-regular polythiophenes are interesting electro-active polymers which exhibit good electrical conductivity, electroluminescent activity and good film-forming properties. A combination of red, green and blue emitting polymers is necessary for the generation of white light, which could replace high-energy-consuming incandescent lamps. Most of these polymers show very low solubility and stability and poor mechanical properties. Many of these light emitting polymers are based on conjugated extended chains of alternating phenyl and vinyl units, and the intra-chain or inter-chain interactions within these polymer chains can change the emitted colour. Therefore, an effective way of synthesizing polymers with reduced π-stacking, high solubility, high thermal stability and high light-emitting efficiency is still a challenge for chemists, and new copolymers have to be designed to address these issues. Hence, in the present work, the suitability of a few novel copolymers with very high thermal stability, excellent solubility, intense light emission (blue, cyan and green) and high glass transition temperatures has been investigated for use as emissive layers in polymer light emitting diodes.

Relevance: 30.00%

Publisher:

Abstract:

Learning Disability (LD) is a classification covering several disorders in which a child has difficulty learning in a typical manner, usually caused by an unknown factor or factors. LD affects about 15% of children enrolled in schools. The prediction of learning disability is a complicated task, since identifying LD from diverse features or signs is itself a complicated problem. There is no cure for learning disabilities, and they are life-long; the problems of children with specific learning disabilities have been a cause of concern to parents and teachers for some time. The aim of this paper is to develop a new algorithm for imputing missing values and to determine the significance of the missing-value imputation method and the dimensionality reduction method for the performance of fuzzy and neuro-fuzzy classifiers, with specific emphasis on the prediction of learning disabilities in school-age children. In the basic assessment method for the prediction of LD, checklists are generally used; the data cases thus collected depend heavily on the mood of the children and may contain redundant as well as missing values. Therefore, in this study, we propose a new correlation-based algorithm for imputing the missing values and use Principal Component Analysis (PCA) for reducing the irrelevant attributes. The study found that the preprocessing methods we applied improve the quality of the data and thereby increase the accuracy of the classifiers. The system is implemented in MathWorks MATLAB 7.10. The results obtained from this study illustrate that the developed missing-value imputation method is a valuable contribution to the prediction system and is capable of improving the performance of a classifier.
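
The paper's own imputation algorithm is not reproduced here; as a rough sketch of the general idea (correlation-guided imputation followed by PCA, on hypothetical checklist data), in Python:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def impute_by_best_correlate(df):
    """Fill each missing value from the attribute most correlated with its
    column, using a least-squares line fitted on the complete rows."""
    out = df.copy()
    corr = df.corr().abs()
    for col in df.columns[df.isna().any()]:
        donor = corr[col].drop(col).idxmax()        # most correlated attribute
        both = df[[col, donor]].dropna()
        slope, intercept = np.polyfit(both[donor], both[col], 1)
        fill = out[col].isna() & out[donor].notna()
        out.loc[fill, col] = slope * out.loc[fill, donor] + intercept
    return out

# Hypothetical checklist scores with a missing entry
data = pd.DataFrame({"reading": [3.0, 5.0, 2.0, 4.0],
                     "writing": [2.8, 5.1, np.nan, 4.2],
                     "attention": [1.0, 4.0, 2.5, 3.0]})
reduced = PCA(n_components=2).fit_transform(impute_by_best_correlate(data))
```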

Relevance: 30.00%

Publisher:

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; conversely, the latter suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of regarding them as competing paradigms, the obvious potential synergies motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis, especially regarding paradigms from the field of ontology learning, is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords; hereby, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
In order to complement the identification of suitable methods to capture semantic structures, we analyze as a next step several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then take a look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise on the one hand tools which foster the emergence of semantics, and on the other hand applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services of a Social Semantic Web.
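
As a minimal illustration of a distributional relatedness measure of the kind studied above (the thesis's exact measures and datasets are not reproduced; the tag assignments below are hypothetical), in Python:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

# Hypothetical annotations: one set of keywords per tagged resource
posts = [{"python", "programming", "web"}, {"python", "scripting"},
         {"web", "html", "design"}, {"programming", "scripting"}]

cooc = Counter()                               # symmetric tag co-occurrence counts
for tags in posts:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def cosine_relatedness(t1, t2, vocab):
    """Cosine similarity of the two tags' co-occurrence vectors."""
    v1 = [cooc[(t1, w)] for w in vocab]
    v2 = [cooc[(t2, w)] for w in vocab]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = sqrt(sum(x * x for x in v1)) * sqrt(sum(x * x for x in v2))
    return dot / norm if norm else 0.0

vocab = sorted(set().union(*posts))
print(cosine_relatedness("python", "programming", vocab))   # ~0.67
```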

Relevance: 30.00%

Publisher:

Abstract:

Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately controlling complex quantum systems. In practice, the reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that allows the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems to be determined, independently of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. The combination of these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices whose basic information carrier is the qubit but also extends to systems whose fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering, such as feedback and optimisation, to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
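
For orientation only: a standard closed-system figure of merit for implementing a target unitary \hat{O} on a d-dimensional space is the gate-overlap fidelity

F = \frac{1}{d^{2}} \left| \operatorname{Tr}\!\left( \hat{O}^{\dagger} U(T) \right) \right|^{2},

where U(T) is the realised evolution. The reduced-input functionals derived in the thesis for open systems are not reproduced here.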

Relevance: 30.00%

Publisher:

Abstract:

Piecewise linear systems arise as mathematical models in many practical applications, often from the linearization of nonlinear systems. There are two main approaches to dealing with these systems, according to their continuous-time or discrete-time aspects. We propose an approach based on a state transformation, more particularly a partition of the phase portrait into different regions, where each subregion is modelled as a two-dimensional linear time-invariant system. The Takagi-Sugeno model, which is a combination of the local models, is then calculated. The simulation results show that the Alpha partition is well suited to dealing with such systems.
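
A minimal sketch of the Takagi-Sugeno blending step described above (the local models and membership functions below are hypothetical), in Python:

```python
import numpy as np

def ts_blend(x, memberships, local_models):
    """Takagi-Sugeno output: normalised-membership-weighted sum of the
    local LTI models' contributions A_i @ x."""
    w = np.array([mu(x) for mu in memberships])
    w = w / w.sum()
    return sum(wi * (Ai @ x) for wi, Ai in zip(w, local_models))

# Two hypothetical local models covering different regions of the phase portrait
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-4.0, -0.1]])
mu1 = lambda x: np.exp(-np.linalg.norm(x - np.array([1.0, 0.0])) ** 2)
mu2 = lambda x: np.exp(-np.linalg.norm(x - np.array([-1.0, 0.0])) ** 2)

print(ts_blend(np.array([0.5, 0.2]), [mu1, mu2], [A1, A2]))
```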

Relevance: 30.00%

Publisher:

Abstract:

The aim of this thesis is to narrow the gap between two different control techniques: continuous control and discrete event control (DES) techniques. This gap can be reduced by studying hybrid systems and by interpreting the majority of large-scale systems as hybrid systems. In particular, when looking deeply into a process, it is often possible to identify interactions between discrete and continuous signals. Hybrid systems are systems that have both continuous and discrete signals. Continuous signals are generally assumed to be continuous and differentiable in time, whereas discrete signals are neither continuous nor differentiable in time because of their abrupt changes. Continuous signals often represent the measurement of natural physical magnitudes such as temperature or pressure; discrete signals are normally artificial signals, operated by human artefacts, such as current, voltage or light. Typical processes modelled as hybrid systems are production systems, chemical processes, or continuous production in which time and continuous measures interact with the transport and stock inventory system. Complex systems such as manufacturing lines are hybrid in a global sense: they can be decomposed into several subsystems and their links. Another motivation for the study of hybrid systems is the set of tools developed in other research domains. These tools benefit from the use of temporal logic for the analysis of several properties of hybrid system models, and use it to design systems and controllers which satisfy physical or imposed restrictions. This thesis focuses on particular types of systems with discrete and continuous signals in interaction, which can model hard nonlinearities such as hysteresis, jumps in the state, limit cycles, etc., and their possible non-deterministic future behaviour expressed by an interpretable model description. The hybrid systems treated in this work are systems with several discrete states, always fewer than thirty (larger problems can become NP-hard), and continuous dynamics evolving according to an expression with Ki ∈ Rn constant vectors or matrices acting on the state vector X; in several states the continuous evolution can have Ki = 0. In this formulation, the mathematics can express a linear time-invariant system. By using this expression for a local part, the combination of several local linear models makes it possible to represent nonlinear systems, and through the interaction with the discrete events of the system the model can compose nonlinear hybrid systems. Multistage processes with fast continuous dynamics are especially well represented by the proposed methodology, and state vectors with more than two components, such as third-order models or higher, are well handled by the proposed approximation. Flexible belt transmissions, chemical reactions with an initial start-up phase and mobile robots with significant friction are physical systems which profit from the accuracy of the proposed methodology. The motivation of this thesis is to obtain a solution that can control and drive a hybrid system from the origin, or starting point, to the goal. How to obtain this solution, and which solution is best in terms of a cost function subject to the physical restrictions and control actions, is analysed. Hybrid systems that have several possible states, different ways to drive the system to the goal and different continuous control signals are the problems that motivate this research.
The requirements for the system on which we work are: a model that can represent the behaviour of nonlinear systems and that enables the prediction of the model's possible future behaviour, in order to apply a supervisor which decides the optimal and safe action to drive the system toward the goal. Specific problems that can be addressed by the use of this kind of hybrid model are: the unity of order; controlling the system along a reachable path; controlling the system along a safe path; optimising the cost function; and modularity of control. The proposed model solves the specified problems in the switching-model problem, the initial-condition calculus and the unity of the order of the models. Continuous and discrete phenomena are represented in linear hybrid models, defined by an eight-tuple of parameters to model different types of hybrid phenomena. Applying a transformation over the state vector of an LTI system, we obtain from a two-dimensional state space a single parameter, alpha, which still maintains the dynamical information. Combining this parameter with the system output, a complete description of the system is obtained in the form of a graph in polar representation. The Takagi-Sugeno type III fuzzy model includes a linear time-invariant (LTI) model for each local model; the fuzzification of the different LTI local models gives as a result a nonlinear time-invariant model. In our case the output and the alpha measure govern the membership functions. Hybrid systems control is a huge task: the process needs to be guided from the starting point to the desired end point, passing through different specific states and points along the trajectory. The system can be structured in different levels of abstraction, and the control of hybrid systems can be organised in three layers, from planning the process to producing the actions: the planning, process and control layers. In this case the algorithms will be applied to robotics, a domain where improvements are well accepted and where simple repetitive processes are expected, for which the extra effort in complexity can be compensated by some cost reductions. It may also be interesting to implement some control optimisation for processes such as fuel injection, DC-DC converters, etc. In order to apply the Ramadge-Wonham (RW) theory of discrete event systems to a hybrid system, we must abstract the continuous signals and project the events generated by these signals, to obtain new sets of observable and controllable events. Ramadge and Wonham's theory, along with the TCT software, gives a controllable sublanguage of the legal language generated by a Discrete Event System (DES). Continuous abstraction transforms predicates over continuous variables into controllable or uncontrollable events, and modifies the sets of uncontrollable, controllable, observable and unobservable events. Continuous signals produce virtual events in the system when they cross the bound limits. If such an event is deterministic, it can be projected; it is necessary to determine the controllability of the event in order to assign it to the corresponding set of controllable, uncontrollable, observable or unobservable events. Finding optimal trajectories that minimise some cost function is the goal of the modelling procedure. A mathematical model of the system allows the user to apply mathematical techniques to this expression: to minimise a specific cost function, to obtain optimal controllers and to approximate a specific trajectory.
The combination of dynamic programming with Bellman's principle of optimality gives the procedure to solve the minimum-time trajectory for hybrid systems. The problem is harder when there is interaction between adjacent states; in hybrid systems the problem is to determine the partial set points to be applied to the local models. An optimal controller can be implemented in each local model in order to assure the minimisation of the local costs. The solution of this problem must provide the trajectory the system should follow, a trajectory marked by a set of set points that force the system to pass over them. Several ways are possible to drive the system from the starting point Xi to the end point Xf, each interesting in a different sense: dynamics, minimum number of states, approximation of the set points, etc. These ways need to be safe, viable and reachable (RchW), and only one of them must be applied, normally the best one, which minimises the proposed cost function. A Reachable Way, meaning a controllable and safe way, will be evaluated in order to determine which one minimises the cost function. The contribution of this work is a complete framework for working with the majority of hybrid systems; the procedures to model, control and supervise are defined and explained, and their use is demonstrated. The procedure to model the systems to be analysed for automatic verification is also explained. Great improvements were obtained by using this methodology in comparison with other piecewise linear approximations, and it is demonstrated that in particular cases this methodology can provide the best approximation. The most important contribution of this work is the Alpha approximation for nonlinear systems with fast dynamics; while this kind of process is not typical, in such cases the Alpha approximation is the best linear approximation to use and gives a compact representation.
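
As a toy illustration of the dynamic-programming/Bellman step used for the minimum-cost trajectory (the hybrid-system specifics, guards and continuous dynamics are omitted; the states and transition costs below are hypothetical), in Python:

```python
import math

# Hypothetical discrete abstraction: transition costs (e.g. time) between states
cost = {("start", "A"): 2.0, ("start", "B"): 5.0,
        ("A", "B"): 1.0, ("A", "goal"): 6.0, ("B", "goal"): 1.5}
states = {"start", "A", "B", "goal"}

# Bellman recursion: V(s) = min over transitions (s -> s') of cost + V(s')
V = {s: math.inf for s in states}
V["goal"] = 0.0
for _ in range(len(states)):                 # enough sweeps for this small graph
    for (s, s2), c in cost.items():
        V[s] = min(V[s], c + V[s2])

print(V["start"])                            # 4.5, via start -> A -> B -> goal
```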

Relevance: 30.00%

Publisher:

Abstract:

Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' cues (stereo and motion parallax) or only 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and the position of the target relative to other objects were varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
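
A standard reference point for such predictions is the reliability-weighted (maximum-likelihood) cue-combination rule, given here for orientation rather than as the paper's exact model:

\hat{D} = w_{\text{phys}} D_{\text{phys}} + w_{\text{tex}} D_{\text{tex}}, \qquad w_{i} = \frac{1/\sigma_{i}^{2}}{1/\sigma_{\text{phys}}^{2} + 1/\sigma_{\text{tex}}^{2}},

where D_phys and D_tex are the distance-change estimates given by each cue class and the σ_i are taken proportional to the corresponding measured thresholds, so that the threshold ratio determines the predicted bias.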

Relevance: 30.00%

Publisher:

Abstract:

Small-scale dairy systems play an important role in the Mexican dairy sector, and farm planning activities related to resource allocation have a significant impact on the profitability of such enterprises. Linear programming is a technique widely used for planning and ration formulation, and partial budgeting is a technique for assessing the impact of changes on the profitability of an enterprise. This study used both methods to optimise land use for forage production and nutrient availability, and to evaluate the economic impact of such changes in small-scale Mexican dairy systems. The model showed satisfactory performance when optimal solutions were compared with the traditional strategy. The strategy using fresh ryegrass, maize silage and oat hay, and the strategy using a combination of alfalfa hay, maize silage, fresh ryegrass and oat hay, appeared to be attractive options for providing a better nutrient supply and maintaining a higher stocking rate throughout the year than the traditional strategy.
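
A minimal sketch of the kind of linear programme used for such allocation decisions (the forages, yields, costs and resource limits below are hypothetical, not the study's data), using scipy in Python:

```python
from scipy.optimize import linprog

# Hypothetical energy yield (MJ ME per ha) and cost (per ha) of three forages:
# fresh ryegrass, maize silage, oat hay
energy = [95_000, 120_000, 60_000]
cost = [600.0, 900.0, 400.0]
land_available = 5.0          # ha
budget = 3_500.0

# Maximise total energy supply == minimise its negative, subject to land and budget
res = linprog(c=[-e for e in energy],
              A_ub=[[1.0, 1.0, 1.0], cost],
              b_ub=[land_available, budget],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)        # hectares per forage, total energy supplied
```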

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a theoretical model of the torsional characteristics of parallel multi-part rope systems. In such systems, the ropes may cable, or wrap around each other, depending on the combination of applied torque, rope tension, length and spacing between the rope parts. Cabling constitutes a failure that might be retrievable but as such can seriously affect the performance of the rope system. The torsional characteristics of the system are very different before and after cabling, and theoretical models are given for both situations. Laboratory tests were performed on both two and four rope systems, with measurements being made of torque at rotations from 0 to 360 deg. Tests were run with different rope spacings, tensions and lengths and the results compared with predictions from the theoretical model. The conclusion from the test results was that the theoretical model predicts both the pre- and post-cabling torsional behaviour with an acceptable level of accuracy.