981 results for Test-problem Generator
Abstract:
The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo–Stiefel methods.
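To make the comparison concrete, here is a minimal sketch (not taken from the paper) of how the Tsien problem could be propagated in Cowell form, i.e. direct integration of the planar two-body equations of motion with a constant radial thrust acceleration starting from a circular orbit; the gravitational parameter, thrust level and integration span are assumed, normalized values.

```python
# Minimal sketch (not from the paper): Cowell-style propagation of the Tsien
# problem: planar two-body motion with a constant radial thrust acceleration,
# starting from a circular orbit. Units are normalized (mu = 1, r0 = 1).
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0          # gravitational parameter (normalized)
A_THRUST = 0.05   # constant radial thrust acceleration (assumed value)

def eom(t, s):
    """Planar equations of motion in Cartesian coordinates."""
    x, y, vx, vy = s
    r = np.hypot(x, y)
    # gravity plus a constant acceleration directed along the radius vector
    ax = -MU * x / r**3 + A_THRUST * x / r
    ay = -MU * y / r**3 + A_THRUST * y / r
    return [vx, vy, ax, ay]

# circular initial orbit of unit radius: v = sqrt(mu / r0)
s0 = [1.0, 0.0, 0.0, np.sqrt(MU)]
sol = solve_ivp(eom, (0.0, 20.0), s0, rtol=1e-10, atol=1e-12)

r_final = np.hypot(sol.y[0, -1], sol.y[1, -1])
print(f"radius after t = 20: {r_final:.6f}")
```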
New On-Line Excitation-System Ground Fault Location Method Tested in a 106 MVA Synchronous Generator
Abstract:
In this paper, a novel excitation-system ground-fault location method is described and tested in a 106 MVA synchronous machine. In this unit, numerous rotor ground-fault trips always took place about an hour after synchronization to the network. However, when the field winding insulation was checked after the trips, no failure was found. The data indicated that the faults in the rotor were caused by centrifugal forces and temperature. Unexpectedly, by applying this new method, the failure was located in a cable between the excitation transformer and the automatic voltage regulator. In addition, several intentional ground faults were introduced along the field winding with different fault resistance values in order to test the accuracy of this method in locating defects in the rotor windings of large generators. In this way, the new on-line rotor ground-fault detection algorithm has been tested in high-power synchronous generators with satisfactory results.
Abstract:
A study of image coding based on the HEVC standard (high-efficiency video coding) is carried out. The project focuses on the hybrid encoder, and in particular on the inverse cosine transform, which is applied both in the encoder and in the decoder. The need to encode video arises from the appearance of image sequences as digital signals. The main problem with video is the number of bits produced when it is encoded: as image quality increases, the amount of information to be encoded grows exponentially. The use of transforms in digital image processing has grown over the years, and the inverse cosine transform has become the most widely used method in image and video coding; its advantages allow high compression ratios to be achieved at very low cost. Transform theory has improved image processing. In transform coding, an image is divided into blocks and each block is mapped to a set of coefficients; this coding exploits the statistical dependencies within images to reduce the amount of data. The project reviews the evolution of the different video coding standards over the years, and analyses the hybrid encoder, as well as the HEVC standard, in greater depth. The final objective of this degree project is the implementation of the core of a specific processor that executes the inverse cosine transform in a video decoder compatible with the HEVC standard. This objective is reached through a series of stages in which requirements are added incrementally, allowing the hardware designer to gain deeper experience of and insight into the final architecture.
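As a point of reference for the role the transform plays in the decoder, the sketch below applies a floating-point 2-D inverse cosine transform (type-III DCT) to a single block of coefficients using SciPy; the block size and coefficient values are assumed for illustration, and HEVC itself specifies integer approximations of the DCT for block sizes from 4x4 to 32x32 rather than this floating-point form.

```python
# Illustrative sketch only: a floating-point 2-D inverse DCT applied to one
# block of coefficients, mimicking the role the inverse transform plays in a
# block-based hybrid decoder. This is not the normative HEVC integer transform.
import numpy as np
from scipy.fft import idctn

BLOCK = 8  # assumed block size for the illustration

# Sparse coefficient block: DC value plus one low-frequency AC coefficient,
# the typical outcome of transform coding followed by quantization.
coeffs = np.zeros((BLOCK, BLOCK))
coeffs[0, 0] = 400.0   # DC
coeffs[0, 1] = -50.0   # low-frequency AC

# Type-III DCT (the inverse of the type-II DCT) along both dimensions.
residual = idctn(coeffs, norm="ortho")
print(np.round(residual, 1))
```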
Abstract:
There are many models in the literature, proposed over the last decades, aimed at assessing the reliability, availability and maintainability (RAM) of safety equipment, many of them focused on assessing the risk level of a technological system or on searching for appropriate design, surveillance and maintenance policies that keep an optimum level of RAM of safety systems throughout the plant's operational life. This paper proposes a new approach to RAM modelling that accounts, in an integrated manner, for equipment ageing and for the maintenance and testing effectiveness of equipment consisting of multiple items. This model is then used to perform the simultaneous optimization of testing and maintenance for ageing equipment consisting of multiple items. An example of application is provided, which considers a simplified High Pressure Injection System (HPIS) of a typical Pressurized Water Reactor (PWR). Basically, this system consists of motor-driven pumps (MDP) and motor-operated valves (MOV), where both types of components consist of two items each. These components present different failure modes, causes and behaviours, and they also undergo complex test and maintenance activities depending on the item involved. The results of the example of application demonstrate that the optimization algorithm provides the best solutions when the optimization problem is formulated and solved considering full flexibility in the implementation of the testing and maintenance activities that form part of such an integrated RAM model.
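As a rough illustration of the kind of trade-off such a model captures (a generic sketch, not the model proposed in the paper), the snippet below estimates the mean unavailability of a periodically tested standby component whose failure rate ages linearly, combining undetected-failure, test-downtime and maintenance-downtime contributions, and grid-searches the test interval; all rates and durations are assumed values.

```python
# Generic sketch (not the paper's model): mean unavailability of a periodically
# tested standby component with a linearly ageing failure rate,
# lambda(t) = lam0 + alpha * t, plus downtime from testing and maintenance.
import numpy as np

lam0, alpha = 1e-4, 1e-6          # base failure rate and ageing slope (per hour), assumed
tau_test, tau_maint = 2.0, 12.0   # test / maintenance durations in hours, assumed
horizon = 8760.0                  # one year of operation

def mean_unavailability(T):
    """Approximate time-averaged unavailability for test interval T (hours)."""
    # the average age over the horizon drives the ageing part of the failure rate
    lam_avg = lam0 + alpha * horizon / 2.0
    u_failure = lam_avg * T / 2.0   # undetected-failure contribution
    u_test = tau_test / T           # downtime while testing
    u_maint = tau_maint / horizon   # one maintenance per year, assumed
    return u_failure + u_test + u_maint

intervals = np.arange(100.0, 4001.0, 50.0)
best_T = min(intervals, key=mean_unavailability)
print(f"best test interval ~ {best_T:.0f} h, U = {mean_unavailability(best_T):.2e}")
```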
Abstract:
The dwarf somaclonal variant is a major problem affecting micropropagation of the banana cultivar Williams (Musa spp. AAA; subgroup Cavendish). This problem arises from genetic changes that occur during the tissue culture process. Early identification of this problem is difficult, and propagators must wait until plants are ex vitro in order to visualise the dwarfism phenotype. In this study, we have improved a SCAR-based molecular diagnostic technique, developed by Damasco et al. [Acta Hortic. 461 (1997) 157], for the early identification of dwarf off-types. We have included a positive internal control in a multiplex PCR and adapted the technique for use with small amounts of fresh in vitro leaf material as PCR template. The control product is a 500 bp fragment from 18S rRNA and is amplified in all tissues irrespective of phenotype. The use of small amounts of in vitro leaf material removes the need for genomic DNA extraction. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
This study examined the genetic and environmental relationships among five academic achievement skills of a standardized test of academic achievement, the Queensland Core Skills Test (QCST; Queensland Studies Authority, 2003a). QCST participants included 182 monozygotic pairs and 208 dizygotic pairs (mean age 17 ± 0.4 years). IQ data were included in the analysis to correct for ascertainment bias. A genetic general factor explained virtually all genetic variance in the component academic skills scores, and accounted for 32% to 73% of their phenotypic variances. It also explained 56% and 42% of variation in Verbal IQ and Performance IQ respectively, suggesting that this factor is genetic g. Modest specific genetic effects were evident for achievement in mathematical problem solving and written expression. A single common factor adequately explained common environmental effects, which were also modest and possibly due to assortative mating. The results suggest that general academic ability, derived from genetic influences and to a lesser extent common environmental influences, is the primary source of variation in the component skills of the QCST.
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web, there will be a network of collaborating agents, each with their own ontologies or knowledge bases. Change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone – everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’ or ‘cheap test’ or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
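A minimal sketch of the kind of cost-benefit update policy argued for here (purely hypothetical, not from the paper): an agent notifies a peer of a change in knowledge state only when the expected benefit of keeping that peer consistent outweighs the cost of propagating and re-validating, rather than telling everyone everything; the peer attributes and numbers are invented for illustration.

```python
# Hypothetical sketch of a cost-benefit propagation policy for knowledge
# updates across collaborating agents (not the paper's model).
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    p_affected: float            # estimated probability the change affects this peer
    value_if_consistent: float   # benefit of the peer holding up-to-date knowledge
    propagation_cost: float      # messaging plus re-validation cost

def peers_to_notify(peers):
    """Return the peers for which expected benefit exceeds propagation cost."""
    return [p for p in peers
            if p.p_affected * p.value_if_consistent > p.propagation_cost]

peers = [
    Peer("ontology-A", 0.9, 10.0, 2.0),   # tightly coupled: notify
    Peer("ontology-B", 0.1, 5.0, 2.0),    # weakly coupled: skip
    Peer("ontology-C", 0.4, 8.0, 2.0),    # borderline: notify
]
print([p.name for p in peers_to_notify(peers)])
```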
Abstract:
Magnetoencephalography (MEG) is a non-invasive brain imaging technique with the potential for very high temporal and spatial resolution of neuronal activity. The main stumbling block for the technique has been that the estimation of a neuronal current distribution, based on sensor data outside the head, is an inverse problem with an infinity of possible solutions. Many inversion techniques exist, all using different a-priori assumptions in order to reduce the number of possible solutions. Although all techniques can be thoroughly tested in simulation, implicit in the simulations are the experimenter's own assumptions about realistic brain function. To date, the only way to test the validity of inversions based on real MEG data has been through direct surgical validation, or through comparison with invasive primate data. In this work, we constructed a null hypothesis that the reconstruction of neuronal activity contains no information on the distribution of the cortical grey matter. To test this, we repeatedly compared rotated sections of grey matter with a beamformer estimate of neuronal activity to generate a distribution of mutual information values. The significance of the comparison between the un-rotated anatomical information and the electrical estimate was subsequently assessed against this distribution. We found that there was significant (P < 0.05) anatomical information contained in the beamformer images across a number of frequency bands. Based on the limited data presented here, we can say that the assumptions behind the beamformer algorithm are not unreasonable for the visual-motor task investigated.
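The permutation logic can be illustrated with a small sketch (not the authors' code): mutual information between an anatomical grey-matter mask and a beamformer power image is computed from a joint histogram, and a null distribution is built from spatially shifted versions of the mask standing in for the rotated sections; the image sizes, bin count and toy data are assumptions.

```python
# Illustrative sketch of a permutation test on mutual information between an
# anatomical mask and a functional image (toy data, not the study's pipeline).
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(a, b, bins=16):
    """MI of two equally shaped images, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy data: a grey-matter mask and a power image partially aligned with it.
grey = (rng.random((64, 64)) < 0.3).astype(float)
power = 0.6 * grey + 0.4 * rng.random((64, 64))

observed = mutual_information(grey, power)
null = [mutual_information(np.roll(grey, shift=int(rng.integers(5, 60)),
                                   axis=int(rng.integers(2))), power)
        for _ in range(500)]
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"observed MI = {observed:.3f}, permutation p = {p_value:.3f}")
```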
Abstract:
An optical autocorrelator grown on a (211)B GaAs substrate that uses visible surface-emitted second-harmonic generation is demonstrated. The (211)B orientation needs TE mode excitation only, thus eliminating the problem of the beating between the TE and TM modes that is required for (100)-grown devices; it also has the advantage of giving higher upconversion efficiency than (111) growth. Values of waveguide loss and the difference in the effective refractive index between the TE(0) and TE(1) modes were also obtained from the autocorrelation experiment.
Abstract:
This study is primarily concerned with the problem of brake squeal in disc brakes using moulded organic disc pads. Moulded organic friction materials are complex composites and, because of this complexity, it was thought that they were unlikely to be of uniform composition. Variation in composition would, under certain conditions of the braking system, cause slight changes in its vibrational characteristics, thus causing resonance in the high audio-frequency range. Dynamic mechanical properties appear to be the parameters most likely to be related to a given composition's tendency to promote squeal. Since it was necessary to test under service conditions, a review was made of all the available commercial test instruments, but as none were suitable it was necessary to design and develop a new instrument. The final instrument design, based on longitudinal resonance, enabled modulus and damping to be determined over a wide range of temperatures and frequencies. This apparatus has commercial value since it is not restricted to friction-material testing. Both used and unused pads were tested and, although the cause of brake squeal was not definitely established, the results enabled the formulation of a tentative theory of the possible conditions for brake squeal. The presence of a temperature of minimum damping was indicated, which may be of use to brake design engineers. Some auxiliary testing was also performed to establish the effect of water, oil and brake fluid, and also to determine the effect of the various components of friction materials.
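For context on what a longitudinal-resonance instrument measures, the sketch below uses the standard free-free bar relations: E = 4*rho*L^2*f1^2 gives Young's modulus from the fundamental longitudinal frequency, and the half-power bandwidth divided by the resonance frequency gives a loss factor for damping; the specimen dimensions and measured values are assumed, and this is not the instrument described in the thesis.

```python
# Standard free-free bar relations for longitudinal resonance (assumed example
# values, not measurements from the thesis).
def youngs_modulus(f1_hz, length_m, density_kg_m3):
    """E = 4 * rho * L^2 * f1^2 for the fundamental longitudinal mode."""
    return 4.0 * density_kg_m3 * length_m**2 * f1_hz**2

def loss_factor(f1_hz, bandwidth_hz):
    """Half-power (-3 dB) bandwidth estimate of the loss factor."""
    return bandwidth_hz / f1_hz

# Assumed example values for a small friction-material bar specimen.
f1, df = 9500.0, 350.0   # resonance frequency and -3 dB bandwidth (Hz)
L, rho = 0.10, 2500.0    # length (m) and density (kg/m^3)

print(f"E ~ {youngs_modulus(f1, L, rho) / 1e9:.2f} GPa, "
      f"loss factor ~ {loss_factor(f1, df):.3f}")
```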
Abstract:
An analogous thinking task was used to test Nemeth's Convergent–Divergent theory of majority and minority influence. Participants read a (base) problem and one of three solutions (one of which is considered the 'best' solution). They then generated solutions to a second (target) problem which shared similar structural features with the first problem. Because of the similarities between the problems, the solution given to the first problem can be used as an analogy in solving the second. In contrast to Nemeth's theory, when the solution to the base problem was endorsed by a numerical majority there was no increase in analogy transfer in solving the target problem. However, in support of Nemeth's theory, when the base solution was supported by a numerical minority, participants were more likely to generate the 'best' solution to the target problem regardless of which base solution they were given. Copyright © 1999 John Wiley & Sons, Ltd.
Abstract:
A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content - i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have 2 major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for use in television and film productions, is not possible as their efficacy cannot be reliably measured and compared to existing technologies. As a solution to the problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the 2 major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.
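As an illustration of the kind of colour-based retrieval system a Mosaic Test would evaluate (a sketch, not the system studied in the paper), the snippet below ranks a toy image database by histogram intersection between the joint RGB colour histograms of the query and of each database image; the bin count and synthetic images are assumptions.

```python
# Sketch of colour-based image retrieval by histogram intersection
# (toy, randomly generated images; not the paper's system).
import numpy as np

def colour_histogram(img, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(1)
database = {f"img_{i}": rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
            for i in range(5)}
query = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

q_hist = colour_histogram(query)
ranking = sorted(database,
                 key=lambda k: histogram_intersection(q_hist, colour_histogram(database[k])),
                 reverse=True)
print("retrieval order:", ranking)
```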
Abstract:
Transportation service operators are witnessing a growing demand for bi-directional movement of goods. Given this, the following thesis considers an extension of the vehicle routing problem (VRP) known as the delivery and pickup transportation problem (DPP), where delivery and pickup demands may occupy the same route. The problem is formulated here as the vehicle routing problem with simultaneous delivery and pickup (VRPSDP), which requires the concurrent service of both demands at the customer location. This formulation provides the greatest opportunity for cost savings for both the service provider and the recipient. The aims of this research are to propose a new theoretical design to solve the multi-objective VRPSDP, to provide software support for the suggested design and to validate the method through a set of experiments. A new real-life-based multi-objective VRPSDP is studied here, which requires the minimisation of three often conflicting objectives: operated vehicle fleet size, total routing distance and the maximum variation between route distances (workload variation). The former two objectives are commonly encountered in the domain, and the latter is introduced here because it is essential for real-life routing problems. The VRPSDP is a hard combinatorial optimisation problem, therefore an approximation method, the Simultaneous Delivery and Pickup method (SDPmethod), is proposed to solve it. The SDPmethod consists of three phases. The first phase constructs a set of diverse partial solutions, one of which is expected to form part of the near-optimal solution. The second phase determines assignment possibilities for each sub-problem. The third phase solves the sub-problems using a parallel genetic algorithm. The suggested genetic algorithm is improved by the introduction of a set of tools: a genetic-operator switching mechanism via diversity thresholds, an accuracy analysis tool and a new fitness evaluation mechanism. This three-phase method is proposed to address a shortcoming that exists in the domain, where an initial solution is built only to be completely dismantled and redesigned in the optimisation phase. In addition, a new routing heuristic, RouteAlg, is proposed to solve the VRPSDP sub-problem, the travelling salesman problem with simultaneous delivery and pickup (TSPSDP). The experimental studies are conducted using the well-known Salhi and Nagy (1999) benchmark test problems, where the SDPmethod and RouteAlg solutions are compared with the prominent works in the VRPSDP domain. The SDPmethod has been shown to be an effective method for solving the multi-objective VRPSDP, and RouteAlg an effective method for the TSPSDP.
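The core feasibility rule of the VRPSDP can be shown in a few lines (a sketch under assumed data, not the SDPmethod or RouteAlg): the vehicle leaves the depot carrying all deliveries for the route, and at each customer the load falls by the delivery amount and rises by the pickup amount, so the running load must stay within capacity at every point for the route to be feasible.

```python
# Sketch: evaluating one VRPSDP route with simultaneous delivery and pickup
# (assumed toy data, not the thesis's algorithms).
import math

def route_cost_and_feasible(route, coords, delivery, pickup, capacity, depot=(0.0, 0.0)):
    """Return (total distance, feasible) for a route visiting customers in order."""
    load = sum(delivery[c] for c in route)   # vehicle leaves loaded with all deliveries
    if load > capacity:
        return math.inf, False
    dist, prev = 0.0, depot
    for c in route:
        dist += math.dist(prev, coords[c])
        load += pickup[c] - delivery[c]      # simultaneous service at the stop
        if load > capacity:
            return math.inf, False
        prev = coords[c]
    dist += math.dist(prev, depot)           # return to depot
    return dist, True

coords = {"A": (2.0, 1.0), "B": (4.0, 3.0), "C": (1.0, 4.0)}
delivery = {"A": 3, "B": 5, "C": 2}
pickup = {"A": 4, "B": 1, "C": 6}
print(route_cost_and_feasible(["A", "B", "C"], coords, delivery, pickup, capacity=12))
```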
Abstract:
The primary aim was to examine the influence of subclinical disordered eating on autobiographical memory specificity (AMS) and social problem solving (SPS). A further aim was to establish whether AMS mediated the relationship between eating psychopathology and SPS. A non-clinical sample of 52 females completed the autobiographical memory test (AMT), in which they were asked to retrieve specific memories of events from their past in response to cue words, and the means-end problem-solving task (MEPS), in which they were asked to generate means of solving a series of social problems. Participants also completed the Eating Disorders Inventory (EDI) and the Hospital Anxiety and Depression Scale. After controlling for mood, high scores on the EDI subscales, particularly Drive-for-Thinness, were associated with the retrieval of fewer specific and a greater proportion of categorical memories on the AMT, and with the generation of fewer and less effective means on the MEPS. Memory specificity fully mediated the relationship between eating psychopathology and SPS. These findings have implications for individuals exhibiting high levels of disordered eating, as poor AMS and SPS are likely to impact negatively on their psychological wellbeing and everyday social functioning, and could represent a risk factor for the development of clinically significant eating disorders.
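As a reminder of how such a mediation claim is typically quantified (a generic product-of-coefficients sketch on synthetic data, not the study's analysis), the snippet below regresses the mediator on the predictor and the outcome on both, then compares the total, indirect and direct effects; the variable names and data are invented.

```python
# Generic mediation sketch on synthetic data: does a mediator M (memory
# specificity) carry the effect of a predictor X (eating psychopathology score)
# on an outcome Y (problem-solving effectiveness)?
import numpy as np

rng = np.random.default_rng(2)
n = 52
X = rng.normal(size=n)
M = -0.6 * X + rng.normal(scale=0.8, size=n)   # synthetic mediator
Y = 0.5 * M + rng.normal(scale=0.8, size=n)    # synthetic outcome

def ols(y, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    design = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(design, y, rcond=None)[0]

c_total = ols(Y, X)[1]        # total effect of X on Y
a = ols(M, X)[1]              # X -> M path
b = ols(Y, X, M)[2]           # M -> Y path, controlling for X
c_direct = ols(Y, X, M)[1]    # direct effect of X on Y

print(f"total = {c_total:.3f}, indirect (a*b) = {a*b:.3f}, direct = {c_direct:.3f}")
```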