962 results for Constrained Minimization
Abstract:
Vehicle operations in underwater environments are often compromised by poor visibility conditions. For instance, the perception range of optical devices is heavily constrained in turbid waters, thus complicating navigation and mapping tasks in environments such as harbors, bays, or rivers. A new generation of high-definition forward-looking sonars providing acoustic imagery at high frame rates has recently emerged as a promising alternative for working under these challenging conditions. However, the characteristics of the sonar data introduce difficulties in image registration, a key step in mosaicing and motion estimation applications. In this work, we propose the use of a Fourier-based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation. When compared to a state-of-the-art region-based technique, our approach shows superior performance in the alignment of both consecutive and nonconsecutive views as well as higher robustness in featureless environments. The method is used to compute pose constraints between sonar frames that, integrated inside a global alignment framework, enable the rendering of consistent acoustic mosaics with high detail and increased resolution. An extensive experimental section is reported showing results in relevant field applications, such as ship hull inspection and harbor mapping.
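The Fourier-based registration the abstract builds on can be illustrated with plain phase correlation. A minimal NumPy sketch, assuming a pure translation between frames (the authors' full method additionally copes with sonar noise, artifacts, and rotation):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (row, col) shift aligning image b to image a."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real           # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image wrap around to negative values
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))  # known translation
print(phase_correlation(shifted, img))        # -> (5, -3)
```

A real sonar mosaicing pipeline would add windowing and, for example, a log-polar stage to recover rotation before the translation step.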
Abstract:
Feedback-related negativity (FRN) is an ERP component that distinguishes positive from negative feedback. FRN has been hypothesized to be the product of an error signal that may be used to adjust future behavior. In addition, associative learning models assume that the trial-to-trial learning of cue-outcome mappings involves the minimization of an error term. This study evaluated whether FRN is a possible electrophysiological correlate of this error term in a predictive learning task where human subjects were asked to learn different cue-outcome relationships. Specifically, we evaluated the sensitivity of the FRN to the course of learning when different stimuli interact or compete to become a predictor of certain outcomes. Importantly, some of these cues were blocked by more informative or predictive cues (i.e., the blocking effect). Interestingly, the present results show that both learning and blocking affect the amplitude of the FRN component. Furthermore, independent analyses of positive and negative feedback event-related signals showed that the learning effect was restricted to the ERP component elicited by positive feedback. The blocking test showed differences in the FRN magnitude between a predictive and a blocked cue. Overall, the present results show that ERPs that are related to feedback processing correspond to the main predictions of associative learning models.
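The error term of associative learning models that the abstract links to the FRN can be made concrete with a Rescorla-Wagner sketch; the learning rate and trial counts below are illustrative, not taken from the study:

```python
def rescorla_wagner(trials, alpha=0.3):
    """Update associative strengths V trial by trial.

    Each trial is (cues, outcome); the prediction error
    (outcome minus summed strength of the cues present) is the
    error term that drives learning.
    """
    V = {}
    for cues, outcome in trials:
        error = outcome - sum(V.get(c, 0.0) for c in cues)
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking design: cue A alone predicts the outcome, then A+B together.
V = rescorla_wagner([({"A"}, 1)] * 30 + [({"A", "B"}, 1)] * 30)
print(round(V["A"], 2), round(V["B"], 2))  # A near 1, B near 0 (blocked)
```

Because A already predicts the outcome by the start of the compound phase, the prediction error is near zero and B acquires almost no strength, which is the blocking effect the study probes.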
Abstract:
An important issue in language learning is how new words are integrated in the brain representations that sustain language processing. To identify the brain regions involved in meaning acquisition and word learning, we conducted a functional magnetic resonance imaging study. Young participants were required to deduce the meaning of a novel word presented within increasingly constrained sentence contexts that were read silently during the scanning session. Inconsistent contexts were also presented in which no meaning could be assigned to the novel word. Participants showed meaning acquisition in the consistent but not in the inconsistent condition. A distributed brain network was identified comprising the left anterior inferior frontal gyrus (BA 45), the middle temporal gyrus (BA 21), the parahippocampal gyrus, and several subcortical structures (the thalamus and the striatum). Drawing on previous neuroimaging evidence, we tentatively identify the roles of these brain areas in the retrieval, selection, and encoding of the meaning.
Abstract:
In recent years, many companies have opted to adopt standardized management systems in order to guarantee the profitability and reliability of the results of implementing the management system in question. It was in the 1990s that the implementation of management systems became significant in most economic sectors. Broadly speaking, management systems evolved first in the field of quality, then in environmental management, and lastly in occupational risk prevention. Over recent years these three types of management systems have been progressively integrated, reducing the resources and effort spent on management and significantly improving the effectiveness and efficiency of these systems. The main objective of this project is to define a management system that allows the company to conduct its activities in a simplified and orderly way, while providing the information needed to correct and improve those activities. A further objective of this project is to design an integrated management system (IMS) that exploits the synergies generated in the different areas of the company and fosters interactions between the different levels of the organization. As a consequence, information flows within the company will improve considerably, minimizing effort and the loss of information. The method chosen for implementing the IMS is Process Management, which is based on defining and monitoring the company's processes, starting from the customer's needs and ending when those needs are satisfied. In conclusion, upon completion of this project an IMS will be obtained, with all the company's processes defined and implemented, that complies with the UNE-EN-ISO 9001:00, UNE-EN-ISO 14001:04, and OHSAS 18001:07 standards.
This IMS, which has been developed from a documentary and theoretical standpoint, will improve the operational effectiveness of the processes and bring a significant competitive improvement to the company.
Abstract:
In this work, an analytical methodology for the determination of relevant physicochemical parameters of Prato cheese is reported, using infrared spectroscopy (DRIFT) and partial least squares regression (PLS). Several multivariate models were developed, using different spectral regions and preprocessing routines. In general, good precision and accuracy were observed for all studied parameters (fat, protein, moisture, total solids, ashes, and pH), with standard deviations comparable to those provided by the conventional methodologies. The implementation of this multivariate routine offers significant analytical advantages, including reduction of the cost and time of analysis, minimization of human error, and elimination of chemical residues.
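The PLS calibration described above can be sketched with a minimal NIPALS PLS1 implementation. The synthetic "spectra" and component count below are illustrative; a real routine would add mean-centering, spectral preprocessing, and cross-validation of the component count:

```python
import numpy as np

def pls1_fit(X, y, n_components=5):
    """Minimal PLS1 (NIPALS) regression for one response variable."""
    X, y = X.astype(float).copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)                 # weight vector
        t = X @ w                              # scores
        tt = float(t @ t)
        p, qk = X.T @ t / tt, float(y @ t) / tt
        X -= np.outer(t, p)                    # deflate X and y
        y -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P = np.column_stack(W), np.column_stack(P)
    return W @ np.linalg.solve(P.T @ W, np.array(q))   # coefficients

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                 # 100 "spectra", 10 channels
b_true = np.zeros(10); b_true[2], b_true[7] = 1.5, -0.8
y = X @ b_true + 0.01 * rng.normal(size=100)
B = pls1_fit(X, y)
r2 = 1 - np.sum((y - X @ B) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"training R^2 = {r2:.3f}")              # close to 1 on this toy data
```

In a chemometric setting each row of `X` would be a DRIFT spectrum and `y` one measured parameter (fat, protein, etc.), with one model fitted per parameter.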
Abstract:
This paper reports the development of an easy, fast, and effective procedure for verifying the ideal gas law in splitless injection systems in order to improve the instrumental response. Results for a group of pesticides were used to demonstrate the suitability of the approach. The procedure helps establish experimental parameters on theoretical grounds. The improved instrumental response allowed extraction from smaller sample volumes, reduced time and costs, and simplified sample preparation.
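The kind of ideal-gas estimate involved can be illustrated by the standard check that the vaporized solvent fits the inlet liner; the solvent data and inlet conditions below are generic textbook values, not parameters from the paper:

```python
R = 8.314  # J/(mol*K)

def solvent_vapor_volume_uL(inj_uL, density_g_mL, molar_mass_g_mol,
                            T_K, P_Pa):
    """Expanded vapor volume (uL) of an injected solvent, via PV = nRT."""
    n = inj_uL * 1e-3 * density_g_mL / molar_mass_g_mol  # moles (uL -> mL)
    v_m3 = n * R * T_K / P_Pa                            # ideal gas law
    return v_m3 * 1e9                                    # m^3 -> uL

# 1 uL of hexane (0.659 g/mL, 86.18 g/mol) at 250 C, 150 kPa inlet pressure
v_uL = solvent_vapor_volume_uL(1.0, 0.659, 86.18, 523.15, 150e3)
print(f"vapor volume ~ {v_uL:.0f} uL")   # roughly 220 uL
```

Comparing this volume against the liner capacity is one of the theoretical checks that lets injection parameters be set before running experiments.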
Abstract:
The determination of pesticide residues in water samples by liquid chromatography requires sample preparation for the extraction and enrichment of the analytes, with minimization of interferences, in order to achieve adequate detection limits. Solid Phase Extraction (SPE), Solid Phase Microextraction (SPME), Stir Bar Sorptive Extraction (SBSE), and Dispersive Liquid-Liquid Microextraction (DLLME) have been widely used for the extraction of pesticides from water. In this review, the principles of these sample preparation techniques in combination with analysis by Liquid Chromatography with Diode Array Detection (LC-DAD) or Mass Spectrometry (LC-MS) are described, and an overview of several applications is presented and discussed.
Abstract:
A methodology is proposed for explaining one of the central questions in the teaching of general chemistry courses to freshman students: why do chemical transformations occur? The answer to this question is based on thermodynamics, but we propose arriving at an answer in a more intuitive way by using computational tools, in a bid to increase students' motivation for learning chemistry.
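As a concrete instance of the intuition the abstract aims for, a few lines of code can show how the sign of ΔG = ΔH - TΔS decides whether a transformation occurs; the ice-melting data are standard textbook values, not taken from the article:

```python
def gibbs(dH_kJ, dS_J_per_K, T_K):
    """Gibbs free energy change in kJ/mol; negative means spontaneous."""
    return dH_kJ - T_K * dS_J_per_K / 1000.0

# Melting of ice: dH ~ +6.01 kJ/mol, dS ~ +22.0 J/(mol*K)
dG_cold = gibbs(6.01, 22.0, 263.15)   # -10 C
dG_warm = gibbs(6.01, 22.0, 283.15)   # +10 C
print(f"{dG_cold:+.2f} kJ/mol (no melting), {dG_warm:+.2f} kJ/mol (melts)")
```

Letting students vary the temperature and watch ΔG change sign turns the abstract thermodynamic criterion into a direct numerical experiment.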
Abstract:
The aim of the thesis is to investigate the behavior of hybrid LC filters in modern power drives and to analyze the influence of such a du/dt filter on control system stability. With the implementation of an inverter output RLC filter, motor control becomes more complicated. The influence of the filter on the motor should therefore be considered during the design process, and the filter's RLC parameters should be constrained.
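The constrained RLC parameters mentioned above map to two standard figures of merit, the resonant frequency and the damping ratio. A minimal sketch, with arbitrary example component values rather than values from the thesis:

```python
import math

def rlc_filter(L, C, R):
    """Resonant frequency (Hz) and damping ratio of a series RLC filter."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    zeta = (R / 2.0) * math.sqrt(C / L)
    return f0, zeta

# Illustrative du/dt filter values
f0, zeta = rlc_filter(L=100e-6, C=1e-6, R=10.0)
print(f"f0 = {f0:.0f} Hz, zeta = {zeta:.2f}")
```

A common design constraint is to place the resonance well above the control bandwidth but below the inverter switching frequency, with the damping ratio chosen to suppress du/dt oscillation without excessive losses.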
Abstract:
The maximum realizable power throughput of power electronic converters may be limited or constrained by technical or economical considerations. One solution to this problem is to connect several power converter units in parallel. The parallel connection can be used to increase the current carrying capacity of the overall system beyond the ratings of individual power converter units. Thus, it is possible to use several lower-power converter units, produced in large quantities, as building blocks to construct high-power converters in a modular manner. High-power converters realized by using parallel connection are needed for example in multimegawatt wind power generation systems. Parallel connection of power converter units is also required in emerging applications such as photovoltaic and fuel cell power conversion. The parallel operation of power converter units is not, however, problem free. This is because parallel-operating units are subject to overcurrent stresses, which are caused by unequal load current sharing or currents that flow between the units. Commonly, the term 'circulating current' is used to describe both the unequal load current sharing and the currents flowing between the units. Circulating currents, again, are caused by component tolerances and asynchronous operation of the parallel units. Parallel-operating units are also subject to stresses caused by unequal thermal stress distribution. Both of these problems can, nevertheless, be handled with a proper circulating current control. To design an effective circulating current control system, we need information about circulating current dynamics. The dynamics of the circulating currents can be investigated by developing appropriate mathematical models. In this dissertation, circulating current models are developed for two different types of parallel two-level three-phase inverter configurations.
The models, which are developed for an arbitrary number of parallel units, provide a framework for analyzing circulating current generation mechanisms and developing circulating current control systems. In addition to developing circulating current models, modulation of parallel inverters is considered. It is illustrated that depending on the parallel inverter configuration and the modulation method applied, common-mode circulating currents may be excited as a consequence of the differential-mode circulating current control. To prevent the common-mode circulating currents that are caused by the modulation, a dual modulator method is introduced. The dual modulator basically consists of two independently operating modulators, the outputs of which eventually constitute the switching commands of the inverter. The two independently operating modulators are referred to as primary and secondary modulators. In its intended usage, the same voltage vector is fed to the primary modulators of each parallel unit, and the inputs of the secondary modulators are obtained from the circulating current controllers. To ensure that voltage commands obtained from the circulating current controllers are realizable, it must be guaranteed that the inverter is not driven into saturation by the primary modulator. The inverter saturation can be prevented by limiting the inputs of the primary and secondary modulators. Because of this, a limitation algorithm is also proposed. The operation of both the proposed dual modulator and the limitation algorithm is verified experimentally.
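The origin of differential-mode circulating current can be shown with a toy steady-state model of two paralleled legs; this is an illustrative resistive sketch, not the dissertation's dynamic inverter model:

```python
def leg_currents(v1, v2, r1, r2, r_load):
    """Steady-state current sharing of two paralleled converter legs.

    Each leg is modeled as a voltage source behind a resistance feeding
    a common load; a small mismatch (v1 != v2 or r1 != r2) produces
    unequal sharing, i.e. a differential-mode circulating current.
    """
    # Node equation at the load: (v1 - v)/r1 + (v2 - v)/r2 = v/r_load
    v = (v1 / r1 + v2 / r2) / (1 / r1 + 1 / r2 + 1 / r_load)
    i1, i2 = (v1 - v) / r1, (v2 - v) / r2
    return i1, i2, (i1 - i2) / 2.0

i1, i2, ic = leg_currents(v1=100.0, v2=99.0, r1=0.05, r2=0.05, r_load=1.0)
print(f"i1 = {i1:.1f} A, i2 = {i2:.1f} A, circulating = {ic:.1f} A")
```

Even a 1% voltage mismatch drives a large circulating current through the small output impedances, which is why a dedicated circulating current controller (and a modulator that can realize its commands) is needed.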
Abstract:
The amount of installed wind power has grown exponentially during the past ten years. As wind turbines have become a significant source of electrical energy, the interactions between the turbines and the electric power network need to be studied more thoroughly than before. In particular, the behavior of the turbines in fault situations is of prime importance; simply disconnecting all wind turbines from the network during a voltage drop is no longer acceptable, since this would contribute to a total network collapse. These requirements have contributed to the increased role of simulations in the study and design of the electric drive train of a wind turbine. When planning a wind power investment, the selection of the site and the turbine is crucial for the economic feasibility of the installation. Economic feasibility, on the other hand, is the factor that determines whether or not investment in wind power will continue, contributing to green electricity production and the reduction of emissions. In the selection of the installation site and the turbine (siting and site matching), the properties of the electric drive train of the planned turbine have so far generally not been taken into account. Additionally, although the loss minimization of some of the individual components of the drive train has been studied, the drive train as a whole has received less attention. Furthermore, as a wind turbine will typically operate at a power level lower than the nominal most of the time, efficiency analysis at the nominal operating point is not sufficient. This doctoral dissertation attempts to combine the two aforementioned areas of interest by studying the applicability of time domain simulations in the analysis of the economic feasibility of a wind turbine.
The utilization of a general-purpose time domain simulator, otherwise applied to the study of network interactions and control systems, in the economic analysis of the wind energy conversion system is studied. The main benefits of the simulation-based method over traditional methods based on the analytic calculation of losses include the ability to reuse and recombine existing models; the ability to analyze interactions between the components and subsystems in the electric drive train (something which is impossible when different subsystems are considered as independent blocks, as is commonly done in the analytical calculation of efficiencies); the ability to analyze in a rather straightforward manner the effect of selections other than physical components, for example control algorithms; and the ability to verify assumptions about the effects of a particular design change on the efficiency of the whole system. Based on the work, it can be concluded that differences between two configurations can be seen in the economic performance with only minor modifications to the simulation models used in the network interaction and control method studies. This eliminates the need to develop analytic expressions for losses and enables the study of the system as a whole instead of modeling it as a series connection of independent blocks with no loss interdependencies. Three example cases (site matching, component selection, control principle selection) are provided to illustrate the usage of the approach and analyze its performance.
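The economic quantity at stake, annual energy yield under site-specific winds and part-load drivetrain efficiency, can be sketched analytically for contrast with the dissertation's simulation-based method. The power curve, Weibull parameters, and flat efficiency below are illustrative stand-ins:

```python
import math

def annual_energy_mwh(p_rated_kw, v_cut_in, v_rated, v_cut_out,
                      weibull_k, weibull_c, drivetrain_eff, n=2000):
    """Annual yield from a simplified power curve and Weibull wind pdf."""
    def power_kw(v):
        if v < v_cut_in or v > v_cut_out:
            return 0.0
        if v >= v_rated:
            return p_rated_kw
        # cubic interpolation between cut-in and rated wind speed
        return p_rated_kw * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)

    def weibull_pdf(v):
        k, c = weibull_k, weibull_c
        return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

    dv = v_cut_out / n                      # numerical integral of P(v) f(v)
    mean_kw = sum(power_kw((i + 0.5) * dv) * weibull_pdf((i + 0.5) * dv)
                  for i in range(n)) * dv
    return mean_kw * drivetrain_eff * 8760.0 / 1000.0   # MWh/year

aep = annual_energy_mwh(p_rated_kw=2000.0, v_cut_in=3.5, v_rated=12.0,
                        v_cut_out=25.0, weibull_k=2.0, weibull_c=8.0,
                        drivetrain_eff=0.93)
print(f"estimated yield: {aep:.0f} MWh/year")
```

Because most operating hours fall below rated power, replacing the flat `drivetrain_eff` with an operating-point-dependent efficiency, which is what the time-domain simulation provides, can shift this figure noticeably between drivetrain configurations.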
Abstract:
The age-old adage goes that nothing in this world lasts but change, and this generation has indeed seen changes that are unprecedented. Business managers do not have the luxury of going with the flow: they have to plan ahead and devise strategies that will meet the changing conditions, however stormy the weather seems to be. This demand raises the question of whether there is something a manager or planner can do to circumvent the eye of the storm. Intuitively, one can either run the risk of something happening without preparing, or one can try to prepare. Preparing by planning for every eventuality and contingency would be impractical and prohibitively expensive, so one needs to develop foreknowledge, or foresight, past the horizon of the present and the immediate future. The research mission in this study is to support strategic technology management by designing an effective and efficient scenario method that brings foresight to practicing managers. The design science framework guides this study in developing and evaluating the IDEAS method. The IDEAS method is an electronically mediated scenario method that is specifically designed to be effective and accessible. The design is based on the state of the art in scenario planning, and the product is a technology-based artifact to solve the foresight problem. This study demonstrates the utility, quality, and efficacy of the artifact through a multi-method empirical evaluation, first by experimental testing and secondly through two case studies. The construction of the artifact is rigorously documented as justification knowledge, as well as the principles of form and function on the general level, and later through the description and evaluation of instantiations. This design contributes both to practice and to the foundations of design.
The IDEAS method contributes to the state of the art in scenario planning by offering a lightweight and intuitive scenario method for resource constrained applications. Additionally, the study contributes to the foundations and methods of design by forging a clear design science framework which is followed rigorously. To summarize, the IDEAS method is offered for strategic technology management, in the confident belief that it will help users gain foresight and choose trajectories past the gales of creative destruction toward a brighter future.
Abstract:
In aluminium boats, welding-induced distortions are often highly detrimental, because the resulting dimensional changes and cosmetic defects lower the quality and value of the product. In many cases the performance of the welded joint also deteriorates, and welding distortions can additionally cause functional problems in the hull structures of aluminium boats. Consequently, the control and minimization of welding distortions are particularly important factors in improving the quality and cost-effectiveness of aluminium boats and in increasing the competitiveness of the aluminium boat industry. This master's thesis investigated the distortions caused by robotized gas metal arc welding, and their control, in the hull structures of aluminium work boats and leisure boats. The work reviewed modern aluminium boat manufacturing as well as the general strength-of-materials theories and behavior models of welded structures. In the study of aluminium welding distortions, practical welding experiments were carried out on the structural solutions and joint types used in aluminium boats. The aim of the work was to determine the factors and parameters that have a central influence on the distortions arising in aluminium welding. Based on the research results, solutions were proposed for reducing and controlling the welding distortions induced in aluminium boat structures. In aluminium structures, welding distortions are highly case-specific, because their formation is often determined by the combined effect of many factors. The general analytical formulas and behavior models used for steel structures are not directly applicable to aluminium structures, owing to aluminium's different material properties and behavior during welding. In the future, applying the numerical finite element method alongside empirical test arrangements and analytical models can improve the overall control of the distortions arising in aluminium welding.
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource constrained problems. The first is the tool switching and loading problem, and occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out that this is a specialized formulation of the MINMAX resource allocation formulation of the apportionment problem and it can be solved efficiently and optimally. We construct an exact algorithm specialized for the nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. 
We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well studied geometric optimization problem), and provide a heuristic algorithm for this problem.
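For the tool switching problem above, a classic baseline policy is Keep-Tool-Needed-Soonest (KTNS), which evicts the magazine tool whose next use lies farthest in the future. A minimal sketch with made-up job data (not the thesis's improved heuristics):

```python
def ktns_switches(jobs, capacity):
    """Tool switches for a fixed job sequence under the KTNS policy.

    jobs: list of tool sets; capacity: magazine size (assumed to be at
    least the largest per-job tool set). Initial loading is counted.
    """
    magazine, switches = set(), 0
    for i, needed in enumerate(jobs):
        missing = needed - magazine
        while len(magazine) + len(missing) > capacity:
            # evict the tool whose next use is farthest away
            def next_use(tool):
                for j in range(i + 1, len(jobs)):
                    if tool in jobs[j]:
                        return j
                return len(jobs)          # never needed again
            victim = max(magazine - needed, key=next_use)
            magazine.remove(victim)
        switches += len(missing)
        magazine |= missing
    return switches

jobs = [{1, 2}, {2, 3}, {1, 3}, {1, 2}]
print(ktns_switches(jobs, capacity=2))   # -> 5
```

KTNS is optimal for a *fixed* job sequence; the hard, heuristically solved part of the problem is choosing the job sequence itself so that the total number of switches is minimized.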
Abstract:
In any decision making under uncertainty, the goal is usually to minimize the expected cost. Minimizing cost under uncertainty is typically done by optimization. For simple models, the optimization can easily be carried out using deterministic methods. However, many practical models contain complex and varying parameters that cannot easily be handled with the usual deterministic optimization methods. It is therefore important to look for other methods that can provide insight into such models. The MCMC method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. It is based on simulation and provides a general methodology that can be applied to nonlinear and non-Gaussian state models. The MCMC method is important for practical applications because it is a unified estimation procedure that simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters from the given measurement data. In terms of computing time, MCMC is often faster than other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC, and stochastic optimization methods. An example is then given of how MCMC can be applied to maximizing production at minimum cost in a chemical reaction process. The method is observed to perform well in optimizing the given cost function with very high certainty.
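The workflow the abstract describes, sample the uncertain parameters with MCMC and then minimize the Monte Carlo estimate of expected cost, can be sketched with a random-walk Metropolis sampler. The toy posterior and cost function below are illustrative, not the thesis's chemical-reaction model:

```python
import math, random

def metropolis(logpost, x0, n, step=0.5):
    """Random-walk Metropolis: sample a density known up to a constant."""
    x, lp, samples = x0, logpost(x0), []
    for _ in range(n):
        y = x + random.gauss(0.0, step)       # propose a move
        lq = logpost(y)
        if math.log(random.random()) < lq - lp:   # accept/reject
            x, lp = y, lq
        samples.append(x)
    return samples

random.seed(1)
# Uncertain rate parameter k with a roughly Normal(2.0, 0.2) posterior
logpost = lambda k: -0.5 * ((k - 2.0) / 0.2) ** 2
ks = metropolis(logpost, x0=0.0, n=5000)[1000:]   # discard burn-in

# Choose the design u minimizing the Monte Carlo expected cost over k
cost = lambda u, k: (u - k) ** 2 + 0.1 * u
best = min((sum(cost(u, k) for k in ks) / len(ks), u)
           for u in [i / 10 for i in range(41)])
print(f"best u = {best[1]:.1f}")   # trades tracking E[k] against the u-penalty
```

Because the expectation is taken over the full posterior rather than a single point estimate, the chosen design accounts for parameter uncertainty, which is the advantage the thesis attributes to the MCMC approach.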