944 results for asynchronous circuits and systems


Relevance: 100.00%

Abstract:

The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant; achieving the minimum current requires control of this reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimation method is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
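The abstract describes the initial rotor-angle estimation only in outline. As a minimal sketch of that kind of nonlinear least-squares fit, the following assumes the common second-harmonic saliency model L(θ) ≈ L0 + L2·cos(2(θ − θr)); the model form, parameter values and function names are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def inductance_model(theta, L0, L2, theta_r):
    # Assumed saliency model: inductance seen in excitation direction theta
    return L0 + L2 * np.cos(2.0 * (theta - theta_r))

def estimate_rotor_angle(theta_meas, L_meas):
    """Fit (L0, L2, theta_r) to inductances measured in several directions."""
    def residuals(p):
        return inductance_model(theta_meas, *p) - L_meas
    p0 = [L_meas.mean(), 0.5 * (L_meas.max() - L_meas.min()), 0.0]
    sol = least_squares(residuals, p0)
    # The cos(2*theta) dependence leaves a 180-degree ambiguity; magnet
    # polarity must be resolved separately (e.g. via saturation effects).
    return sol.x[2] % np.pi

# Synthetic measurements in eight directions
theta = np.linspace(0.0, np.pi, 8, endpoint=False)
L = inductance_model(theta, 4e-3, 0.8e-3, 0.7) + 1e-5 * np.random.randn(8)
print(estimate_rotor_angle(theta, L))
```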

Relevance: 100.00%

Abstract:

Precision Viticulture (PV) is a concept that is beginning to have an impact on the wine-growing sector. Its practical implementation is dependent on various technological developments: crop sensors and yield monitors, local and remote sensors, Global Positioning Systems (GPS), Variable-Rate Application (VRA) equipment and machinery, Geographic Information Systems (GIS) and systems for data analysis and interpretation. This paper reviews a number of research lines related to PV. These have focused on four specific fields: 1) quantification and evaluation of within-field variability, 2) delineation of zones for differential treatment at the parcel level, based on the analysis and interpretation of this variability, 3) development of Variable-Rate Technologies (VRT) and, finally, 4) evaluation of the opportunities for site-specific vineyard management. Research in these fields should allow winegrowers and enologists to know and understand why yield variability exists within the same parcel, what the causes of this variability are, how yield and quality are interrelated and, if spatial variability exists, whether site-specific vineyard management is justifiable on a technical and economic basis.
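The review itself contains no algorithms; as an illustration of research field 2) above (delineating zones of differential treatment from within-field variability), here is a minimal sketch that clusters a gridded yield map into two management zones with k-means. The grid, yields and cluster count are all hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical yield-monitor grid: positions in metres, yield in kg/vine
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(0, 100, 5.0), np.arange(0, 60, 5.0))
yields = 2.0 + 0.01 * x + rng.normal(0.0, 0.2, x.shape)  # west-east trend + noise

# Cluster on position and (scaled) yield so the zones stay spatially coherent
features = np.column_stack([x.ravel(), y.ravel(), 50.0 * yields.ravel()])
zones = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(zones.reshape(x.shape))  # 0/1 zone map on the original grid
```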

Relevance: 100.00%

Abstract:

Optimization models in metabolic engineering and systems biology typically focus on optimizing a single criterion, usually the synthesis rate of a metabolite of interest or the rate of growth. Connectivity and non-linear regulatory effects, however, make it necessary to consider multiple objectives in order to identify useful strategies that balance out different metabolic issues. This is a fundamental aspect, as optimizing for maximum yield under a given condition may entail unrealistic values in other key processes. Due to the difficulties associated with detailed non-linear models, analyses using stoichiometric descriptions and linear optimization methods have become rather popular in systems biology. However, despite being useful, these approaches fail to capture the intrinsic nonlinear nature of the underlying metabolic systems and the regulatory signals involved. Targeting more complex biological systems requires the application of global optimization methods to non-linear representations. In this work we address the multi-objective global optimization of metabolic networks that are described by a special class of models based on the power-law formalism: the generalized mass action (GMA) representation. Our goal is to develop global optimization methods capable of efficiently dealing with several biological criteria simultaneously. In order to overcome the numerical difficulties of dealing with multiple criteria in the optimization, we propose a heuristic approach based on the epsilon-constraint method that reduces the computational burden of generating a set of Pareto optimal alternatives, each achieving a unique combination of objective values. To facilitate the post-optimal analysis of these solutions and narrow down their number prior to being tested in the laboratory, we explore the use of Pareto filters that identify the preferred subset of enzymatic profiles. We demonstrate the usefulness of our approach by means of a case study that optimizes ethanol production in the fermentation of Saccharomyces cerevisiae.
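The abstract only names the epsilon-constraint method; the sketch below shows the idea on a deliberately tiny linear surrogate (the paper's models are nonlinear GMA systems), maximizing one objective while sweeping a lower bound on the other and collecting the resulting points. All coefficients are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy surrogate: maximize f1 = x1 + 2*x2 while constraining f2 = 3*x1 + x2,
# subject to x1 + x2 <= 10 and 0 <= x1, x2 <= 8.
pareto = []
for eps in np.linspace(0.0, 24.0, 7):       # sweep the bound f2 >= eps
    res = linprog(c=[-1.0, -2.0],           # maximize f1 -> minimize -f1
                  A_ub=[[1.0, 1.0],         # x1 + x2 <= 10
                        [-3.0, -1.0]],      # -f2 <= -eps  (i.e. f2 >= eps)
                  b_ub=[10.0, -eps],
                  bounds=[(0.0, 8.0), (0.0, 8.0)])
    if res.success:
        x1, x2 = res.x
        pareto.append((x1 + 2 * x2, 3 * x1 + x2))

# A Pareto filter would now discard any dominated (f1, f2) pairs
print(pareto)
```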

Relevance: 100.00%

Abstract:

In the last decade defeasible argumentation frameworks have evolved into a sound setting for formalizing commonsense, qualitative reasoning. The logic programming paradigm has proven to be particularly useful for developing different argument-based frameworks on the basis of different variants of logic programming which incorporate defeasible rules. Most such frameworks, however, are unable to deal with explicit uncertainty or with vague knowledge, as defeasibility is directly encoded in the object language. This paper presents Possibilistic Defeasible Logic Programming (P-DeLP), a new logic programming language which combines features from argumentation theory and logic programming, incorporating as well the treatment of possibilistic uncertainty. These features are formalized on the basis of PGL, a possibilistic logic based on Gödel fuzzy logic. One of the applications of P-DeLP is providing an intelligent agent with non-monotonic, argumentative inference capabilities. In this paper we also provide a better understanding of such capabilities by defining two non-monotonic operators which model the expansion of a given program P by adding new weighted facts associated with argument conclusions and warranted literals, respectively. Several logical properties of the proposed operators are studied.
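P-DeLP's actual warrant procedure is dialectical (arguments are weighed against counterarguments); the sketch below compresses that idea into a toy forward-chainer in which each conclusion inherits the minimum of the weights used to derive it, in the spirit of the Gödel (min) semantics, and a literal counts as warranted only if its degree exceeds that of its complement. The syntax, the weights and the `derive` helper are invented for illustration and are not the paper's formal machinery.

```python
# Invented toy syntax: weighted facts and defeasible rules as
# (head, [body literals], necessity weight in (0, 1]).
facts = {"bird": 1.0, "penguin": 0.8}
rules = [("flies", ["bird"], 0.6),       # birds presumably fly
         ("~flies", ["penguin"], 0.9)]   # penguins presumably do not

def derive(facts, rules):
    """Forward-chain, giving each conclusion the min of the weights it uses."""
    degrees = dict(facts)
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            if all(lit in degrees for lit in body):
                d = min([w] + [degrees[lit] for lit in body])
                if degrees.get(head, 0.0) < d:
                    degrees[head] = d
                    changed = True
    return degrees

degrees = derive(facts, rules)
# Crude stand-in for warrant: a literal beats its complement on degree
for lit in ("flies",):
    neg = "~" + lit
    winner = lit if degrees.get(lit, 0.0) > degrees.get(neg, 0.0) else neg
    print(winner, "at degree", max(degrees.get(lit, 0.0), degrees.get(neg, 0.0)))
```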

Relevance: 100.00%

Abstract:

This master's thesis investigated the suitability of commercial multibody dynamics software for studying the dynamics and vibrations of a paper reel (winder). Of particular interest were the modeling of the nip and the vibrations occurring in it. In this thesis, the primary and secondary drives of the reel and the reel spool were modeled. The model was later combined with a model built in a parallel master's thesis at Metso Paper Järvenpää, yielding a simulation model based on two solvers. The simulation model was constructed to use two separate solvers: the ADAMS software, used to build the mechanical model, and a Simulink model describing the control system and the hydraulic circuits. To model the nip, the reel spool and the reeling cylinder were modeled as flexible bodies using the lumped mass method. The flexibilities of the transfer devices and frame structures were described as single-degree-of-freedom systems with spring-damper forces. This thesis also presents the operation of the ADAMS software in a tutorial manner and discusses the advantages of parametric modeling. The work demonstrated the suitability of multibody dynamics for studying the dynamics of the reel and the vibrations caused by dynamic forces. Only rough comparisons could be made against the vibration measurements performed, and the model was found to require further research and development.
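The flexibilities described above were modeled as single-degree-of-freedom spring-damper systems; as a minimal, self-contained illustration of such a subsystem (outside the ADAMS/Simulink co-simulation), the sketch below integrates m·x″ + c·x′ + k·x = F(t) for invented parameter values chosen so that a 15 Hz excitation sits near the natural frequency.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical one-DOF spring-damper surrogate for a frame flexibility
m, c, k = 250.0, 400.0, 2.0e6                         # kg, Ns/m, N/m
F = lambda t: 1000.0 * np.sin(2 * np.pi * 15.0 * t)   # 15 Hz forcing (N)

def rhs(t, y):
    x, v = y
    return [v, (F(t) - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=1e-3)
print("natural frequency %.1f Hz" % (np.sqrt(k / m) / (2 * np.pi)))  # ~14.2 Hz
print("peak displacement %.3e m" % np.abs(sol.y[0]).max())
```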

Relevance: 100.00%

Abstract:

Neurons and astrocytes, the two major cell populations in the adult brain, are each characterized by their own mode of intercellular communication: synapses and gap junctions (GJ), respectively. In addition, there is increasing evidence for dynamic and metabolic neuroglial interactions resulting in the modulation of synaptic transmission at the so-called "tripartite synapse". On this basis, we investigated at the ultrastructural level how excitatory synapses (ES) and astroglial GJ are spatially distributed in layer IV of the barrel cortex of the adult mouse. We used specific antibodies against connexin (Cx) 30 and 43 to identify astroglial GJ; these two proteins are known to be present in the majority of astroglial GJ in the cerebral cortex. In electron-microscopic images, we measured the distance between two ES, between two GJ, and between a GJ and its nearest ES. We found a ratio of two GJ per three ES in the hollow and septal areas. Taking into account the size of an astrocyte domain, the high density of GJ suggests the occurrence of the reflexive type, i.e. GJ between processes of the same astrocyte. Interestingly, the distance between an ES and an astroglial GJ was found to be significantly lower than that between two synapses or between two GJ. These observations indicate that the two modes of cell-to-cell communication are not randomly distributed in layer IV of the barrel cortex. Consequently, this feature may provide the morphological support for the recently reported functional interactions between neuronal circuits and astroglial networks.
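The distance comparison reported above can be illustrated with a small nearest-neighbor computation; the coordinates below are synthetic 2D points standing in for positions digitized from electron-microscopic images, and the test choice (Mann-Whitney U) is an assumption, not necessarily the statistics used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
es = rng.uniform(0, 10, size=(60, 2))   # synthetic excitatory-synapse positions (um)
gj = rng.uniform(0, 10, size=(40, 2))   # synthetic gap-junction positions (um)

def nn_dist(a, b, same_set=False):
    """Distance from each point of a to its nearest neighbor in b."""
    d, _ = cKDTree(b).query(a, k=2 if same_set else 1)
    return d[:, 1] if same_set else d   # skip the self-match when a is b

d_es_gj = nn_dist(es, gj)               # each ES to its nearest GJ
d_es_es = nn_dist(es, es, same_set=True)
print(mannwhitneyu(d_es_gj, d_es_es, alternative="less"))
```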

Relevance: 100.00%

Abstract:

For decades, lung cancer has been the most common cancer in terms of both incidence and mortality, and its prognosis has improved very little. Early treatment following early diagnosis is considered a promising avenue for improvement. The National Lung Screening Trial (NLST), a large, well-designed randomized controlled trial, evaluated low-dose computed tomography (LDCT) as a screening tool for lung cancer. Compared with chest X-ray, annual LDCT screening reduced death from lung cancer and overall mortality by 20% and 6.7%, respectively, in high-risk people aged 55-74 years. Several smaller trials of LDCT screening are under way, but none are sufficiently powered to detect a 20% reduction in lung cancer death; thus, it is very unlikely that the NLST results will be replicated. In addition, the NLST raises several issues related to screening, such as the high false-positive rate, overdiagnosis and cost. Healthcare providers and systems are now left with the question of whether the available findings should be translated into practice. We present the main reasons for implementing lung cancer screening in high-risk adults and discuss the main issues related to such screening. We stress the importance of eligibility criteria, smoking cessation programs, primary care physicians, and informed decision-making should lung cancer screening be implemented. Seven years ago, we were waiting for the results of trials; such evidence is now available. As with almost all other cancer screens, uncertainties exist and persist even after recent scientific efforts and data. We believe that, by staying within the characteristics of the original trial and appropriately communicating the evidence as well as the uncertainties, it is reasonable to implement an LDCT lung cancer screening program for smokers and former smokers.

Relevance: 100.00%

Abstract:

Tuberculosis is a disease whose incidence has increased principally as a consequence of HIV infection and the use of immunosuppressive drugs. The abdomen is the most common site of extrapulmonary tuberculosis. It may be confused with several different conditions such as inflammatory bowel disease, cancer and other infectious diseases. Delay in diagnosis may result in significantly increased morbidity, and therefore early recognition of the condition is essential for proper treatment. In the present essay, cases with a confirmed diagnosis of abdominal tuberculosis were assessed by means of computed tomography and magnetic resonance imaging, demonstrating the involvement of different organs and systems, and presentations which frequently lead radiologists to a diagnostic dilemma. A brief literature review focuses on imaging findings and their respective prevalence.

Relevance: 100.00%

Abstract:

This master's thesis focuses on optimizing the parameters of a distribution transformer with respect to a low-voltage direct current (LVDC) distribution system. The transformer is one of the main components of an LVDC distribution system. It is studied from several viewpoints, such as its capability to filter the harmonics caused by the rectifier, its losses, and its short-circuit current limiting. Determining the available short-circuit currents is one of the most important aspects of designing power distribution systems: short circuits and their effects must be considered when selecting electrical equipment, circuit protection and other devices.
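The abstract leaves the short-circuit analysis at the level of motivation; as a minimal sketch of the standard per-unit estimate for the available fault current at the transformer terminals (stiff supply assumed, source impedance neglected), with hypothetical ratings:

```python
import math

S = 100e3      # hypothetical rated power (VA)
U = 400.0      # secondary line-to-line voltage (V)
z_pu = 0.04    # short-circuit impedance, 4 %

I_rated = S / (math.sqrt(3) * U)
I_fault = I_rated / z_pu   # available bolted-fault current at the terminals
print(f"rated current: {I_rated:7.1f} A")
print(f"fault current: {I_fault:7.1f} A")
```

The trade-off hinted at in the abstract follows directly: a larger short-circuit impedance limits the available fault current and helps attenuate rectifier harmonics, but increases the voltage drop and load losses.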

Relevance: 100.00%

Abstract:

Asian rust of soybean [Glycine max (L.) Merril] is one of the most important fungal diseases of this crop worldwide. The recent introduction of Phakopsora pachyrhizi Syd. & P. Syd. in the Americas represents a major threat to soybean production in the main growing regions, and significant losses have already been reported. P. pachyrhizi is extremely aggressive under favorable weather conditions, causing rapid plant defoliation. Epidemiological studies, under both controlled and natural environmental conditions, have been conducted for several decades with the aim of elucidating the factors that affect the disease cycle, as a basis for disease modeling. The recent spread of Asian soybean rust to major production regions of the world has promoted new development, testing and application of mathematical models to assess the risk of and predict the disease. These efforts have integrated new data, epidemiological knowledge, statistical methods, and advances in computer simulation to develop models and systems with different spatial and temporal scales, objectives and audiences. In this review, we present a comprehensive discussion of the models and systems that have been tested to predict and assess the risk of Asian soybean rust. Limitations, uncertainties and challenges for modelers are also discussed.
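None of the reviewed models is specified in the abstract; as the simplest building block of botanical epidemic modeling, here is the classic logistic disease-progress curve (severity y, apparent infection rate r) with invented parameter values. The published rust models and systems are far richer, e.g. weather-driven and spatially explicit.

```python
import numpy as np

def logistic_progress(y0, r, t):
    """Closed-form logistic disease-progress curve: dy/dt = r*y*(1-y)."""
    return 1.0 / (1.0 + (1.0 / y0 - 1.0) * np.exp(-r * t))

t = np.arange(0, 61, 10)                              # days after disease onset
severity = logistic_progress(y0=0.001, r=0.25, t=t)   # hypothetical y0 and r
for day, s in zip(t, severity):
    print(f"day {day:2d}: severity {100 * s:5.1f} %")
```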

Relevance: 100.00%

Abstract:

The focus of the present work was on 10- to 12-year-old elementary school students' conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine whether it would be more beneficial to combine simulation and laboratory activities in science teaching than to contrast them. It was argued that the status quo, in which laboratories and simulations are seen as alternative or competing methods in science teaching, is hardly an optimal solution for promoting students' learning and understanding in various science domains, and it was hypothesized that combining laboratories and simulations would make more sense and be more productive. Several explanations and examples were provided to back up this hypothesis. In order to test whether learning with the combination of laboratory and simulation activities results in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and a simulation-laboratory combination (real and virtual circuits used simultaneously). To measure and compare how these environments affected students' conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies: three focused on learning outcomes between the conditions and one on learning processes.

Study I analyzed learning outcomes from Experiment I. The aim of the study was to investigate whether it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention.

Study II analyzed learning outcomes from Experiment II. The aim of the study was to investigate if and how learning outcomes in the simulation and simulation-laboratory combination environments are mediated by implicit (procedural guidance only) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they gained a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE). A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment: in the combination environment, explicit instruction (CE) did not seem to elicit much additional gain in students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment.

Study III analyzed, from video data, the learning processes of the 50 students who participated in Experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they obtained from the other. However, there were also instances of analogical encoding, that is, situations in which slightly discrepant results between the representations 'forced' students to focus on those features that could be generalised across the two representations. No statistical differences were found in the number of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though for self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the number of self-explanations. Overall, the number of cognitive conflicts and self-explanations was very low.

The aim of Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of Experiments I and II; the secondary aim was to explore the relationship between the learning environments and students' prior domain knowledge (low and high) in the experiments. The aggregated results of Experiments I and II showed that, on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them also scored above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that, overall, students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge; that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, closer inspection of the results showed considerable differences between the experiments in how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination compared to students who used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between the simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge.

The results of the four empirical studies support the hypothesis concerning the benefits of using simulations along with laboratory activities to promote students' conceptual understanding of electricity. It can be concluded that, when teaching students about electricity, they gain a better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with explicit instruction. The outcomes of the empirical studies can be considered the first unambiguous evidence of the (additional) benefits of combining laboratory and simulation activities in science education compared to learning with laboratories or simulations alone.
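As an aside, the matched-group designs described above (trios in Experiment I, quartets in Experiment II) can be sketched in a few lines: rank students by pre-test score, cut the ranking into consecutive blocks, and randomize conditions within each block. Student names and scores here are invented.

```python
import random

random.seed(42)
scores = {f"s{i:02d}": random.randint(0, 30) for i in range(66)}  # pre-test scores
conditions = ["laboratory", "simulation", "combination"]

ranked = sorted(scores, key=scores.get)        # rank by pre-test result
assignment = {}
for i in range(0, len(ranked), len(conditions)):
    trio = ranked[i:i + len(conditions)]
    random.shuffle(trio)                       # randomize within each matched trio
    assignment.update(zip(trio, conditions))
print(assignment["s00"])
```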

Relevance: 100.00%

Abstract:

Cleaner technologies include products, services, technologies, processes and systems that in use create less environmental hazard than the existing alternatives. The rapidly growing cleantech sector possesses an essential competitive advantage for the future. However, little in-depth research has been conducted on the characteristics of cleaner technologies and their effect on the commercialization process. This thesis aims at synthesizing scattered information and creating a basis for accelerating cleaner technology commercialization in the Finnish context. Two research questions are defined: 1. What are the key challenges and success factors in the commercialization of cleaner technologies according to the existing literature? 2. What lessons can be learned from Finnish success stories of cleantech commercialization? The research was conducted as a literature review supported with three case interviews. The results suggest that the challenges identified in the literature relate mostly to, for example, difficulty in gathering customer information, unrealistic customer expectations, lack of resources, networks and proper success indicators, legislation, and unstructured strategy planning stemming from company culture. Handling these barriers requires, above all, open communication from all stakeholders, management commitment and accurate goal setting, government-driven funding and incentives, and cooperation with educational institutions. The Finnish success cases especially emphasize attention to the customer: listening to customers and collecting their feedback throughout the commercialization process in order to correct errors early and save resources, vision in fulfilling customer needs, the ability to question the company's own business performance, not being afraid of making mistakes but learning from them, and continuously observing and evaluating the commercialization process.

Relevance: 100.00%

Abstract:

A battery-driven Autonomous Mobile Robot with two traction wheels and a steering wheel is being developed. The robot's central control is handled by an IPC, which controls every function of safety, steering, positioning/localization and driving. Each traction wheel is operated by a DC motor with an independent control system made up of a chopper, an encoder and a microcomputer. The IPC transmits the velocity values and acceleration ramp references to the PIC microcontrollers. As the control of each traction wheel is independent, it is possible to obtain different speed values for each wheel, which facilitates direction and drive changes. Two different speed control strategies were implemented, one based on PID and the other on fuzzy logic, with no changes in circuits or feedback control other than the PIC microcontroller software. The two strategies produced equivalent results; however, the PID control proved more difficult to develop and implement.
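The abstract gives no controller details beyond "PID" and "fuzzy"; the following is a minimal discrete PID speed loop of the kind that could run per wheel. The actual implementation lived in the PIC microcontroller firmware, and all gains and values here are invented.

```python
class PID:
    """Discrete PID speed controller, one instance per traction wheel."""
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, u))  # clamp chopper command

# Independent controllers permit different wheel speeds for steering
left = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01, out_limit=1.0)
right = PID(kp=0.8, ki=2.0, kd=0.01, dt=0.01, out_limit=1.0)
duty_left = left.update(setpoint=1.2, measured=1.0)   # speeds in m/s from encoders
```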

Relevance: 100.00%

Abstract:

Concerns about the sustainability of large-scale, direct-drilled RR-soybeans (Glycine max) and RR-maize (Zea mays) under monoculture in central Argentina are growing steadily. An experiment was conducted during three consecutive years to determine the effects of crops and cropping systems (monocultures and strips) and of herbicide strategy on weed density, population rate of change (λ), weed community diversity (H′), crop yields and the Land Equivalent Ratio (LER). Not only the crops but also the cropping systems differentially influenced weed densities throughout crop growth and development. At crop harvest, weed densities had increased more in both maize cropping systems than in the soybean ones, with the lowest increase occurring in the soybean strips. These differences were leveled out by both herbicide strategies, which achieved 73% efficacy during the critical periods in both crops. The λ of annual monocotyledonous species increased, thus shifting the composition of the weed community. Species richness and H′ were not affected by the cropping systems, but both herbicide strategies, particularly POST, significantly enhanced H′, whether in soybeans in monoculture or in maize strips. Crop yields increased significantly in the maize-strip system with the POST (Year 1) or PRE (Years 2 and 3) strategies, thus raising the LER above 1. The herbicide environmental load of the treatments fell within the very low or low field-use ratings.
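For reference, the two summary indices used above are simple to compute; the sketch below shows the Land Equivalent Ratio (sum of intercrop-to-monoculture yield ratios) and Shannon diversity H′ on invented numbers.

```python
import math

# Hypothetical yields (t/ha): strip intercrop vs monoculture
maize_strip, maize_mono = 6.5, 7.8
soy_strip, soy_mono = 2.1, 2.6
LER = maize_strip / maize_mono + soy_strip / soy_mono
print(f"LER = {LER:.2f}   (> 1: strips use land more efficiently)")

# Shannon diversity H' from weed counts per species
counts = {"monocot A": 40, "dicot B": 25, "dicot C": 10}
total = sum(counts.values())
H = -sum(n / total * math.log(n / total) for n in counts.values())
print(f"H' = {H:.2f}")
```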

Relevance: 100.00%

Abstract:

Currently, one of the biggest challenges faced by organic no-tillage farming is weed control, so the use of cropping practices that help control weeds is extremely important. The objective of this study was to evaluate the population density and level of weed infestation in an organic no-tillage corn cropping system under different soil covers. The experiment was conducted in a randomized block design with six replications and five treatments, consisting of three soil covers in an organic no-tillage system, plus an organic and a conventional system, both without soil cover. The treatments with soil cover used a grass species, black oat; a leguminous species, white lupine; and an intercrop of the two species. Corn was sown at a spacing of 1.0 m between rows and 0.20 m between plants, using the commercial hybrid AG 1051. Weed infestation in corn was evaluated at stages V5 and V10, and weed density was evaluated at stage V5. The use of black oat straw, alone or intercropped with white lupine, in the organic no-tillage corn cropping system reduced the percentage of weed infestation and the absolute weed density. Management-intensive systems and systems without soil cover showed higher relative densities of the species Oxalis spp., Galinsoga quadriradiata and Stachys arvensis. The species Cyperus rotundus showed the highest relative density in the organic no-tillage corn cropping systems. Black oat straw in the organic no-tillage cropping system limited the productive potential of corn.