918 results for Open-loop speed control
Abstract:
In the plant-beneficial bacterium Pseudomonas fluorescens CHA0, the expression of antifungal exoproducts is controlled by the GacS/GacA two-component system. Two RNA binding proteins (RsmA, RsmE) ensure effective translational repression of exoproduct mRNAs. At high cell population densities, GacA induces three small RNAs (RsmX, RsmY, RsmZ) which sequester both RsmA and RsmE, thereby relieving translational repression. Here we systematically analyse the features that allow the RNA binding proteins to interact strongly with the 5' untranslated leader mRNA of the P. fluorescens hcnA gene (encoding hydrogen cyanide synthase subunit A). We obtained evidence for three major RsmA/RsmE recognition elements in the hcnA leader, based on directed mutagenesis, RsmE footprints and toeprints, and in vivo expression data. Two recognition elements were found in two stem-loop structures whose existence in the 5' leader region was confirmed by lead(II) cleavage analysis. The third recognition element, which overlapped the hcnA Shine-Dalgarno sequence, was postulated to adopt either an open conformation, which would favour ribosome binding, or a stem-loop structure, which may form upon interaction with RsmA/RsmE and would inhibit access of ribosomes. Effective control of hcnA expression by the Gac/Rsm system appears to result from the combination of the three appropriately spaced recognition elements.
Abstract:
The approval in 2004 of bevacizumab (Avastin), a neutralizing monoclonal antibody directed against vascular endothelial growth factor (VEGF), as the first anti-angiogenic systemic drug to treat cancer patients validated the notion, introduced 33 years earlier by Dr. Judah Folkman, that inhibition of tumor angiogenesis might be a valid approach to control tumor growth. Anti-angiogenic therapy was greeted in the clinic as a major step forward in cancer treatment, and this success has recently spurred the field to search for new anti-angiogenic targets and drugs. In spite of this success, however, some old questions in the field remain unanswered and new ones have emerged. They include the identification of surrogate markers of angiogenesis and anti-angiogenesis, an understanding of how anti-angiogenic therapy and chemotherapy synergize, the characterization of the biological consequences of sustained suppression of angiogenesis on tumor biology and normal tissue homeostasis, and the mechanisms of tumor escape from anti-angiogenesis. In this review we summarize some of these outstanding questions and highlight future challenges in clinical, translational and experimental research in anti-angiogenic therapy that need to be addressed in order to improve current treatments and to design new drugs.
Abstract:
Part 6 of the Manual on Uniform Traffic Control Devices (MUTCD) describes several types of channelizing devices that can be used to warn road users and guide them through work zones; these devices include cones, tubular markers, vertical panels, drums, barricades, and temporary raised islands. On higher speed/volume roadways, drums and/or vertical panels have been popular choices in many states, due to their formidable appearance and the enhanced visibility they provide when compared to standard cones. However, due to their larger size, drums also require more effort and storage space to transport, deploy and retrieve. Recent editions of the MUTCD have introduced new channelizing devices; of specific interest for this study is a taller (>36 inches) but thinner cone. While this new device does not offer a target value comparable to that of drums, it is significantly larger than standard cones and offers improved stability as well. In addition, these devices are more easily deployed and stored than drums, and they cost less. Further, for applications that previously used both drums and tall cones, using tall cones alone allows delivery and setup by a single vehicle. An investigation of the effectiveness of the new channelizing devices provides a reference for states to use in selecting appropriate traffic control for high speed, high volume applications, especially for short term or limited duration exposures. This study includes a synthesis of common practices by state DOTs, as well as daytime and nighttime field observations of driver reactions using video detection equipment. The results of this study are promising for the day and night performance of the new tall cones, which compared favorably to the performance of drums when used for channelizing in tapers. The evaluation showed no statistical difference in merge distance and location, shy distance, or operating speed in either daytime or nighttime conditions. The study should provide a valuable resource for state DOTs to utilize in selecting the most effective channelizing device for use on high speed/high volume roadways where timely merging by drivers is critical to safety and mobility.
Abstract:
Pavement and shoulder edge drop-offs commonly occur in work zones as the result of overlays, pavement replacement, or shoulder construction. The depth of these elevation differentials can vary from approximately one inch, when a flexible pavement overlay is applied, to several feet, where major reconstruction is undertaken. The potential hazards associated with pavement edge differentials depend on several factors, including the depth of the drop-off, the shape of the pavement edge, distance from the traveled way, vehicle speed, traffic mix, volume, and other factors. This research was undertaken to review current practices in other states for temporary traffic control strategies addressing lane edge differentials and to analyze crash data and resultant litigation related to edge drop-offs. An objective was to identify cost-effective practices that would minimize the potential for, and impacts of, edge drop crashes in work zones. Considerable variation in addressing temporary traffic control in work zones with edge drop-off exposure was found among the states surveyed. Crashes related to pavement edge drop-offs in work zones do not commonly occur in the state of Iowa, but some have resulted in significant tort claims and settlements. Benefit/cost analysis may provide guidance in selecting appropriate mitigation and protection for edge drop-off conditions. Development and adoption of guidelines for the design of appropriate traffic control for work zones that include edge drop-off exposure, particularly identifying effective use of temporary barrier rail, may be beneficial in Iowa.
Abstract:
The effect of motor training using closed-loop controlled Functional Electrical Stimulation (FES) on motor performance was studied in 5 spinal cord injured (SCI) volunteers. The subjects trained 2 to 3 times a week for 2 months on a newly developed rehabilitation robot (MotionMaker). The FES-induced muscle force could be adequately adjusted throughout the programmed exercises by means of closed-loop control of the stimulation currents. The software of the MotionMaker allowed spasms to be detected accurately and managed in a way that prevented any harm to the SCI persons. Subjects with incomplete SCI reported an increased proprioceptive awareness for motion and were able to achieve a better voluntary activation of their leg muscles during controlled FES. At the end of the training, the voluntary force of the 4 incomplete SCI patients was found to have increased by 388% on their most affected leg and by 193% on the other leg. Active mobilisation with controlled FES seems to be effective in improving motor function in SCI persons by increasing the sensory input to neuronal circuits involved in motor control as well as by increasing muscle strength.
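As a rough illustration of the closed-loop principle described in this abstract (not the MotionMaker's actual algorithm), the sketch below adjusts a stimulation current with a proportional-integral update so that a measured muscle force tracks a target, and stops stimulating when a sudden force spike suggests a spasm. The hardware hooks (read_force, set_stimulation_current) and all numeric gains are hypothetical placeholders.

```python
# Minimal sketch of closed-loop FES current control (illustrative only).
# read_force() and set_stimulation_current() are hypothetical hardware hooks.

DT = 0.01                  # control period [s]
KP, KI = 0.5, 2.0          # PI gains [mA/N, mA/(N*s)] -- assumed values
I_MIN, I_MAX = 0.0, 80.0   # stimulation current limits [mA]
SPASM_SLOPE = 200.0        # force rise rate treated as a spasm [N/s] -- assumed

def fes_control_step(target_force, state, read_force, set_stimulation_current):
    """One PI update of the stimulation current; returns the updated state."""
    force = read_force()
    # Crude spasm detection: an abnormally fast force rise stops stimulation.
    if (force - state["last_force"]) / DT > SPASM_SLOPE:
        set_stimulation_current(0.0)
        state["integral"] = 0.0
    else:
        error = target_force - force
        state["integral"] += error * DT
        current = KP * error + KI * state["integral"]
        current = max(I_MIN, min(I_MAX, current))   # saturate for safety
        set_stimulation_current(current)
    state["last_force"] = force
    return state
```

In use, the function would be called once per control period with a persistent state dictionary such as {"integral": 0.0, "last_force": 0.0}.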
Abstract:
Summary This dissertation explores how stakeholder dialogue influences corporate processes, and speculates about the potential of this phenomenon - particularly with actors, like non-governmental organizations (NGOs) and other representatives of civil society, which have received growing attention against a backdrop of increasing globalisation and which have often been cast in an adversarial light by firms - as a source of learning and a spark for innovation in the firm. The study is set within the context of the introduction of genetically-modified organisms (GMOs) in Europe. Its significance lies in the fact that scientific developments and new technologies are being generated at an unprecedented rate in an era where civil society is becoming more informed, more reflexive, and more active in facilitating or blocking such new developments, which could have the potential to trigger widespread changes in economies, attitudes, and lifestyles, and address global problems like poverty, hunger, climate change, and environmental degradation. In the 1990s, companies using biotechnology to develop and offer novel products began to experience increasing pressure from civil society to disclose information about the risks associated with the use of biotechnology and GMOs, in particular. Although no harmful effects for humans or the environment have been factually demonstrated even to date (2008), this technology remains highly-contested and its introduction in Europe catalysed major companies to invest significant financial and human resources in stakeholder dialogue. A relatively new phenomenon at the time, with little theoretical backing, dialogue was seen to reflect a move towards greater engagement with stakeholders, commonly defined as those "individuals or groups with which business interacts who have a 'stake', or vested interest in the firm" (Carroll, 1993:22) with whom firms are seen to be inextricably embedded (Andriof & Waddock, 2002). Regarding the organisation of this dissertation, Chapter 1 (Introduction) describes the context of the study and elaborates its significance for academics and business practitioners as an empirical work embedded in a sector at the heart of the debate on corporate social responsibility (CSR). Chapter 2 (Literature Review) traces the roots and evolution of CSR, drawing on Stakeholder Theory, Institutional Theory, Resource Dependence Theory, and Organisational Learning to establish what has already been developed in the literature regarding the stakeholder concept, motivations for engagement with stakeholders, the corporate response to external constituencies, and outcomes for the firm in terms of organisational learning and change. I used this review of the literature to guide my inquiry and to develop the key constructs through which I viewed the empirical data that was gathered. In this respect, concepts related to how the firm views itself (as a victim, follower, leader), how stakeholders are viewed (as a source of pressure and/or threat; as an asset: current and future), corporate responses (in the form of buffering, bridging, boundary redefinition), and types of organisational learning (single-loop, double-loop, triple-loop) and change (first order, second order, third order) were particularly important in building the key constructs of the conceptual model that emerged from the analysis of the data.
Chapter 3 (Methodology) describes the methodology that was used to conduct the study, affirms the appropriateness of the case study method in addressing the research question, and describes the procedures for collecting and analysing the data. Data collection took place in two phases - extending from August 1999 to October 2000, and from May to December 2001 - which functioned as 'snapshots' in time of the three companies under study. The data was systematically analysed and coded using ATLAS/ti, a qualitative data analysis tool, which enabled me to sort, organise, and reduce the data into a manageable form. Chapter 4 (Data Analysis) contains the three cases that were developed (anonymised as Pioneer, Helvetica, and Viking). Each case is presented in its entirety (constituting a 'within-case' analysis), followed by a 'cross-case' analysis, backed up by extensive verbatim evidence. Chapter 5 presents the research findings, outlines the study's limitations, describes managerial implications, and offers suggestions for where more research could elaborate the conceptual model developed through this study, as well as suggestions for additional research in areas where managerial implications were outlined. References and Appendices are included at the end. This dissertation results in the construction and description of a conceptual model, grounded in the empirical data and tied to existing literature, which portrays a set of elements and relationships deemed important for understanding the impact of stakeholder engagement for firms in terms of organisational learning and change. This model suggests that corporate perceptions about the nature of stakeholders influence the perceived value of stakeholder contributions. When stakeholders are primarily viewed as a source of pressure or threat, firms tend to adopt a reactive/defensive posture in an effort to manage stakeholders and protect the firm from sources of outside pressure - behaviour consistent with Resource Dependence Theory, which suggests that firms try to get control over external threats by focussing on the relevant stakeholders on whom they depend for critical resources, and try to reverse the control potentially exerted by external constituencies by trying to influence and manipulate these valuable stakeholders. In situations where stakeholders are viewed as a current strategic asset, firms tend to adopt a proactive/offensive posture in an effort to tap stakeholder contributions and connect the organisation to its environment - behaviour consistent with Institutional Theory, which suggests that firms try to ensure the continuing license to operate by internalising external expectations. In instances where stakeholders are viewed as a source of future value, firms tend to adopt an interactive/innovative posture in an effort to reduce or widen the embedded system and bring stakeholders into systems of innovation and feedback - behaviour consistent with the literature on Organisational Learning, which suggests that firms can learn how to optimize their performance as they develop systems and structures that are more adaptable and responsive to change. The conceptual model moreover suggests that the perceived value of stakeholder contribution drives corporate aims for engagement, which can be usefully categorised as dialogue intentions spanning a continuum running from low-level to high-level to very-high-level.
This study suggests that activities aimed at disarming critical stakeholders ('manipulation'), providing guidance and correcting misinformation ('education'), being transparent about corporate activities and policies ('information'), alleviating stakeholder concerns ('placation'), and accessing stakeholder opinion ('consultation') represent low-level dialogue intentions and are experienced by stakeholders as asymmetrical, persuasive, compliance-gaining activities that are not in line with 'true' dialogue. This study also finds evidence that activities aimed at redistributing power ('partnership'), involving stakeholders in internal corporate processes ('participation'), and demonstrating corporate responsibility ('stewardship') reflect high-level dialogue intentions. This study additionally finds evidence that building and sustaining high-quality, trusted relationships which can meaningfully influence organisational policies inclines a firm towards the type of interactive, proactive processes that underpin the development of sustainable corporate strategies. Dialogue intentions are related to the type of corporate response: low-level intentions can lead to buffering strategies; high-level intentions can underpin bridging strategies; very high-level intentions can incline a firm towards boundary redefinition. The nature of the corporate response (which encapsulates a firm's posture towards stakeholders, demonstrated by the level of dialogue intention and the firm's strategy for dealing with stakeholders) favours the type of learning and change experienced by the organisation. This study indicates that buffering strategies, where the firm attempts to protect itself against external influences and carry out its existing strategy, typically lead to single-loop learning, whereby the firm learns how to perform better within its existing paradigm and, at most, improves the performance of the established system - an outcome associated with first-order change. Bridging responses, where the firm adapts organisational activities to meet external expectations, typically lead a firm to acquire new behavioural capacities characteristic of double-loop learning, whereby insights and understanding are uncovered that are fundamentally different from existing knowledge and where stakeholders are brought into problem-solving conversations that enable them to influence corporate decision-making to address shortcomings in the system - an outcome associated with second-order change. Boundary redefinition suggests that the firm engages in triple-loop learning, where the firm changes relations with stakeholders in profound ways, considers problems from a whole-system perspective, examining the deep structures that sustain the system, producing innovation to address chronic problems and develop new opportunities - an outcome associated with third-order change. This study supports earlier theoretical and empirical studies (e.g. Weick's (1979, 1985) work on self-enactment; Maitlis & Lawrence's (2007), Maitlis' (2005), and Weick et al.'s (2005) work on sensegiving and sensemaking in organisations; Brickson's (2005, 2007) and Scott & Lane's (2000) work on organisational identity orientation), which indicate that corporate self-perception is a key underlying factor driving the dynamics of organisational learning and change.
Such theorizing has important implications for managerial practice; namely, that a company which perceives itself as a 'victim' may be highly inclined to view stakeholders as a source of negative influence, and would therefore be potentially unable to benefit from the positive influence of engagement. Such a self-perception can blind the firm from seeing stakeholders in a more positive, contributing light, which suggests that such firms may not be inclined to embrace external sources of innovation and learning, as they are focussed on protecting the firm against disturbing environmental influences (through buffering), and remain more likely to perform better within an existing paradigm (single-loop learning). By contrast, a company that perceives itself as a 'leader' may be highly inclined to view stakeholders as a source of positive influence. On the downside, such a firm might have difficulty distinguishing when stakeholder contributions are less pertinent, as it is deliberately more open to elements in its operating environment (including stakeholders) as potential sources of learning and change, and is oriented towards creating space for fundamental change (through boundary redefinition), opening issues to entirely new ways of thinking and addressing issues from a whole-system perspective. A significant implication of this study is that potentially only those companies that see themselves as leaders are ultimately able to tap the innovation potential of stakeholder dialogue.
Abstract:
Free and Open Source Software (FOSS) may seem far from the military field, but in some cases technologies normally used for civilian purposes may also have military applications. These products and technologies are called dual-use. Can FOSS and dual-use products be combined? On the one hand, we have to admit that this kind of association exists - dual-use software can be FOSS, and many examples demonstrate this duality - but on the other hand, dual-use software available under free licenses raises many questions. For example, dual-use export control laws aim at stemming the proliferation of weapons of mass destruction. Dual-use export control in the United States (ITAR) and Europe (Regulation 428/2009) implies, as a consequence, the prohibition or regulation of software exportation, which can involve closing the source code. The issue of exported software released under free licenses therefore arises. If software is a dual-use good and serves military purposes, it may represent a danger. Through the rights granted by free licenses to run, study, redistribute and distribute modified versions of the software, anyone can access free dual-use software. So the licenses themselves are not the origin of the risk; the risk is actually linked to the facilitated access to the source code. Seen from this point of view, this goes against the dual-use regulations, which allow states to control the exportation of these technologies. In this analysis, we discuss various legal questions and draft answers drawn from either the licenses or public policies in this respect.
Abstract:
The aim of this work was to study the use of the HSR (High Speed Release) measurement method in the control of the label laminate process at UPM Raflatac. Compared with the currently used LSR (Low Speed Release) method, the HSR method better describes the release work that occurs in the converting and labelling of the label laminate. In addition, the effect of the release speed on the HSR value was studied. The literature part of the work examines the structure and manufacturing process of the label laminate. Because the choice of silicone has a significant effect on the release value of the label laminate, the literature part takes a closer look at the silicones used in label laminate manufacturing and their structure. Other factors affecting the release level are also discussed in the literature part. The purpose of the experimental part was to study the usability of the HSR measurement method in label laminate process control. This was investigated by determining the correlation between the currently used LSR method and the HSR method. If there were a correlation between them, a transition to HSR measurement in process control could be considered. However, no correlation was found between the two methods. The effect of the release speed on the HSR value of the product was also studied. Several different products from several production plants were selected for the tests. For all of these products, the release value increased as the release speed was increased. In addition, new HSR specifications were determined for certain products. LSR specifications have been defined for all UPM Raflatac products, whereas HSR specifications have been set only for certain products.
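To make the correlation check between the two measurement methods concrete, here is a minimal sketch computing the Pearson correlation coefficient for paired LSR/HSR values; the numbers are made-up placeholders, not the thesis' data.

```python
# Pearson correlation between paired LSR and HSR release measurements.
# The values below are made-up placeholders, not UPM Raflatac data.
import numpy as np

lsr = np.array([3.1, 4.2, 2.8, 5.0, 3.9, 4.5])      # low-speed release values
hsr = np.array([12.0, 9.5, 14.2, 10.1, 13.3, 8.8])  # high-speed release values

r = np.corrcoef(lsr, hsr)[0, 1]
print(f"Pearson r = {r:.2f}")
# A strong correlation (|r| close to 1) would support steering the process
# with HSR targets derived from the existing LSR specifications; the thesis
# reports that no such correlation was found.
```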
Abstract:
Studies aiming at the elucidation of the genetic basis of rare monogenic forms of hypertension have identified mutations in genes coding for the epithelial sodium channel ENaC, for the mineralocorticoid receptor, or for enzymes crucial for the synthesis of aldosterone. These genetic studies clearly demonstrate the importance of the regulation of Na(+) absorption in the aldosterone-sensitive distal nephron (ASDN) for the maintenance of the extracellular fluid volume and blood pressure. Recent studies aiming at a better understanding of the cellular and molecular basis of ENaC-mediated Na(+) absorption in the distal part of the nephron have essentially focused on the regulation of ENaC activity and on the aldosterone-signaling cascade. ENaC is a constitutively open channel, and factors controlling the number of active channels at the cell surface are likely to have profound effects on Na(+) absorption in the ASDN and on the amount of Na(+) that is excreted in the final urine. A number of membrane-bound proteases and kinases have recently been identified that increase ENaC activity at the cell surface in heterologous expression systems. Ubiquitylation is a general process that regulates the stability of a variety of target proteins, including ENaC. Recently, deubiquitylating enzymes have been shown to increase ENaC activity in heterologous expression systems. These regulatory mechanisms are likely to be nephron specific, since in vivo studies indicate that the adaptation of the renal excretion of Na(+) in response to Na(+) diet occurs predominantly in the early part (the connecting tubule) of the ASDN. Important work is presently being done to determine in vivo the physiological relevance of these cellular and molecular mechanisms in the regulation of ENaC activity. The contribution of the protease-dependent ENaC regulation in mediating Na(+) absorption in the ASDN is still not clearly understood. The signaling pathway that involves ubiquitylation of ENaC does not seem to be absolutely required for the aldosterone-mediated control of ENaC. These in vivo physiological studies presently constitute a major challenge for our understanding of the regulation of ENaC to maintain the Na(+) balance.
Abstract:
Arabidopsis thaliana plants fend off insect attack by constitutive and inducible production of toxic metabolites, such as glucosinolates (GSs). A triple mutant lacking MYC2, MYC3, and MYC4, three basic helix-loop-helix transcription factors that are known to additively control jasmonate-related defense responses, was shown to have a highly reduced expression of GS biosynthesis genes. The myc2 myc3 myc4 (myc234) triple mutant was almost completely devoid of GS and was extremely susceptible to the generalist herbivore Spodoptera littoralis. On the contrary, the specialist Pieris brassicae was unaffected by the presence of GS and preferred to feed on wild-type plants. In addition, lack of GS in myc234 drastically modified S. littoralis feeding behavior. Surprisingly, the expression of MYB factors known to regulate GS biosynthesis genes was not altered in myc234, suggesting that MYC2/MYC3/MYC4 are necessary for direct transcriptional activation of GS biosynthesis genes. To support this, chromatin immunoprecipitation analysis showed that MYC2 binds directly to the promoter of several GS biosynthesis genes in vivo. Furthermore, yeast two-hybrid and pull-down experiments indicated that MYC2/MYC3/MYC4 interact directly with GS-related MYBs. This specific MYC-MYB interaction plays a crucial role in the regulation of defense secondary metabolite production and underlines the importance of GS in shaping plant interactions with adapted and nonadapted herbivores.
Abstract:
The present study was done with two different servo systems. In the first system, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second servo system, an electromagnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The following are brief descriptions of the research methods. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. Differential Evolution (DE) is one of the most promising novel evolutionary algorithms for solving global optimization problems. In the study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits of variables to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial conditions of the parameters. The control of hydraulic servo systems has been the focus of intense research over the past decades. These kinds of systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order model reference for positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, to compensate for the lack of damping in a hydraulic system, acceleration feedback was used. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction is used. The design procedure for the controller and experimental results are discussed. The results suggest that using the fuzzy gain-scheduling controller decreases the error of position reference tracking. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed by using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented in a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track a flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed in the controller are estimated by using the Kalman filter.
The proposed controller is implemented and tested in a linear motor test drive and responses are presented.
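As a sketch of the identification idea mentioned in this abstract (Differential Evolution fitting unknown servo-model parameters to measured data), the snippet below uses SciPy's differential_evolution to fit a hypothetical second-order model to a recorded step response. The model structure, parameter bounds, and data are illustrative assumptions, not the hydraulic model or parameters used in the thesis.

```python
# Illustrative parameter identification with Differential Evolution.
# The second-order model, bounds, and "measured" data are assumptions.
import numpy as np
from scipy.optimize import differential_evolution

DT = 0.001
T = np.arange(0.0, 2.0, DT)

def simulate(params, t):
    """Step response of x'' = K*wn^2*u - 2*zeta*wn*x' - wn^2*x with u = 1."""
    K, wn, zeta = params
    x, v = 0.0, 0.0
    out = np.empty_like(t)
    for k in range(len(t)):
        a = K * wn**2 * 1.0 - 2.0 * zeta * wn * v - wn**2 * x
        v += a * DT
        x += v * DT
        out[k] = x
    return out

# Stand-in for a measured step response (synthetic data with noise).
true_params = (1.2, 8.0, 0.3)
measured = simulate(true_params, T) + 0.005 * np.random.randn(len(T))

def cost(params):
    """Sum of squared output errors between model and measurement."""
    return float(np.sum((simulate(params, T) - measured) ** 2))

bounds = [(0.1, 5.0),    # gain K
          (1.0, 50.0),   # natural frequency wn [rad/s]
          (0.05, 1.0)]   # damping ratio zeta

result = differential_evolution(cost, bounds, seed=1, tol=1e-8)
print("identified K, wn, zeta:", result.x)
```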
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of an earlier presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements into a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant; achieving the minimum current requires control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented. Also, the control of the reference above the base speed is considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
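To make the flux-estimation step concrete, the sketch below shows the common voltage-model idea this abstract refers to: integrating the stator voltage minus the resistive drop, with the pure integrator replaced by a first-order low-pass (leaky) integrator to limit drift. The machine constants and the cut-off frequency are illustrative assumptions, not the thesis' values.

```python
# Voltage-model stator flux estimation with a low-pass-filtered integrator.
# Machine constants and the cut-off frequency below are assumed values.

R_S = 0.05      # stator resistance [ohm]
W_C = 5.0       # leaky-integrator cut-off frequency [rad/s]
DT = 100e-6     # sampling period [s]

def flux_step(psi_alpha, psi_beta, u_alpha, u_beta, i_alpha, i_beta):
    """One update of the stator flux linkage estimate in stator coordinates.

    Implements d(psi)/dt = u - R_s*i - w_c*psi, i.e. a leaky integrator that
    behaves like a pure integrator well above w_c but does not drift at DC.
    """
    dpsi_a = u_alpha - R_S * i_alpha - W_C * psi_alpha
    dpsi_b = u_beta - R_S * i_beta - W_C * psi_beta
    return psi_alpha + dpsi_a * DT, psi_beta + dpsi_b * DT

def torque_estimate(psi_alpha, psi_beta, i_alpha, i_beta, pole_pairs=3):
    """Torque from the flux and current: T = 3/2 * p * (psi x i)."""
    return 1.5 * pole_pairs * (psi_alpha * i_beta - psi_beta * i_alpha)
```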
Abstract:
1. Introduction "The one that has compiled ... a database, the collection, securing the validity or presentation of which has required an essential investment, has the sole right to control the content over the whole work or over either a qualitatively or quantitatively substantial part of the work both by means of reproduction and by making them available to the public", Finnish Copyright Act, section 49.1. These are the laconic words that implemented the much-awaited and hotly debated European Community Directive on the legal protection of databases, the EDD, into Finnish copyright legislation in 1998. Now, in the year 2005, more than half a decade after the domestic implementation, the proper meaning and construction of the convoluted qualitative criteria that the current legislation employs as a prerequisite for database protection remain uncertain, both in Finland and within the European Union. Further, this opaque Pan-European instrument has the potential of bringing about a number of far-reaching economic and cultural ramifications, which have remained largely uncharted or unobserved. Thus the task of understanding this particular and currently peculiarly European new intellectual property regime is twofold: first, to understand the mechanics and functioning of the EDD and second, to realise the potential and risks inherent in the new legislation in economic, cultural and societal dimensions. 2. Subject-matter of the study: basic issues The first part of the task mentioned above is straightforward: questions such as what is meant by the key concepts triggering the functioning of the EDD, such as the presentation of independent information, what constitutes an essential investment in acquiring data, and when the reproduction of a given database reaches either qualitatively or quantitatively the threshold of substantiality before the right-holder of a database can avail himself of the remedies provided by the statutory framework, remain unclear and call for a careful analysis. As for the second task, it is already obvious that the practical importance of the legal protection provided by the database right is rapidly increasing. The accelerating transformation of information into digital form is an existing fact, not merely a reflection of the shape of things to come in the future. To take a simple example, the digitisation of a map, traditionally in paper format and protected by copyright, can provide the consumer with markedly easier and faster access to the wanted material, and the price can be, depending on the current state of the marketplace, cheaper than that of the traditional form, or even free by means of public lending libraries providing access to the information online. This also renders it possible for authors and publishers to make available and sell their products to markedly larger, international markets, while the production and distribution costs can be kept at a minimum due to the new electronic production, marketing and distribution mechanisms, to mention a few. The troublesome side for authors and publishers is the vastly enhanced potential for illegal copying by electronic means, producing numerous virtually identical copies at speed.
The fear of illegal copying can lead to stark technical protection that in turn can dampen the demand for information goods and services and, furthermore, efficiently hamper the right of access to materials lawfully available in electronic form, and thus weaken the possibility of access to information, education and the cultural heritage of a nation or nations, a condition precedent for a functioning democracy. 3. Particular issues in Digital Economy and Information Networks All that is said above applies a fortiori to databases. As a result of the ubiquity of the Internet and the pending breakthrough of the Mobile Internet, peer-to-peer networks, and Local and Wide Local Area Networks, a rapidly increasing amount of information not protected by traditional copyright, such as various lists, catalogues and tables, previously protected partially by the old section 49 of the Finnish Copyright Act, is available free or for consideration on the Internet; by the same token, importantly, numerous databases are collected in order to enable the marketing, tendering and selling of products and services in the above-mentioned networks. Databases and the information embedded therein constitute a pivotal element in virtually any commercial operation, including product and service development, scientific research and education. A poignant but not immediately obvious example of this is a database consisting of the physical coordinates of a selected group of customers, used for marketing purposes through cellular phones, laptops and various handheld or vehicle-based devices connected online. These practical needs call for answers to a plethora of questions already outlined above: Has the collection and securing of the validity of this information required an essential investment? What qualifies as a quantitatively or qualitatively significant investment? According to the Directive, a database comprises works, information and other independent materials which are arranged in a systematic or methodical way and are individually accessible by electronic or other means. Under what circumstances, then, are the materials regarded as arranged in a systematic or methodical way? Only when the protected elements of a database are established does the question concerning the scope of protection become acute. In the digital context, the traditional notions of reproduction and making digital materials available to the public seem to fit ill, or lead to interpretations that are at variance with the analogue domain as regards the lawful and illegal uses of information. This may well interfere with or rework the way in which the commercial and other operators have to establish themselves and function in the existing value networks of information products and services. 4. International sphere After the expiry of the implementation period for the European Community Directive on the legal protection of databases, the goals of the Directive must have been consolidated into the domestic legislations of the current twenty-five Member States of the European Union. On the one hand, these fundamental questions readily imply that the problems related to the correct construction of the Directive underlying the domestic legislation transcend national boundaries. On the other hand, the disputes arising on account of the implementation and interpretation of the Directive on the European level attract significance domestically.
Consequently, guidelines on the correct interpretation of the Directive, importing practical, business-oriented solutions, may well have application at the European level. This underlines the exigency of a thorough analysis of the implications of the meaning and potential scope of database protection in Finland and the European Union. This position has to be contrasted with the larger, international sphere, which in early 2005 differs markedly from the European Union stance, having a direct negative effect on international trade, particularly in digital content. A particular case in point is the USA, a database producer primus inter pares, which does not, at least yet, have a Sui Generis database regime or its kin, while both political and academic discourse on the matter abounds. 5. The objectives of the study The above-mentioned background, with its several open issues, calls for a detailed study of the following questions: - What is a database-at-law, and when is a database protected by intellectual property rights, particularly by the European database regime? What is the international situation? - How is a database protected, and what is its relation to other intellectual property regimes, particularly in the digital context? - What are the opportunities and threats provided by the current protection to creators, users and society as a whole, including the commercial and cultural implications? - The difficult question of the relation between database protection and the protection of factual information as such. 6. Disposition The study, in purporting to analyse and cast light on the questions above, is divided into three main parts. The first part has the purpose of introducing the political and rational background and the subsequent legislative evolution of European database protection, reflected against the international backdrop on the issue. An introduction to databases, originally a vehicle of modern computing and information and communication technology, is also incorporated. The second part sets out the chosen and existing two-tier model of database protection, reviewing both its copyright and Sui Generis right facets in detail, together with the emergent application of the machinery in real-life societal and, particularly, commercial contexts. Furthermore, a general outline of copyright, relevant in the context of copyright databases, is provided. For purposes of further comparison, a chapter on the precursor of the Sui Generis database right, the Nordic catalogue rule, also ensues. The third and final part analyses the positive and negative impact of the database protection system and attempts to scrutinize the implications further in the future with some caveats and tentative recommendations, in particular as regards the convoluted issue concerning the IPR protection of information per se, a new tenet in the domain of copyright and related rights.
Abstract:
High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the field of microprocessors and power electronics have enabled faster and faster movements with an electric motor. In such a dynamically demanding application, the dimensioning of the motor differs substantially from industrial motor design, where the desired characteristics of the motor are, for example, high efficiency, a high power factor, and a low price. In motion control, instead, characteristics such as high overloading capability, high-speed operation, high torque density and low inertia are required. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined: an induction motor and a permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high performance motor are studied, and some methods for improvement are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results in laboratory conditions. Also, a dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor electromagnetic characteristics due to the temperature rise.
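As a rough illustration of what a thermal simulation model of this kind can look like (not the model developed in the thesis), the sketch below integrates a single-node lumped thermal network in which the winding losses grow with temperature through the copper resistivity; all constants are assumed placeholder values, not those of the 4 kW prototype.

```python
# Single-node lumped-parameter thermal model of a motor winding (illustrative).
# All numeric constants are assumed values, not those of the 4 kW prototype.
import numpy as np

R_TH = 0.08        # thermal resistance winding-to-ambient [K/W]
C_TH = 4000.0      # thermal capacitance [J/K]
ALPHA_CU = 0.0039  # temperature coefficient of copper resistance [1/K]
T_AMB = 40.0       # ambient temperature [degC]
P_CU_20 = 300.0    # copper losses at the 20 degC reference [W]
P_FE = 80.0        # iron and mechanical losses, assumed constant [W]
DT = 1.0           # time step [s]

def simulate_temperature(duration_s, t_start=T_AMB):
    """Euler integration of C_th * dT/dt = P_loss(T) - (T - T_amb) / R_th."""
    temps = [t_start]
    for _ in range(int(duration_s / DT)):
        t = temps[-1]
        p_cu = P_CU_20 * (1.0 + ALPHA_CU * (t - 20.0))  # losses rise with T
        dT = (p_cu + P_FE - (t - T_AMB) / R_TH) / C_TH
        temps.append(t + dT * DT)
    return np.array(temps)

if __name__ == "__main__":
    temps = simulate_temperature(3600.0)
    print(f"winding temperature after 1 h: {temps[-1]:.1f} degC")
```

Coupling such a model to an electromagnetic motor model, as the abstract describes, would amount to feeding the temperature-dependent resistances back into the loss calculation at each step.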
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined. It is shown how different current measurement error types affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model to analyse the effect of the frequency converter non-idealities on the performance of electric drives is created. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of the speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is applicable also to speed sensorless drives. The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
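The sketch below illustrates the general idea as read from this abstract, not the thesis' actual algorithm: current-sensor offset errors are known to show up as speed/torque ripple at the fundamental electrical frequency, so the sketch estimates that component's amplitude with a single-bin DFT and iteratively adjusts an offset-compensation term to shrink it. The hardware hooks (get_speed_samples, set_offset_compensation) and the greedy update rule are assumptions introduced for illustration.

```python
# Sketch: reduce speed ripple caused by a current-sensor offset error by
# estimating the ripple amplitude at the electrical fundamental frequency
# and adjusting an offset-compensation term to minimise it.
# get_speed_samples() and set_offset_compensation() are hypothetical hooks.
import numpy as np

def harmonic_amplitude(samples, fs, f_target):
    """Amplitude of one frequency component via a single-bin DFT."""
    n = np.arange(len(samples))
    phasor = np.exp(-2j * np.pi * f_target * n / fs)
    return 2.0 * abs(np.dot(samples, phasor)) / len(samples)

def tune_offset_compensation(get_speed_samples, set_offset_compensation,
                             f_electrical, fs, step=0.01, iterations=20):
    """Greedy search: keep an offset-compensation change only if the
    amplitude of the selected speed harmonic decreases."""
    offset = 0.0
    best = harmonic_amplitude(get_speed_samples(), fs, f_electrical)
    for _ in range(iterations):
        for delta in (+step, -step):
            set_offset_compensation(offset + delta)
            amp = harmonic_amplitude(get_speed_samples(), fs, f_electrical)
            if amp < best:
                best, offset = amp, offset + delta
                break
        else:
            set_offset_compensation(offset)  # restore the best value found
            step *= 0.5                      # shrink step when nothing helps
    set_offset_compensation(offset)
    return offset, best
```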