996 results for Large Amplitude Motions
Abstract:
This paper presents a study of the dynamics of the rattling problem in gearboxes under non-ideal excitation. The subject has been analyzed by a number of authors, such as Karagiannis and Pfeiffer (1991), for the ideal excitation case. An interesting model of the same problem by Moon (1992) has recently been used by Souza and Caldas (1999) to detect chaotic behavior. We consider two spur gears with different diameters and gaps between the teeth. The motion of one gear is assumed to be given, while the motion of the other is governed by its own dynamics. In the ideal case, the driving wheel is supposed to undergo a sinusoidal motion with given constant amplitude and frequency. In this paper, we instead take the motion to be a function of the system response and adopt a limited energy source; an extra degree of freedom is thus introduced into the problem. The equations of motion are obtained via a Lagrangian approach with some assumed characteristic torque curves. Extensive numerical integration is then used to detect interesting geometrical aspects of the regular and irregular motions of the system response.
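The non-ideal coupling described above (drive motion depending on the system response through a limited power source) can be illustrated with a minimal sketch: a driven gear with backlash, a driving wheel governed by an assumed linear torque curve, and fixed-step RK4 integration. All parameter values here are hypothetical and purely illustrative, not taken from the paper.

```python
# Minimal non-ideal gear-rattle sketch: piecewise-linear backlash contact,
# linear motor torque curve M(w) = A - B*w (limited energy source).
# All parameters are hypothetical, for illustration only.
I1, I2 = 1.0, 0.5        # inertias of driving and driven wheels
R1, R2 = 1.0, 1.0        # pitch radii
GAP = 0.1                # total backlash gap between the teeth
K, C = 500.0, 0.05       # contact stiffness, viscous damping on driven wheel
A, B = 2.0, 1.0          # assumed characteristic torque curve coefficients

def contact_force(s):
    """Piecewise-linear backlash: no force while the teeth are inside the gap."""
    if s > GAP / 2:
        return K * (s - GAP / 2)
    if s < -GAP / 2:
        return K * (s + GAP / 2)
    return 0.0

def deriv(y):
    phi, w1, theta, w2 = y                    # driving angle/speed, driven angle/speed
    f = contact_force(R1 * phi - R2 * theta)  # tooth contact force
    return (w1,
            (A - B * w1 - R1 * f) / I1,       # drive speed feeds back on the torque
            w2,
            (R2 * f - C * w2) / I2)

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
    k3 = deriv(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
    k4 = deriv(tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(y, k1, k2, k3, k4))

y = (0.0, 0.0, 0.0, 0.0)
for _ in range(20000):          # 20 s of simulated time at h = 1 ms
    y = rk4_step(y, 1e-3)
print(y[1])                     # drive speed settles where motor torque balances the load
```

With these illustrative values the steady drive speed is set by the torque balance A = (B + C)·w, i.e. w ≈ A/(B + C); rattling regimes appear when the parameters put the teeth repeatedly in and out of contact.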
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables; it is a fairly recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. ADUDT uses narrow voltage pulses, with durations on the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has longer slope transition times at the rising and falling edges, potentially with no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable, and lower du/dt values reduce the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead time-induced zero-current clamping (ZCC) effect in the pulse pattern.
It gives more flexibility to the pattern structure, which could help in the design of timing deviation compensation. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, on the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for phase leg blanking times, giving flexibility to the pulse pattern structure and dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach using a fixed delay compensation value was tried in the test setup measurements. The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable output voltage performance and du/dt reduction that are free from residual overshoot effects.
The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage) and as such cannot remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned into the control pattern, the precalculated iteration method could prove simpler and equally good, or even better, compared with the mathematical solution with a separate timing compensation module. One of the key findings of this thesis is that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations: the usefulness of a correctly calculated pattern is reduced by voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously, and the effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis.
The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite core and air core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage down from 980 V (the previous control principle) to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111–126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter. The filter mass with the active du/dt method (2.4 kg, air-core inductors) was 17% of the 14 kg mass of the passive du/dt method filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
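The headline figures reported above are easy to sanity-check against each other. A short sketch recomputing the ratios, using only the numbers quoted in the abstract:

```python
# Values quoted in the abstract
dc_link = 540.0                        # average DC link voltage, V
peak_old, peak_new = 980.0, 680.0      # filter output peak voltage, old vs new control

# Per-unit peak relative to the DC link voltage
print(round(peak_old / dc_link, 2))    # 1.81: close to the "double the input" worst case
print(round(peak_new / dc_link, 2))    # 1.26 with the new control principle

# Filter mass: active du/dt (air-core) as a fraction of the passive du/dt filter
print(round(2.4 / 14 * 100))           # 17 (%), as stated in the abstract
```

The 980 V peak on a 540 V link corresponds almost exactly to the factor-of-two overshoot identified as the uncompensated worst case.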
Abstract:
To evaluate the effect of no-till and conventional planting systems and of weed management strategies on soil temperature variation and sweet pepper yield, an experiment was carried out at the Universidade Federal do Semiárido, in Mossoró-RN, in a split-plot scheme arranged in a randomized block design with four replications. The no-till and conventional planting systems for sweet pepper were evaluated in the plots; in the subplots, three weed management strategies were evaluated (soil cover with black polyethylene film, with weeding, and without weeding). Thermocouple sensors were installed in each subplot at a depth of 5 cm to measure soil temperature. From the data obtained, the temperature variation over the course of the day was determined for the period from 20 to 30 days after transplanting the sweet pepper seedlings, along with, every five days, the mean maximum and minimum temperatures and the daily thermal amplitude. At 60 and 147 days after transplanting, weed density and dry mass were evaluated in the treatments without weeding. The polyethylene film and weeded treatments in the conventional planting system showed increases in the maximum daily soil temperature of 6.7 and 5.0 °C, respectively, relative to the no-till system with weeding. The thermal amplitude in the conventional planting system in the treatments with regular weeding and with polyethylene film was 11 °C, whereas in the no-till system the amplitude was 6.3, 4.5 and 4.0 °C in the treatments with weeding, without weeding and with polyethylene film, respectively. Weed interference in the treatments without weeding reduced marketable yield by 94.95 and 92.10% in the no-till and conventional planting systems, respectively.
Abstract:
Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, may not be a simple endeavor. This thesis concentrates on studying the challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors behind such emerging challenges. The topic is approached through the concepts of agility and of different methods compared to traditional plan-driven processes, complex adaptive systems theory, and the impact of organizational culture on agile transformation efforts. The empirical part was conducted as a qualitative case study. The internationally operating software development case organization had a year of experience of an agile transformation effort, during which it had also undergone organizational realignment efforts. The primary data collection was conducted through semi-structured interviews supported by participatory observation. As a result, the identified challenges were categorized under four broad themes: organizational, management, team dynamics, and process related. The identified challenges indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually help solve the issues: interactions are more important than processes, and solving a complex problem, such as novel software development, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are pivotal to the success of achieving agility.
If agility is not a strategic choice for the whole organization, additional issues may arise from the different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Scanning optics create different phenomena and limitations in the cladding process compared to cladding with static optics. This work concentrates on identifying and explaining the special features of laser cladding with scanning optics. Scanner optics change the energy input mechanics of the cladding process: laser energy is introduced through a relatively small laser spot which moves rapidly back and forth, distributing the energy over a relatively large area. The moving laser spot was observed to cause dynamic movement in the melt pool. Because of this different energy input mechanism, scanner optics can make the cladding process unstable if parameters are not selected carefully. Laser beam intensity and scanning frequency in particular play a significant role in process stability. The scanning frequency determines how long the laser beam dwells at a specific location, and thus the local specific energy input. It was determined that if the scanning frequency is too low, below 40 Hz, the scanned beam can start to vaporize material. The intensity in turn determines in how large a package this energy is delivered; if the intensity of the laser beam was too high, above 191 kW/cm², the laser beam started to vaporize material. When vapor formation was observed in the melt pool, the process started to resemble laser alloying because of deep penetration of the laser beam into the substrate. Scanner optics give the process more flexibility than static optics. Numerical adjustment of the scanning amplitude enables adjustment of the clad bead width. In turn, scanner power modulation (where laser power is adjusted according to where the scanner is pointing) enables modification of the clad bead cross-section geometry, since laser power can be adjusted locally, which affects how much material the laser beam melts in each sector. Power modulation is also an important factor in process stability.
When a linear scanner is used, oscillation of the scanning mirror causes a dwell time at the borders of the scanning amplitude, where the mirror changes direction. This can cause excessive energy input to this area, which in turn can cause vaporization and process instability. This instability can be avoided by decreasing the energy in this region through power modulation. Powder feeding parameters also play a significant role in process stability. It was determined that with certain powder feeding parameter combinations, powder cloud behavior became unstable because powder material vaporized in the powder cloud. This was mainly observed when the scanning frequency or the powder feeding gas flow (or both) was low, or when a steep powder feeding angle was used. When powder material vaporization occurred, it created a vapor flow that prevented powder material from reaching the melt pool, and dilution therefore increased. Powder material vaporization was also observed to produce emission at visible-light wavelengths, and the intensity of this emission was found to correlate with the amount of vaporization in the powder cloud.
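The two stability limits quoted above (scanning frequency below 40 Hz, intensity above 191 kW/cm²) can be expressed as a simple parameter check. A sketch, assuming a hypothetical top-hat beam so that intensity is just power divided by spot area; the function names and example parameters are illustrative, not from the work itself:

```python
import math

FREQ_MIN_HZ = 40.0      # below this, the dwelling beam started to vaporize material
INTENSITY_MAX = 191.0   # kW/cm^2; above this, vaporization was observed

def beam_intensity_kw_cm2(power_w, spot_radius_cm):
    """Mean intensity of an assumed top-hat laser spot, in kW/cm^2."""
    area_cm2 = math.pi * spot_radius_cm ** 2
    return power_w / 1000.0 / area_cm2

def vaporization_risk(power_w, spot_radius_cm, scan_freq_hz):
    """True if either stability limit reported in the study is violated."""
    return (scan_freq_hz < FREQ_MIN_HZ or
            beam_intensity_kw_cm2(power_w, spot_radius_cm) > INTENSITY_MAX)

print(vaporization_risk(5000, 0.10, 100))  # ~159 kW/cm^2 at 100 Hz -> False
print(vaporization_risk(5000, 0.05, 100))  # ~637 kW/cm^2 -> True
print(vaporization_risk(5000, 0.10, 25))   # scanning frequency too low -> True
```

A real beam is rarely top-hat, so peak intensity would exceed this mean estimate; the check is conservative only under that assumption.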
Abstract:
Context: BL Lacs, a subclass of blazars, are the most numerous extragalactic objects detected in the Very High Energy (VHE) gamma-ray band. Large-amplitude flux variability, sometimes on very short time scales, is a common characteristic, as is significant optical polarization. BL Lac spectra have a continuous and featureless Spectral Energy Distribution (SED) with two peaks. Among the 1442 BL Lacs in the Roma-BZB catalogue, only 51 are detected in the VHE gamma-ray band. BL Lacs are also the most numerous objects (more than 50% of 514 objects) among the sources detected above 10 GeV by FERMI-LAT. Many BL Lacs are therefore expected to be discovered in the VHE gamma-ray band. However, given the limitations of current and near-future Imaging Air Cherenkov Telescope technology, astronomers are forced to predict whether an object emits VHE gamma-rays. Some VHE gamma-ray prediction methods have already been introduced but are not yet confirmed. Cross-band correlations are the building blocks for introducing a VHE gamma-ray prediction method. Aims: We attempt to investigate cross-band correlations between flux energy density, luminosity and spectral index in the sample. We also check whether the recently discovered MAGIC J2001+435 is a typical BL Lac. Methods: We select a sample of 42 TeV BL Lacs and collect 20 of their properties within five energy bands from the literature and the Tuorla blazar monitoring program database. All data are synchronized to be comparable with each other. Finally, we choose 55 pairs of datasets for the cross-band correlation study and investigate whether there is any correlation within each pair. For MAGIC J2001+435 we analyze the publicly available SWIFT-XRT data and use still unpublished VHE gamma-ray data from the MAGIC collaboration. The results are compared to the other sources of the sample.
Results: The low-state luminosity of sources with multiple VHE gamma-ray detections is strongly correlated with the luminosities in all other bands. However, the high state does not show such strong correlations. Sources with a single VHE gamma-ray detection behave similarly to the low state of the multiply detected ones. Finally, MAGIC J2001+435 is a typical TeV BL Lac, although for some properties this source lies at the edge of the whole sample (e.g. in terms of X-ray flux). Keywords: BL Lac(s), population study, correlation analysis, multi-wavelength analysis, VHE gamma-rays, gamma-rays, X-rays, optical, radio
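A cross-band correlation study of the kind described reduces, for each pair of properties, to a correlation coefficient computed over the sample. A minimal sketch on synthetic data (all values invented for illustration: a shared luminosity scale plus band-specific scatter, with Pearson's r over log-luminosities):

```python
import math
import random

random.seed(1)
N = 42  # sample size used in the study

# Synthetic log-luminosities: a common scale plus per-band scatter (illustrative)
base = [random.gauss(45.0, 0.5) for _ in range(N)]
bands = {
    "radio":   [b + random.gauss(0, 0.2) for b in base],
    "optical": [b + random.gauss(0, 0.2) for b in base],
    "xray":    [b + random.gauss(0, 0.2) for b in base],
    "vhe":     [b + random.gauss(0, 0.2) for b in base],
}

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

names = list(bands)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = pearson(bands[names[i]], bands[names[j]])
        print(f"{names[i]:>7} vs {names[j]:<7} r = {r:.2f}")
```

Because every synthetic band shares the same underlying scale, all pairs come out strongly correlated here; in the real study, the interesting result is precisely which band pairs and flux states do (or do not) show such correlations.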
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards and outwards to accelerate internal innovations and to take unused knowledge to the external environment, respectively. The successful implementation of open innovation practices in firms such as Procter & Gamble, IBM, and Xerox suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process involving several domains of management, and its terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple factors, with a stronger influence of factors related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation.
The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and to assess the selection of modes. Measuring the impact of open innovation could take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
A simple and inexpensive shaker/Erlenmeyer flask system for large-scale cultivation of insect cells is described and compared to a commercial spinner system. On the basis of maximum cell density, average population doubling time and overproduction of recombinant protein, a better result was obtained with the simpler and less expensive bioreactor consisting of Erlenmeyer flasks and an ordinary shaker waterbath. Routinely, about 90 mg of pure poly(ADP-ribose) polymerase catalytic domain was obtained from a total of 3 × 10⁹ infected cells in three liters of culture.
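The reported yield works out to convenient per-liter and per-cell figures. A quick check using only the numbers from the abstract:

```python
# Figures quoted in the abstract
protein_mg = 90.0   # purified catalytic domain
cells = 3e9         # total infected cells
volume_l = 3.0      # culture volume, liters

print(protein_mg / volume_l)                        # 30.0 mg of protein per liter
print(round(protein_mg * 1e-3 / cells * 1e12, 1))   # 30.0 pg of protein per cell
```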
Abstract:
Microsoft System Center Configuration Manager is a systems management product for managing large groups of computers and/or mobile devices. It provides operating system deployment, software distribution, patch management, hardware and software inventory, remote control and many other features for the managed clients. This thesis focuses on researching whether this product is suitable for a large, international organization with no previous centralized solution for managing all such networked devices, and on detecting areas where the system can be altered to achieve a more optimal management product from the company's perspective. The results showed that the system is suitable for such an organization if it is properly configured and a clear and transparent line of communication exists between key IT personnel.
Abstract:
Infarct-induced heart failure is usually associated with cardiac hypertrophy and decreased β-adrenergic responsiveness. However, conflicting results have been reported concerning the density of L-type calcium current (ICa(L)), and the mechanisms underlying the decreased β-adrenergic inotropic response. We determined ICa(L) density, cytoplasmic calcium ([Ca2+]i) transients, and the effects of β-adrenergic stimulation (isoproterenol) in a model of postinfarction heart failure in rats. Left ventricular myocytes were obtained by enzymatic digestion 8-10 weeks after infarction. Electrophysiological recordings were obtained using the patch-clamp technique. [Ca2+]i transients were investigated via fura-2 fluorescence. β-Adrenergic receptor density was determined by [³H]-dihydroalprenolol binding to left ventricle homogenates. Postinfarction myocytes showed a significant 25% reduction in mean ICa(L) density (5.7 ± 0.28 vs 7.6 ± 0.32 pA/pF) and a 19% reduction in mean peak [Ca2+]i transients (0.13 ± 0.007 vs 0.16 ± 0.009) compared to sham myocytes. The isoproterenol-stimulated increase in ICa(L) was significantly smaller in postinfarction myocytes (Emax: 63.6 ± 4.3 vs 123.3 ± 0.9% in sham myocytes), but EC50 was not altered. The isoproterenol-stimulated peak amplitude of [Ca2+]i transients was also blunted in postinfarction myocytes. Adenylate cyclase activation through forskolin produced similar ICa(L) increases in both groups. β-Adrenergic receptor density was significantly reduced in homogenates from infarcted hearts (Bmax: 93.89 ± 20.22 vs 271.5 ± 31.43 fmol/mg protein in sham myocytes), while Kd values were similar. We conclude that postinfarction myocytes from large infarcts display reduced ICa(L) density and peak [Ca2+]i transients. The response to β-adrenergic stimulation was also reduced and was probably related to β-adrenergic receptor down-regulation and not to changes in adenylate cyclase activity.
Abstract:
The main objective of the study is to evaluate the impact of the Lean Innovation management philosophy on the creativity potential of a large multinational enterprise. The theory of Lean Innovation indicates that a modern company in any industry can successfully combine a waste-decreasing approach with the promotion of innovative potential through the cultivation, or at least the preservation, of creativity. The theoretical part of the work covers the main factors, pros and cons of Lean thinking and of innovation management separately, along with a general overview of new product development. While the modern international market is becoming more accessible to entrepreneurial initiatives, small enterprises and start-ups, large international corporations are better positioned to adopt the Lean Innovation approach in both operational and product development sectors owing to their extended resources and capabilities. Moreover, a multinational enterprise is a highly probable pioneer in Lean Innovation implementation. The empirical part of the thesis examines the case of a large European enterprise, operating in many markets around the globe, that is currently undergoing innovation management adjustments and implementations in product development, having already engaged in operational process optimization through Lean thinking. The goal of the work is to understand what difficulties and consequences a large international firm faces when dealing with Lean Innovation to improve its own performance, and whether these can be generalized into a broader approach.
Abstract:
Patients with diffuse large B-cell lymphoma treated in a University Hospital were studied from 1990 to 2001. Two treatment regimens were used: ProMACE-CytaBOM and then, from November 1996 on, the CHOP regimen. Complete remission (CR), disease-free survival (DFS), and overall survival (OS) rates were determined. Primary refractory patients and relapsed patients were also assessed. A total of 111 patients under 60 years of age were assessed and ranked according to the age-adjusted International Prognostic Index. Twenty (18%) of them were classified as low risk, 40 (36%) as intermediate risk, 33 (29.7%) as high intermediate risk, and 18 (16.3%) as high risk. Over a five-year period, OS and DFS rates were 71 and 59%, respectively, for all patients. For the same period, OS and DFS rates were 72.8 and 61.3%, respectively, for the 77 patients treated with CHOP chemotherapy, and 71.3 and 60% for the patients treated with the ProMACE-CytaBOM protocol. There was no significant difference in OS or DFS between the two groups. Eleven of 50 refractory and relapsed patients were consolidated with high-dose chemotherapy. Three received allogeneic and eight received autologous bone marrow transplantation. For the latter, CR was 62.5% and mean OS was 41.1 months. The clinical behavior, CR, DFS, and OS of the present patients were similar to those reported in the literature. We conclude that both the CHOP and ProMACE-CytaBOM protocols can be used to treat diffuse large B-cell lymphoma patients, although the CHOP protocol is preferable because of its lower cost and lower toxicity.