942 results for Free Boundary Value Problem
Abstract:
BACKGROUND: Biliary tract cancer is an uncommon cancer with a poor outcome. We assembled data from the National Cancer Research Institute (UK) ABC-02 study and 10 international studies to determine prognostic outcome characteristics for patients with advanced disease. METHODS: Multivariable analyses of the final dataset from the ABC-02 study were carried out. All variables were simultaneously included in a Cox proportional hazards model, and backward elimination was used to produce the final model (using a significance level of 10%), in which the selected variables were independently associated with outcome. This score was validated externally by receiver operating characteristic (ROC) analysis using the independent international dataset. RESULTS: A total of 410 patients were included from the ABC-02 study and 753 from the international dataset. An overall survival (OS) and progression-free survival (PFS) Cox model was derived from the ABC-02 study. White blood cells, haemoglobin, disease status, bilirubin, neutrophils, gender, and performance status were considered prognostic for survival (all with P < 0.10). Patients with metastatic disease [hazard ratio (HR) 1.56, 95% confidence interval (CI) 1.20-2.02] and Eastern Cooperative Oncology Group performance status (ECOG PS) 2 [HR 2.24, 95% CI 1.53-3.28] had worse survival. In a dataset restricted to patients who received cisplatin and gemcitabine with ECOG PS 0 and 1, only haemoglobin, disease status, bilirubin, and neutrophils were associated with PFS and OS. ROC analysis suggested the models generated from the ABC-02 study had limited prognostic value [6-month PFS: area under the curve (AUC) 62% (95% CI 57-68); 1-year OS: AUC 64% (95% CI 58-69)]. CONCLUSION: These data provide a set of prognostic criteria for outcome in advanced biliary tract cancer derived from the ABC-02 study and validated in an international dataset. Although these findings establish a benchmark for the prognostic evaluation of patients with ABC and confirm the value of long-held clinical observations, the model's ability to correctly predict prognosis is limited and needs to be improved through the identification of additional clinical and molecular markers.
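As an illustration of the statistical machinery described above, here is a minimal sketch of a Cox proportional hazards fit with backward elimination at the 10% significance level. It assumes Python with the lifelines package and a hypothetical DataFrame whose columns are the candidate covariates plus follow-up time and event indicators; none of the names below come from the ABC-02 dataset.

```python
# Sketch: Cox model with backward elimination (10% level), as in the abstract.
# `df` and its column names are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

def backward_eliminate(df: pd.DataFrame, duration_col: str, event_col: str,
                       alpha: float = 0.10) -> CoxPHFitter:
    """Repeatedly drop the least significant covariate until all p < alpha."""
    covariates = [c for c in df.columns if c not in (duration_col, event_col)]
    while covariates:
        cph = CoxPHFitter()
        cph.fit(df[covariates + [duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        pvals = cph.summary["p"]
        worst = pvals.idxmax()        # covariate with the largest p-value
        if pvals[worst] < alpha:
            return cph                # all remaining covariates pass the 10% level
        covariates.remove(worst)
    raise ValueError("no covariate passed the significance level")
```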
Abstract:
BACKGROUND: Most peripheral T-cell lymphoma (PTCL) patients have a poor outcome, and the identification of prognostic factors at diagnosis is needed. PATIENTS AND METHODS: The prognostic impact of total metabolic tumor volume (TMTV0), measured on baseline [(18)F]2-fluoro-2-deoxy-d-glucose positron emission tomography/computed tomography, was evaluated in a retrospective study including 108 PTCL patients (27 PTCL not otherwise specified, 43 angioimmunoblastic T-cell lymphomas and 38 anaplastic large-cell lymphomas). All received anthracycline-based chemotherapy. TMTV0 was computed with the 41% maximum standardized uptake value threshold method, and an optimal cut-off point for binary outcomes was determined and compared with other prognostic factors. RESULTS: With a median follow-up of 23 months, 2-year progression-free survival (PFS) was 49% and 2-year overall survival (OS) was 67%. High TMTV0 was significantly associated with a worse prognosis. At 2 years, PFS was 26% in patients with a high TMTV0 (>230 cm(3), n = 53) versus 71% for those with a low TMTV0 [P < 0.0001, hazard ratio (HR) = 4], whereas OS was 50% versus 80%, respectively (P = 0.0005, HR = 3.1). In multivariate analysis, TMTV0 was the only significant independent parameter for both PFS and OS. TMTV0 combined with PIT discriminated patients with an adverse outcome (TMTV0 >230 cm(3) and PIT >1, n = 33) from those with a good prognosis (TMTV0 ≤230 cm(3) and PIT ≤1, n = 40) even better than TMTV0 alone: 19% versus 73% 2-year PFS (P < 0.0001) and 43% versus 81% 2-year OS, respectively (P = 0.0002). Thirty-one patients (other TMTV0-PIT combinations) had an intermediate outcome: 50% 2-year PFS and 68% 2-year OS. CONCLUSION: TMTV0 appears to be an independent predictor of PTCL outcome. Combined with PIT, it could identify different risk categories at diagnosis and warrants further validation as a prognostic marker.
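The 41% SUVmax thresholding behind TMTV0 is simple to state in code. A minimal sketch, assuming NumPy, a 3-D array of SUV values for one delineated lesion, and a known voxel volume (all names are illustrative, not the study's software):

```python
# Sketch: lesion metabolic tumor volume by the 41% SUVmax threshold method.
import numpy as np

def lesion_mtv(suv: np.ndarray, voxel_volume_cm3: float) -> float:
    threshold = 0.41 * suv.max()               # 41% of the lesion's maximum SUV
    n_voxels = int(np.count_nonzero(suv >= threshold))
    return n_voxels * voxel_volume_cm3         # volume in cm^3

# TMTV0 would be the sum of lesion_mtv over all lesions; per the abstract,
# patients above the 230 cm^3 cut-off fall into the high-risk group.
```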
Abstract:
Perceived patient value is often not aligned with the rising expenses for health care services. In other words, costs are often perceived as rising faster than the actual value delivered to patients. This fact is causing major concerns to governments, health plans, and individuals. Attempts to solve the problem have typically come from the operational-effectiveness side: increasing patient volume, minimizing costs, rationing, or closing hospitals, usually resulting in a zero-sum game. Only a few approaches come from the strategic-positioning side, and "competition" among hospitals is still perceived as a danger rather than as a chance to create a positive-sum game and stimulate patient value. In their 2006 book "Redefining Health Care", the renowned Harvard strategy professor Michael E. Porter and hospital management expert Professor Elizabeth Olmsted Teisberg approach the challenge from the positive-sum perspective: they propose to form Integrated Practice Units (IPUs) and to manage hospitals in a modern, patient-value-oriented way. They argue that creating value-based competition on results should do for the health care sector what transparency and competition have done for other industries with outdated management models (such as the once-inert telecommunications industry), turning them into highly competitive, customer-value-creating businesses. The objective of this paper is to elaborate Care Delivery Value Chains for Integrated Practice Units in ophthalmic clinics and to gather initial feedback from Swiss hospital managers, ophthalmologists, and patients on whether such an approach could be a realistic way to improve health care management. First, Porter's definition of competitiveness (the distinction between operational effectiveness and strategic positioning) is explained. Then, the Care Delivery Value Chain is introduced as a key element for understanding value-based management, followed by three practice examples for ophthalmic clinics. Finally, recommendations are given on how the Care Delivery Value Chain can be managed efficiently and how the obstacles to becoming a patient-oriented organization can be overcome. The conclusion is that increased transparency and value-based competition on results have the potential to change the mindset of hospital managers, which will align patient value with rising health care expenses. Early adopters of this management approach will gain a competitive advantage. [Author, p. 6]
Abstract:
The aim of this thesis is to examine questions related to the remuneration schemes of limited companies through principal-agent relationships and the principles of company law. The thesis examines how free shares (shares granted without payment) can be used in the remuneration schemes of a public limited company, and what challenges and opportunities their use involves. The research method is legal-dogmatic, and the subject is naturally approached through law and economics, which combines the business and legal perspectives. The primary material consists of the legislation in force together with its preparatory works, and the perspective is deepened by examining previously applicable legislation. As secondary material, the thesis draws both on legal literature by various experts and on literature written by remuneration specialists from Finland and other Western countries. The theoretical work has been deepened with an expert interview and with information available from companies' investor disclosures. The most significant factor reducing the efficiency of the relationship between a company and its management is the monitoring problem of the principal-agent relationship. This problem manifests itself negatively in conflicts of interest between the company and its management. The monitoring problem can cause additional transaction costs, which shows as reduced efficiency of the company's operations. As solutions to the principal-agent problem, legislation and various means of supervision are inefficient and contrary to current company-law thinking. The monitoring problem is most effectively solved by steering management through various incentives. Share-based remuneration is the most popular means of aligning the interests of management and the company. At its best, share-based remuneration is engaging and motivating from the perspective of the various parties, but its use also involves risks. One of the central principles of company law is the equal treatment of shareholders, which may be violated when different remuneration schemes are used. The distribution of assets and the financing of the schemes likewise involve risks that may jeopardize the success of a scheme. Overly generous remuneration schemes, in turn, may cause dissatisfaction among the company's various stakeholder groups, which again reduces a scheme's effectiveness. The company's management is responsible for implementing the company's strategy and, in accordance with Corporate Governance practice, sets the forms of management remuneration. Management is, however, accountable for its decisions to the principals holding the company's residual rights, that is, the shareholders. Although management is ultimately responsible for the company's operations, it must take into account the purpose of the company's operations and thereby the interest of the individual shareholder. Even if a limited company pursues a course of action chosen by the majority shareholder, the company-law principle of equal treatment emphasizes precisely the position of the minority shareholder. Management's fiduciary duties can thus be seen as emphasized in its relationship to minority shareholders. This must also be taken into account in remuneration schemes. In designing and implementing management remuneration schemes, central attention must be paid to achieving their objectives, that is, increasing the value of the individual shareholder's holding over the longer term, while upholding the fundamental principles of the limited company.
Abstract:
Analyzing the state of the art in a given field in order to tackle a new problem is always a mandatory task. The literature provides surveys based on summaries of previous studies, which are often based on theoretical descriptions of the methods. An engineer, however, requires some evidence from experimental evaluations in order to make the appropriate decision when selecting a technique for a problem. This is what we have done in this paper: experimentally analyzed a set of representative state-of-the-art techniques for the problem we are dealing with, namely the road passenger transportation problem. This is an optimization problem in which drivers must be assigned to transport services, fulfilling some constraints and minimizing some cost function. The experimental results have provided us with good knowledge of the properties of several methods, such as modeling expressiveness, anytime behavior, computational time, memory requirements, parameters, and freely downloadable tools. Based on our experience, we are able to choose a technique to solve our problem. We hope that this analysis is also helpful for other engineers facing a similar problem.
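To make the problem class concrete (this is a generic assignment model, not any of the specific techniques evaluated in the paper), a minimal driver-to-service assignment can be written as a binary program, here with the PuLP library and hypothetical drivers, services, and costs:

```python
# Sketch: assign drivers to transport services at minimum total cost.
# Every service must be covered; each driver takes at most one service.
import pulp

drivers = ["d1", "d2", "d3"]
services = ["s1", "s2", "s3"]
cost = {("d1", "s1"): 4, ("d1", "s2"): 7, ("d1", "s3"): 3,
        ("d2", "s1"): 6, ("d2", "s2"): 2, ("d2", "s3"): 5,
        ("d3", "s1"): 8, ("d3", "s2"): 4, ("d3", "s3"): 6}

prob = pulp.LpProblem("road_passenger_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", cost, cat="Binary")
prob += pulp.lpSum(cost[k] * x[k] for k in cost)          # objective: total cost
for s in services:
    prob += pulp.lpSum(x[(d, s)] for d in drivers) == 1   # cover every service
for d in drivers:
    prob += pulp.lpSum(x[(d, s)] for s in services) <= 1  # one service per driver
prob.solve()
print([k for k in cost if x[k].value() == 1])             # chosen assignments
```

Real instances add driver-specific constraints (shift lengths, breaks, qualifications), which is where the modeling expressiveness compared in the paper starts to matter.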
Abstract:
The objective of this thesis was to study the removal of gases from paper mill circulation waters experimentally and to provide data for CFD modeling. Flow and bubble size measurements were carried out in a laboratory-scale open gas separation channel. The Particle Image Velocimetry (PIV) technique was used to measure the gas and liquid flow fields, while bubble size measurements were conducted using a digital imaging technique with backlight illumination. Samples of paper machine waters as well as a model solution were used for the experiments. The PIV results show that the gas bubbles near the feed position tend to escape from the circulation channel at a faster rate than bubbles further away from the feed position. This was due to an increased rate of bubble coalescence resulting from the relatively larger bubbles near the feed position. Moreover, a close similarity between the measured slip velocities of the paper mill waters and literature values was obtained. It was found that, due to the dilution of the paper mill waters, the observed average bubble size was considerably larger than the average bubble sizes in real industrial pulp suspensions and circulation waters. Among the studied solutions, the model solution had the highest average drag coefficient due to its relatively high viscosity. The results were compared to a 2D steady-state CFD simulation model. A standard Euler-Euler k-ε turbulence model was used in the simulations. The channel free surface was modeled as a degassing boundary. Of the drag models used in the simulations, the Grace drag model gave velocity fields closest to the experimental values. In general, the results obtained from the experiments and the CFD simulations are in good qualitative agreement.
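The link between a measured slip velocity and an average drag coefficient follows from the terminal-velocity force balance on a bubble, Cd = 4 g d (ρl - ρg) / (3 ρl u²). A minimal sketch with illustrative numbers (not measurements from the thesis):

```python
# Sketch: average drag coefficient from a measured bubble slip velocity,
# using the terminal-velocity balance between buoyancy and drag.
g = 9.81          # gravitational acceleration, m/s^2
d = 0.5e-3        # bubble diameter, m (illustrative)
rho_l = 998.0     # liquid density, kg/m^3
rho_g = 1.2       # gas density, kg/m^3
u_slip = 0.05     # measured slip velocity, m/s (illustrative)

cd = 4 * g * d * (rho_l - rho_g) / (3 * rho_l * u_slip ** 2)
print(f"average drag coefficient: {cd:.2f}")
```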
Abstract:
The Mg-vacancy binding free enthalpy of Al-Cr solid solution alloys with Mg additions was determined from electrical resistivity measurements. The value obtained is lower than that for dilute Al-Mg alloys with almost the same Mg content, which may be attributed to the diffusion of Mg.
Abstract:
In this article I present a possible solution to the classic problem of the apparent incompatibility between Mill's Greatest Happiness Principle and his Principle of Liberty. I argue that in the other-regarding sphere the judgments of experience and knowledge accumulated through history have moral and legal force, whilst in the self-regarding sphere the judgments of experienced people have only prudential value. The reason for this is the idea that each of us is a better judge than anyone else of what causes us pain and of which kinds of pleasure we prefer (the so-called epistemological argument). Considering that the Greatest Happiness Principle is nothing but the aggregate of each person's happiness, given the epistemological claim we conclude that, by leaving people free even to cause harm to themselves, we would still be maximizing happiness, so both principles (the Greatest Happiness Principle and the Principle of Liberty) can be compatible.
Abstract:
Vibration is a very important problem in long-tool turning and milling. The current vibration-minimizing solutions provided by different tool suppliers are very expensive. This Master's Thesis presents a new type of vibration-free machining tool produced by Konepaja ASTEX Gear Oy, with lower production costs than competitors' products. Vibration problems in machining and their current solutions are analyzed in this work. The new vibration damping invention is presented and described. Moreover, the production, laboratory experimental modal analysis, and practical testing of the new vibration-free prototypes are described and analyzed in this thesis. Based on the test results, the new invention proved successful and was approved for further study and development.
Abstract:
The purpose of this thesis is to study the factors that affect a company's ability to identify and analyze the value of the digitalization of services during the early stages of the service development process, and to evaluate them from the perspective of a case company. The research problem was defined as: "How does the digitalization of services affect the delivery of the services of the future?" The research method was a qualitative case study that examined both the company's and the customers' sets of values. The study included a literature review and a development study. The empirical research part consisted of analyzing three existing services, and specifying a new digital service concept and its feasibility analysis as part of a business requirements phase. To understand the sets of values, 10 stakeholder interviews were conducted and earlier customer surveys were utilized; in addition, a number of meetings were held with the case company representatives to develop the service concept and evaluate the findings. The factors discovered to bear directly on the case company's ability to identify and create customer value during the early stages of the service development process were related to the themes presented in the literature review. In order to specify the value achieved from digitalization, the following strategic background areas were deepened during the study: innovation, customer understanding, and business services. Based on the findings, the study aims to enhance the case company's ability to identify and evaluate the impact of digitalization on delivering the services of the future. Recognizing the value of a digital service before the beginning of the development project is important to the businesses of both customer and provider. By exploring the various levels of digitalization, one can get an overall picture of the value gained from utilizing digital opportunities. From the development perspective, the process of reviewing and discovering the most promising opportunities and solutions is the key step in delivering superior services. Ultimately, a company should understand the value outcome determination of the individual services as well as their digital counterparts.
Abstract:
Sequestration of carbon dioxide in mineral rocks, also known as CO2 Capture and Mineralization (CCM), is considered to have a huge potential in stabilizing anthropogenic CO2 emissions. One of the CCM routes is the ex situ indirect gas/solid carbonation of reactive materials, such as Mg(OH)2, produced from abundantly available Mg-silicate rocks. The gas/solid carbonation method is intensively researched at Åbo Akademi University (ÅAU), Finland, because it is energetically attractive and utilizes the exothermic chemistry of Mg(OH)2 carbonation. In this thesis, a method for producing Mg(OH)2 from Mg-silicate rocks for CCM was investigated, and the process efficiency, energy, and environmental impact were assessed. The Mg(OH)2 process studied here was first proposed in 2008 in a Master's Thesis by the author. At that time the process was applied to only one Mg-silicate rock (Finnish serpentinite from the Hitura nickel mine site of Finn Nickel), and the optimum process conversions, energy, and environmental performance were not known. Producing Mg(OH)2 from Mg-silicate rocks involves a two-stage process of Mg extraction and Mg(OH)2 precipitation. The first stage extracts Mg and other cations by reacting pulverized serpentinite or olivine rocks with ammonium sulfate (AS) salt at 400-550 °C (preferably < 450 °C). In the second stage, ammonia solution reacts with the cations (extracted in the first stage, after they are leached in water) to form mainly FeOOH, high-purity Mg(OH)2, and aqueous (dissolved) AS. The Mg(OH)2 process described here is closed-loop in nature: gaseous ammonia and water vapour are produced in the extraction stage, recovered, and used as reagent for the precipitation stage, and the AS reagent is recovered after the precipitation stage. The Mg extraction stage, being the conversion-determining and most energy-intensive step of the entire CCM process chain, received prominent attention in this study. The extraction behavior and reactivity of different rock types (serpentinite and olivine rocks) from different locations worldwide (Australia, Finland, Lithuania, Norway and Portugal) were tested. A parametric evaluation was also carried out to determine the optimal reaction temperature, time, and chemical reagent (AS). The effects of reactor types and configuration, mixing, and scale-up possibilities were also studied. The Mg(OH)2 produced can be used to convert CO2 into thermodynamically stable and environmentally benign magnesium carbonate. Therefore, the process energy and life-cycle environmental performance of the ÅAU CCM technique, which first produces Mg(OH)2 and then carbonates it in a pressurized fluidized bed (FB), were assessed. The life-cycle energy and environmental assessment approach applied in this thesis is motivated by the fact that the CCM technology should in itself offer a solution to what is both an energy and an environmental problem. Results obtained in this study show that different Mg-silicate rocks react differently, olivine rocks being far less reactive than serpentinite rocks. In summary, the reactivity of Mg-silicate rocks is a function of both the chemical and physical properties of the rocks. Reaction temperature and time remain important parameters to consider in process design and operation. The heat transfer properties of the reactor determine the temperature at which maximum Mg extraction is obtained.
Also, an increase in reaction temperature leads to an increase in the extent of extraction, reaching a maximum yield at different temperatures depending on the reaction time. The process energy requirement for producing Mg(OH)2 from a hypothetical case of an iron-free serpentine rock is 3.62 GJ/t-CO2. This value can increase by 16-68% depending on the type of iron compound (FeO, Fe2O3 or Fe3O4) in the mineral. This suggests that the benefit from the potential use of FeOOH as an iron ore feedstock in iron- and steelmaking should be determined by considering the energy, cost, and emissions associated with the FeOOH by-product. AS recovery through crystallization is the second most energy-intensive unit operation after the extraction reaction. However, choosing mechanical vapor recompression (MVR) over the "simple evaporation" crystallization method offers a potential energy saving of 15.2 GJ/t-CO2 (84% savings). Integrating the Mg(OH)2 production method and the gas/solid carbonation process could provide up to a 25% energy offset to the CCM process energy requirements. Life cycle inventory assessment (LCIA) results show that for every ton of CO2 mineralized, the ÅAU CCM process avoids 430-480 kg CO2. The Mg(OH)2 process studied in this thesis has many promising features. Even at the current high energy and environmental burden, producing Mg(OH)2 from Mg-silicates can play a significant role in advancing CCM processes. However, dedicated future research and development (R&D) has the potential to significantly improve the Mg(OH)2 process performance.
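The reported energy figures can be combined in a back-of-the-envelope way. The sketch below uses only numbers quoted above and reads the 84% figure as the fraction of crystallization energy saved by MVR relative to simple evaporation; the combination itself is illustrative, not the thesis's own accounting:

```python
# Sketch: combining the energy figures quoted in the abstract.
extraction = 3.62                    # GJ/t-CO2, iron-free serpentine case
iron_penalty = (0.16, 0.68)          # +16% to +68% depending on the Fe compound
low, high = (round(extraction * (1 + p), 2) for p in iron_penalty)
print(f"extraction with iron compounds: {low}-{high} GJ/t-CO2")

mvr_saving = 15.2                    # GJ/t-CO2 saved by MVR (84% of evaporation)
evaporation = mvr_saving / 0.84      # implied simple-evaporation demand, ~18.1
mvr = evaporation - mvr_saving       # implied MVR demand, ~2.9
print(f"crystallization: {evaporation:.1f} -> {mvr:.1f} GJ/t-CO2 with MVR")
```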
Abstract:
The aim of this study was to investigate the diagnosis delay and its impact on the stage of disease. The study also evaluated nuclear DNA content, the immunohistochemical expression of Ki-67 and bcl-2, and the correlation of these biological features with the clinicopathological features and patient outcome. 200 Libyan women diagnosed during 2008–2009 were interviewed about the period from the first symptoms to the final histological diagnosis of breast cancer. Retrospective preclinical and clinical data were also collected from medical records on a form (questionnaire) in association with the interview. Tumor material of the patients was collected and the nuclear DNA content analysed using DNA image cytometry. The expression of Ki-67 and bcl-2 was assessed using immunohistochemistry (IHC). The studies described in this thesis show that the median diagnosis time for women with breast cancer was 7.5 months, and 56% of patients were diagnosed within a period longer than 6 months. Inappropriate reassurance that the lump was benign was an important reason for the prolongation of the diagnosis time. Diagnosis delay was also associated with initial breast symptom(s) that did not include a lump, old age, illiteracy, and a history of benign fibrocystic disease. The patients with diagnosis delay had larger tumour size (p<0.0001), positive lymph nodes (p<0.0001), and a high incidence of late clinical stages (p<0.0001). Biologically, 82.7% of tumors were aneuploid and 17.3% were diploid. The median SPF of tumors was 11%, while the median positivity of Ki-67 was 27.5%. High Ki-67 expression was found in 76% of patients, and high SPF values in 56% of patients. Positive bcl-2 expression was found in 62.4% of tumors, and 72.2% of the bcl-2-positive samples were ER-positive. Patients whose tumors showed DNA aneuploidy, high proliferative activity, and negative bcl-2 expression had a high grade of malignancy and short survival. The SPF value is a useful cell proliferation marker in assessing prognosis, and the decision cut point of 11% for SPF in the Libyan material was clearly significant (p<0.0001). Bcl-2 is a powerful prognosticator and an independent predictor of breast cancer outcome in the Libyan material (p<0.0001). Libyan breast cancer was investigated in these studies from two different aspects: health services and biology. The results show that diagnosis delay is a very serious problem in Libya and is associated with complex interactions between many factors, leading to advanced stages and potentially high mortality. Cytometric DNA variables, proliferative markers (Ki-67 and SPF), and bcl-2 oncoprotein negativity reflect the aggressive behavior of Libyan breast cancer and could be used together with traditional factors to predict the outcome of individual patients and to select appropriate therapy.
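The survival comparison behind the 11% SPF cut-off is a standard two-group analysis. A minimal sketch, assuming the lifelines package and a hypothetical DataFrame with follow-up, event, and SPF columns (the names are placeholders, not the study's data):

```python
# Sketch: log-rank test of survival above vs. below the SPF cut-off (11%).
import pandas as pd
from lifelines.statistics import logrank_test

def spf_cutoff_logrank(df: pd.DataFrame, cutoff: float = 11.0) -> float:
    high = df[df["spf"] > cutoff]
    low = df[df["spf"] <= cutoff]
    result = logrank_test(high["followup_months"], low["followup_months"],
                          event_observed_A=high["died"],
                          event_observed_B=low["died"])
    return result.p_value    # reported as p < 0.0001 in the Libyan material
```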
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables. It is a quite recent addition to the available du/dt reduction methods. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. The ADUDT uses narrow voltage pulses, with durations on the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, with the possibility of no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable. Lower du/dt values result in a reduction in the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for accomplishing this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself. The filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead-time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the timing deviation compensation design. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, on the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for the phase leg blanking times, giving flexibility to the pulse pattern structure and the dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage), and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned in the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations. The usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously. The effect of load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with the dead time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite-core and air-core inductors. Other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a 43 A load current motor drive system and was able to bring the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111-126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
The filter mass with the active du/dt method was 17% (2.4 kg, air-core inductors) of the 14 kg mass of the passive du/dt method filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with active du/dt filtering applying the new control principle.
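The doubled-voltage mechanism mentioned above is a property of an undamped LC filter hit by a hard voltage step, and shaping the input edge over one filter resonance period removes the overshoot, which is in essence what the active du/dt control pulses accomplish. A minimal simulation sketch with illustrative component values (not the thesis's filter design):

```python
# Sketch: lossless LC low-pass filter response to a hard step vs. a shaped edge.
import numpy as np
from scipy.signal import lti, lsim

L, C = 10e-6, 1e-6                         # illustrative filter values
sys = lti([1.0], [L * C, 0.0, 1.0])        # Vout/Vin = 1 / (L C s^2 + 1)
t = np.linspace(0.0, 200e-6, 5000)

_, y_step, _ = lsim(sys, np.ones_like(t), t)            # hard-switched edge
T_res = 2 * np.pi * np.sqrt(L * C)                      # filter resonance period
edge = np.clip(t / T_res, 0.0, 1.0)                     # edge slowed to one period
_, y_ramp, _ = lsim(sys, edge, t)

print(f"hard step peak:   {y_step.max():.2f} x input")  # ~2.00 (overshoot)
print(f"shaped edge peak: {y_ramp.max():.2f} x input")  # ~1.00 (no overshoot)
```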
Abstract:
This study discusses the procedures of value co-creation in the gaming industry. The purpose of this study was to identify the procedures in the current video gaming industry that answer the main research problem, how value is co-created in the video gaming industry, followed by three sub-questions: (i) What is value co-creation in the gaming industry? (ii) Who participates in value co-creation in the gaming industry? (iii) What are the procedures involved in value co-creation in the gaming industry? The theoretical background of the study consists of literature relating to marketing theory, i.e., the notion of value, the conventional understanding of value creation, the value chain, the co-creation approach, and the co-production approach. The research adopted a qualitative research approach. As a relationship platform, the researcher used a Web 2.0 tool interface. Data were collected from social networks, and the netnography method was applied to analyze them. The findings show that customers and companies both co-create an optimal level of value when they interact with each other, and value is co-created among the customers as well. However, it was mostly the C2C interactions, discussions, and dialogue threads that emerged around the main discussion that facilitated value co-creation. Accordingly, companies need to exploit, and further motivate, develop, and support, the interactions between customers participating in value creation. A hierarchy of value co-creation processes is the result derived from the identified challenges of the value co-creation approach and the analysis of the discussion forum data. Overall, three general sets and seven topics were found that explore the phenomenon of customer-to-customer (C2C) and business-to-customer (B2C) interaction and debate for value co-creation through user-generated content. These topics describe how gamers contribute and interact in co-creating value along with companies. A methodical search of the current research literature identified numerous evolving streams of value in this study: the general management perspective, new product development and innovation, virtual customer environments, service science, and service-dominant logic. Overall, the topics deliver various practical and conceptual implications for engaging and managing gamers in social networks to augment the customers' value co-creation process.
Abstract:
In recent years, the phenomenon called crowdsourcing has been acknowledged as an innovative form of value creation that must be taken seriously. Crowdsourcing can be defined as an act of outsourcing tasks originally performed inside an organization, or assigned externally in the form of a business relationship, to an undefinably large, heterogeneous mass of potential actors. This thesis constructs a framework for the successful implementation of crowdsourcing initiatives. Firms that rely entirely on their own research and ideas cannot compete with the innovative capacity that crowd-powered firms have. Nowadays, crowdsourcing has become one of the key capabilities of businesses owing to the innovative capacity it adds to the firm's existing internal resources. By utilizing crowdsourcing, the business gains access to an enormous pool of competence and knowledge. However, various risks remain, such as uncertainty about the crowd structure and loss of internal know-how. The Crowdsourcing Success Framework introduces a step-by-step model for implementing crowdsourcing into the everyday operations of the business. It starts from the decision to utilize crowdsourcing and continues through planning, organizing, and execution. Finally, this thesis presents the success factors of a crowdsourcing initiative.