Abstract:
Planar, large-area, position-sensitive silicon detectors are widely used in high energy physics research and in medical computed tomography (CT). This thesis describes the author's research work on the development of such detector components. The key motivation and objective of the research has been the development of novel position-sensitive detectors that improve the performance of the instruments they are intended for. Silicon strip detectors are the key components of barrel-shaped tracking instruments, which are typically the innermost structures of high energy physics experimental stations. Particle colliders such as the former LEP collider or the present LHC produce particle collisions, and the silicon strip detector-based trackers locate the trajectories of the particles emanating from these collisions. Medical CT has become a regular part of everyday medical care in all developed countries. CT scanning enables x-ray imaging of all parts of the human body with outstanding structural resolution and contrast. Brain, chest and abdomen slice images with a resolution of 0.5 mm are possible, and the latest CT machines are able to image the whole human heart between heartbeats. The two application areas are presented briefly, and the radiation detection properties of planar silicon detectors are discussed. Fabrication methods and preamplifier electronics of the planar detectors are presented. The designs of the developed large-area silicon detectors are presented, and measurement results of the key operating parameters are discussed. The static and dynamic performance of the developed silicon strip detectors is shown to be very satisfactory for experimental physics applications. Results for the developed, novel CT detector chips are very promising for further development, and all key performance goals are met.
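Strip detectors of the kind described above locate particles by reading out the charge induced on neighbouring strips; a standard way to illustrate position reconstruction is a charge-weighted centroid ("centre of gravity") over the strip signals. The sketch below is illustrative only and is not taken from the thesis: the 50 um pitch and the signal values are assumptions.

import numpy as np

# Minimal sketch: charge-weighted centroid position reconstruction over a
# cluster of silicon strips. The 50 um pitch and the example signal values
# are assumptions for illustration, not parameters from the thesis.
PITCH_UM = 50.0  # assumed strip pitch in micrometres

def hit_position(strip_indices, strip_charges):
    """Interpolated hit position along the strip axis, in micrometres."""
    idx = np.asarray(strip_indices, dtype=float)
    q = np.asarray(strip_charges, dtype=float)
    centroid_strip = np.sum(idx * q) / np.sum(q)  # charge-weighted mean strip
    return centroid_strip * PITCH_UM

# Charge shared between three neighbouring strips:
print(hit_position([102, 103, 104], [0.2, 1.0, 0.4]))  # ~5156 um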
Abstract:
Social media is a multidimensional marketing and communications channel which can support and enhance a business's reputation, sales and even longevity. Social media as a business tool encourages interaction between customers and companies, which gives a company opportunities to better understand its customers, to target them more effectively, and to collaborate and create dialogues with them in ways that are not possible through traditional media channels. The aim of a social media strategy is to increase brand awareness, image, loyalty and recognition. The peer networks that social media creates allow a company to disseminate information through loyal customers to new and prospective customers and ultimately increase reach. The purpose of the study is to understand the marketer's perspective on social media marketing and how it is currently utilized in marketing and communications activities in Finland. Three companies were interviewed, covering fourteen different implementations of social media marketing campaigns. These were then analysed to ascertain the utilization methods and the experience gained on recent campaigns in the Finnish market. The utilization of social media marketing was analysed using the methods of thematic analysis and inductive and abductive reasoning. Elements and themes were drawn out of the separate interviews to create a framework with which to explore, evaluate and match theories that define social media usage by companies. It became clear from all of the interviews that social media as a tool is most effective when it captures the viewer's interest through rich and entertaining content. This directed the theoretical research towards Engagement Theory and Content Marketing, which emphasize the importance of communities, collaboration, interaction and peer-sharing as the key drivers of a social media marketing campaign.
Abstract:
The recent digitization, the fragmentation of the media landscape and consumers' changing media behavior are all changes that have had drastic effects on creating marketing communications. In order to create effective marketing communications, large advertisers now co-operate with a variety of marketing communications companies. The purpose of the study is to understand how advertisers perceive these different companies and, more importantly, how advertisers expect their roles to change in the future as the media landscape continues to evolve. The changing roles of advertising agencies and media agencies are examined in particular, as they are at the moment the most relevant partners of the advertisers. However, the research is conducted from a network perspective rather than focusing on single actors of the marketing communications industry network. The research was conducted using a qualitative theme interview method. The empirical data was gathered by interviewing representatives of nine of the 50 largest Finnish advertisers measured by media spending. Thus, the research was conducted solely from the large B2C advertisers' perspective, while the views of the other relevant actors of the network were left unexplored. The interviewees were chosen with a focus on a variety of points of view. The analytical framework used to analyze the gathered data was built on the IMP Group's industrial network model, which consists of actors, their resources and their activities. As technology-driven media landscape fragmentation and consumers' changing media behavior continue to increase the complexity of creating marketing communications, advertisers are going to need to rely on a growing number of partnerships, as they see that the current actors of the network will not be able to widen their expertise to answer these new needs. The advertisers expect to form new partnerships with actors that are more specialized and able to react and produce activities more quickly than at the moment. Thus, new, smaller and more agile actors with looser structures are going to appear to fill these new needs, and the need for co-operation between the actors is going to become more important. These changes pose the biggest threat to traditional advertising agencies, as they were seen as the least able to cope with the ongoing change. Media agencies are in a more favorable position for remaining relevant to the advertisers, as they will be able to justify their activities and the value they provide by leveraging their data handling abilities. In general, the advertisers expect to work with a limited number of close actors and, in addition, to have a network of smaller actors that are used on a more ad hoc basis.
Abstract:
Currently, a high penetration level of Distributed Generation (DG) has been observed in the Danish distribution systems, and even more DG units are foreseen in the upcoming years. How to utilize them for maintaining the security of the power supply in emergency situations has been of great interest for study. This master's project develops a control architecture for studying distribution systems with large-scale integration of solar power. As part of the EcoGrid EU Smart Grid project, it focuses on the modelling and simulation of a representative Danish LV network located on the island of Bornholm. Regarding the control architecture, two types of reactive power control techniques are implemented and compared. In addition, a network voltage control based on a tap-changing transformer is tested. After applying a genetic algorithm to five typical Danish domestic load profiles, the optimized results show lower power losses and voltage deviations with Q(U) control, especially at large consumption levels. Finally, a communication and information exchange system is developed with the objective of regulating the reactive power, and thereby the network voltage, remotely and in real time. Validation tests of the simulated parameters are performed as well.
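As a rough illustration of the Q(U) control mentioned above, the reactive-power setpoint of a PV inverter can be expressed as a piecewise-linear droop on the measured voltage. The sketch below is schematic: the breakpoints and reactive-power limits are assumed values, not the ones optimized in the project.

import numpy as np

# Schematic Q(U) droop characteristic for a PV inverter: the reactive-power
# setpoint is a piecewise-linear function of the measured voltage (per unit).
# Breakpoints and Q limits are illustrative assumptions.
V_POINTS = np.array([0.94, 0.97, 1.03, 1.06])   # assumed voltage breakpoints [p.u.]
Q_POINTS = np.array([0.44, 0.00, 0.00, -0.44])  # assumed Q setpoints [p.u.]

def q_setpoint(v_pu):
    """Inject reactive power at low voltage, absorb it at high voltage."""
    return float(np.interp(v_pu, V_POINTS, Q_POINTS))

for v in (0.93, 0.98, 1.05):
    print(f"V = {v:.2f} p.u. -> Q = {q_setpoint(v):+.3f} p.u.")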
Abstract:
Developing software is a difficult and error-prone activity. Furthermore, the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction – created by R.-J. Back – is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, the interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. This thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion in the thesis covers various aspects of software development that relate to Stepwise Feature Introduction. More specifically, we evaluate the paradigm against the common practices of object-oriented programming and design and of agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
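To give a flavour of the layered structure described above, the sketch below shows one feature added on top of a base layer so that the lower layer's behaviour is preserved. It is a minimal Python illustration with hypothetical class names, not an example from the thesis' case studies.

# Minimal sketch of Stepwise Feature Introduction: each layer builds on the
# previous one and only adds functionality, so lower layers keep working
# unchanged. Class and method names are hypothetical.
class EditorCore:
    """Layer 0: a bare text buffer."""
    def __init__(self):
        self.text = ""
    def insert(self, s):
        self.text += s

class EditorWithUndo(EditorCore):
    """Layer 1: introduces undo without altering layer-0 behaviour."""
    def __init__(self):
        super().__init__()
        self._history = []
    def insert(self, s):
        self._history.append(self.text)  # snapshot before the layer-0 action
        super().insert(s)                # delegate to the layer below
    def undo(self):
        if self._history:
            self.text = self._history.pop()

e = EditorWithUndo()
e.insert("stepwise ")
e.insert("feature introduction")
e.undo()
print(e.text)  # -> "stepwise "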
Abstract:
The aim of this study was to investigate the diagnosis delay and its impact on the stage of disease. The study also evaluated nuclear DNA content, the immunohistochemical expression of Ki-67 and bcl-2, and the correlation of these biological features with the clinicopathological features and patient outcome. 200 Libyan women diagnosed during 2008–2009 were interviewed about the period from the first symptoms to the final histological diagnosis of breast cancer. Retrospective preclinical and clinical data were also collected from medical records on a form (questionnaire) in association with the interview. Tumor material of the patients was collected, and the nuclear DNA content was analysed using DNA image cytometry. The expression of Ki-67 and bcl-2 was assessed using immunohistochemistry (IHC). The studies described in this thesis show that the median diagnosis time for women with breast cancer was 7.5 months, and 56% of the patients were diagnosed after a period longer than 6 months. Inappropriate reassurance that the lump was benign was an important reason for the prolongation of the diagnosis time. Diagnosis delay was also associated with initial breast symptom(s) that did not include a lump, old age, illiteracy, and a history of benign fibrocystic disease. The patients with diagnosis delay had larger tumours (p<0.0001), positive lymph nodes (p<0.0001) and a higher incidence of late clinical stages (p<0.0001). Biologically, 82.7% of the tumors were aneuploid and 17.3% were diploid. The median S-phase fraction (SPF) of the tumors was 11%, while the median positivity of Ki-67 was 27.5%. High Ki-67 expression was found in 76% of the patients, and high SPF values in 56% of the patients. Positive bcl-2 expression was found in 62.4% of the tumors, and 72.2% of the bcl-2 positive samples were ER-positive. Tumors with DNA aneuploidy, high proliferative activity and negative bcl-2 expression were associated with a high grade of malignancy and short survival. The SPF value is a useful cell proliferation marker in assessing prognosis, and the decision cut point of 11% for SPF in the Libyan material was clearly significant (p<0.0001). Bcl-2 is a powerful prognosticator and an independent predictor of breast cancer outcome in the Libyan material (p<0.0001). Libyan breast cancer was investigated in these studies from two different aspects: health services and biology. The results show that diagnosis delay is a very serious problem in Libya and is associated with complex interactions between many factors, leading to advanced stages and potentially to high mortality. Cytometric DNA variables, proliferative markers (Ki-67 and SPF) and bcl-2 oncoprotein negativity reflect the aggressive behavior of Libyan breast cancer and could be used together with traditional factors to predict the outcome of individual patients and to select the appropriate therapy.
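As an aside on method, a cut point such as the 11% SPF threshold is commonly assessed by dichotomizing the patients and comparing survival between the groups, for example with a log-rank test. The sketch below uses synthetic data and the lifelines package purely to illustrate that kind of analysis; it does not reproduce the thesis data.

import numpy as np
from lifelines.statistics import logrank_test

# Illustrative sketch on synthetic data: dichotomize tumors at the SPF cut
# point of 11% and compare survival between the groups with a log-rank test.
rng = np.random.default_rng(0)
spf = rng.uniform(2, 30, size=200)                # synthetic S-phase fractions [%]
high = spf >= 11.0                                # the cut point from the thesis
# Synthetic survival times, shorter on average in the high-SPF group:
time = np.where(high, rng.exponential(40, 200), rng.exponential(90, 200))
event = rng.random(200) < 0.7                     # True = death observed

res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high], event_observed_B=event[~high])
print(f"log-rank p-value: {res.p_value:.4g}")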
Abstract:
This doctoral thesis introduces an improved control principle for active du/dt output filtering in variable-speed AC drives, together with performance comparisons with previous filtering methods. The effects of power semiconductor nonlinearities on the output filtering performance are investigated. The nonlinearities include the timing deviation and the voltage pulse waveform distortion in the variable-speed AC drive output bridge. Active du/dt output filtering (ADUDT) is a method to mitigate motor overvoltages in variable-speed AC drives with long motor cables, and a fairly recent addition to the du/dt reduction methods available. This thesis improves on the existing control method for the filter and concentrates on the low-voltage (below 1 kV AC) two-level voltage-source inverter implementation of the method. ADUDT uses narrow voltage pulses, with durations in the order of a microsecond, from an IGBT (insulated gate bipolar transistor) inverter to control the output voltage of a tuned LC filter circuit. The filter output voltage thus has increased slope transition times at the rising and falling edges, ideally with no overshoot. The effect of the longer slope transition times is a reduction in the du/dt of the voltage fed to the motor cable, and lower du/dt values reduce the overvoltage effects at the motor terminals. Compared with traditional output filtering methods for this task, active du/dt filtering allows lower inductance values and a smaller physical size of the filter itself; the filter circuit weight can also be reduced. However, the power semiconductor nonlinearities skew the filter control pulse pattern, resulting in control deviation. This deviation introduces unwanted overshoot and resonance in the filter. The control method proposed in this thesis is able to directly compensate for the dead-time-induced zero-current clamping (ZCC) effect in the pulse pattern. It gives more flexibility to the pattern structure, which could help in the design of the timing deviation compensation. Previous studies have shown that when a motor load current flows in the filter circuit and the inverter, the phase leg blanking times distort the voltage pulse sequence fed to the filter input. These blanking times are caused by excessively large dead time values between the IGBT control pulses. Moreover, the various switching timing distortions present in real-world electronics operating on a microsecond timescale bring additional skew to the control. Left uncompensated, this results in distortion of the filter input voltage and a filter self-induced overvoltage in the form of an overshoot. This overshoot adds to the voltage appearing at the motor terminals, thus increasing the transient voltage amplitude at the motor. This doctoral thesis investigates the magnitude of such timing deviation effects. If the motor load current is left uncompensated in the control, the filter output voltage can overshoot up to double the input voltage amplitude. IGBT nonlinearities were observed to cause a smaller overshoot, in the order of 30%. This thesis introduces an improved ADUDT control method that is able to compensate for the phase leg blanking times, giving flexibility to the pulse pattern structure and the dead times. The control method is still sensitive to timing deviations, and their effect is investigated. A simple approach of using a fixed delay compensation value was tried in the test setup measurements.
The ADUDT method with the new control algorithm was found to work in an actual motor drive application. Judging by the simulation results, with the delay compensation the method should ultimately enable an output voltage performance and a du/dt reduction that are free from residual overshoot effects. The proposed control algorithm is not strictly required for successful ADUDT operation: it is possible to precalculate the pulse patterns by iteration and then, for instance, store them in a look-up table inside the control electronics. Rather, the newly developed control method is a mathematical tool for solving the ADUDT control pulses. It does not contain the timing deviation compensation (from the logic-level command to the phase leg output voltage) and as such is not able to remove the timing deviation effects that cause error and overshoot in the filter. When the timing deviation compensation has to be tuned into the control pattern, the precalculated iteration method could prove simpler and equally good (or even better) compared with the mathematical solution with a separate timing compensation module. One of the key findings in this thesis is the conclusion that the correctness of the pulse pattern structure, in the sense of ZCC and predicted pulse timings, cannot be separated from the timing deviations: the usefulness of the correctly calculated pattern is reduced by the voltage edge timing errors. The doctoral thesis provides an introductory background chapter on variable-speed AC drives and the problem of motor overvoltages, and takes a look at traditional solutions for overvoltage mitigation. Previous results related to active du/dt filtering are discussed. The basic operation principle and design of the filter have been studied previously, and the effect of the load current in the filter and the basic idea of compensation have been presented in the past. However, there was no direct way of including the dead time in the control (except for solving the pulse pattern manually by iteration), and the magnitude of the nonlinearity effects had not been investigated. The enhanced control principle with the dead-time handling capability and a case study of the test setup timing deviations are the main contributions of this doctoral thesis. The simulation and experimental setup results show that the proposed control method can be used in an actual drive. Loss measurements and a comparison of active du/dt output filtering with traditional output filtering methods are also presented in the work. Two different ADUDT filter designs are included, with ferrite-core and air-core inductors. The other filters included in the tests were a passive du/dt filter and a passive sine filter. The loss measurements incorporated a silicon carbide diode-equipped IGBT module, and the results show lower losses with these new device technologies. The new control principle was measured in a motor drive system with a 43 A load current and brought the filter output peak voltage from 980 V (with the previous control principle) down to 680 V in a variable-speed drive with a 540 V average DC link voltage. A 200 m motor cable was used, and the filter losses for the active du/dt methods were 111 W–126 W versus 184 W for the passive du/dt filter. In terms of inverter and filter losses, the active du/dt filtering method had a 1.82-fold increase in losses compared with an all-passive traditional du/dt output filter.
The filter mass with the active du/dt method was 17% (2.4 kg, air-core inductors) of the 14 kg mass of the passive du/dt filter. Silicon carbide freewheeling diodes were found to reduce the inverter losses in active du/dt filtering by 18% compared with the same IGBT module with silicon diodes. For a 200 m cable length, the average peak voltage at the motor terminals was 1050 V with no filter, 960 V with the all-passive du/dt filter, and 700 V with the active du/dt filtering applying the new control principle.
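To see where the overshoot figures above come from, consider the tuned LC filter alone: driven by a raw, unshaped voltage step, an undamped LC circuit rings up to roughly twice the input amplitude, which is the resonance the ADUDT pulse pattern is designed to suppress. The sketch below integrates this with explicit Euler; the component values are illustrative assumptions, not the thesis hardware.

import numpy as np

# Minimal sketch: undamped series-LC filter response to a raw voltage step.
# L and C are assumed values; VDC matches the 540 V DC link from the text.
L, C = 10e-6, 1e-6           # assumed inductance [H] and capacitance [F]
VDC = 540.0                  # DC link voltage [V]
dt, T = 2e-9, 60e-6          # time step and simulated span [s]

v_out, i_l = 0.0, 0.0
peak, max_dudt = 0.0, 0.0
for _ in range(int(T / dt)):
    i_l += dt * (VDC - v_out) / L    # L di/dt = v_in - v_out
    dv = dt * i_l / C                # C dv/dt = i_L (load current neglected)
    v_out += dv
    peak = max(peak, v_out)
    max_dudt = max(max_dudt, dv / dt)

print(f"peak output: {peak:.0f} V  (about 2 x VDC for an unshaped step)")
print(f"max du/dt:   {max_dudt / 1e6:.0f} V/us")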
Abstract:
This thesis studies the possibility of using information on insiders' transactions to forecast future stock returns after the implementation of the Sarbanes-Oxley Act in July 2003. Insider transactions between July 2003 and August 2009 are analysed with regression tests to identify the relationships between insiders' transactions and future stock returns. This analysis is complemented with rudimentary bootstrapping procedures to verify the robustness of the findings. The underlying assumption of the thesis is that insiders constantly receive pieces of information that indicate the future performance of the company. They may not be allowed to trade on large and tangible pieces of information, but they can trade on the accumulation of smaller, intangible pieces of information. Based on the analysis in the thesis, insiders' profits were found not to differ from the returns of a broad stock index. However, their individual transactions were found to be linked to future stock returns. The initial model was found to be unstable, but some of its predictive power could be sacrificed to achieve greater stability. Even after sacrificing some predictive power, the relationship was significant enough to allow external investors to achieve abnormal profits after transaction costs and taxes. The thesis does not go into great detail about the timing of transactions. The delay in publishing insiders' transactions is not taken into account in the calculations, and the closed windows are not studied in detail. The potential effects of these phenomena are looked into, and they do not cause great changes in the findings. Additionally, the remuneration policy of an insider or a company is not taken into account, even though it most likely affects the trading patterns of insiders. Even with these limitations, the findings offer promising opportunities for investors to improve their investment processes by incorporating additional information from insiders' transactions into their decisions. The findings also raise questions on how insider trading should be regulated. Insiders achieve greater returns than other investors based on superior information; on the other hand, more efficient information transfer could warrant more lenient regulation. The fact that insiders' returns are dominated by the large investment stake they maintain at all times in their own companies also speaks for more leniency. As the Sarbanes-Oxley Act considerably modified the insider trading landscape, this analysis provides information that has not been available before. The thesis also constitutes a thorough analysis of the insider trading phenomenon, which has previously been somewhat fragmented across several studies.
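To illustrate the kind of regression-plus-bootstrap analysis described above, the sketch below regresses synthetic future returns on a synthetic insider-buying signal with ordinary least squares and bootstraps the slope. All data and variable names are made up for illustration; none of it reproduces the thesis dataset.

import numpy as np

# Illustrative sketch on synthetic data: OLS of future returns on a net
# insider-buying signal, with a bootstrap of the slope for robustness.
rng = np.random.default_rng(42)
n = 500
signal = rng.normal(size=n)                                 # hypothetical signal
future_ret = 0.004 * signal + rng.normal(0, 0.05, size=n)   # weak true link

X = np.column_stack([np.ones(n), signal])
beta = np.linalg.lstsq(X, future_ret, rcond=None)[0]

boot = []
for _ in range(2000):                                       # resample rows
    idx = rng.integers(0, n, size=n)
    boot.append(np.linalg.lstsq(X[idx], future_ret[idx], rcond=None)[0][1])
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])

print(f"slope = {beta[1]:.4f}, 95% bootstrap CI = [{lo_ci:.4f}, {hi_ci:.4f}]")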
Abstract:
Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, may not be a simple endeavor. This thesis concentrates on studying the challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors for such emerging challenges. The topic is approached through the concepts of agility and of the different agile methods as compared to traditional plan-driven processes, through complex adaptive systems theory, and through the impact of organizational culture on agile transformation efforts. The empirical part was conducted as a qualitative case study. The internationally operating software development case organization had a year of experience with an agile transformation effort, during which it had also undergone organizational realignment efforts. The primary data collection was conducted through semi-structured interviews, supported by participatory observation. As a result, the identified challenges were categorized under four broad themes: organizational, management, team dynamics, and process related. The identified challenges indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually help solve the issues: interactions are more important than processes, and solving a complex problem, such as novel software development, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are the pivotal part of the success in achieving agility. If agility is not a strategic choice for the whole organization, additional issues may arise due to different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards and outwards to accelerate internal innovations and to take unused knowledge to the external environment, respectively. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM and Xerox, among others, suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process which involves several domains of management, and whose terminology, classification and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: how could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show the theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple reasons, with a stronger influence of factors related to strategy, business models and resource gaps. Based on these and other factors found in the literature review and the observations, it was possible to construct a model that supports approaching open innovation. The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines and tools to approach open innovation and to assess the selection of modes. Measuring the impact of open innovation can take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
Microsoft System Center Configuration Manager is a systems management product for managing large groups of computers and/or mobile devices. It provides operating system deployment, software distribution, patch management, hardware and software inventory, remote control and many other features for the managed clients. This thesis focuses on researching whether the product is suitable for a large, international organization with no previous centralized solution for managing all such networked devices, and on detecting areas where the system can be altered to achieve a more optimal management product from the company's perspective. The results showed that the system is suitable for such an organization if it is properly configured and a clear and transparent line of communication exists between key IT personnel.
Abstract:
Wind energy has attracted great expectations owing to the risks of global warming and of nuclear energy production plant accidents. Nowadays, wind farms are often constructed in areas of complex terrain. A potential wind farm location must have the site thoroughly surveyed and the wind climatology analyzed before any hardware is installed. Therefore, the modeling of Atmospheric Boundary Layer (ABL) flows over complex terrains containing, e.g., hills, forest and lakes is of great interest in wind energy applications, as it can help in locating and optimizing wind farms. Numerical modeling of wind flows using Computational Fluid Dynamics (CFD) has become a popular technique during the last few decades. Owing to the inherent flow variability and large-scale unsteadiness typical of ABL flows in general, and especially over complex terrains, the flow can be difficult to predict accurately enough using the Reynolds-Averaged Navier-Stokes (RANS) equations. Large-Eddy Simulation (LES) resolves the largest, and thus most important, turbulent eddies and models only the small-scale motions, which are more universal than the large eddies and therefore easier to model. LES is consequently expected to be more suitable for this kind of simulation, although it is computationally more expensive than the RANS approach. With the fast development of computers and open-source CFD software in recent years, the application of LES to atmospheric flows is becoming increasingly common. The aim of the work is to simulate atmospheric flows over realistic and complex terrains by means of LES, with the evaluation of potential inland wind park locations as the main application for these simulations. The development of the LES methodology for simulating atmospheric flows over realistic terrains is reported in the thesis; the work also aims at validating the LES methodology at real scale. In the thesis, LES are carried out for flow problems ranging from basic channel flows to real atmospheric flows over one of the most recent real-life complex terrain problems, the Bolund hill. All the simulations reported in the thesis are carried out using a new OpenFOAM®-based LES solver. The solver uses a fourth-order time-accurate Runge-Kutta scheme and a fractional step method. Moreover, the development of the LES methodology pays special attention to two boundary conditions: the upstream (inflow) and the wall boundary condition. The upstream boundary condition is generated using the so-called recycling technique, in which the instantaneous flow properties are sampled on a plane downstream of the inlet and mapped back to the inlet at each time step. This technique develops the upstream boundary-layer flow together with the inflow turbulence without any precursor simulation and thus within a single computational domain. The roughness of the terrain surface is modeled by a new wall function implemented into OpenFOAM® during the thesis work. Both the recycling method and the newly implemented wall function are validated for channel flows at a relatively high Reynolds number before being applied to the atmospheric flow applications. After validating the LES model on simple flows, the simulations are carried out for atmospheric boundary-layer flows over two types of hills: first, two-dimensional wind-tunnel hill profiles, and second, the Bolund hill located in Roskilde Fjord, Denmark. For the two-dimensional wind-tunnel hills, the study focuses on the overall flow behavior as a function of the hill slope.
Moreover, the simulations are repeated using another wall function suitable for smooth surfaces, which already existed in OpenFOAM®, in order to study the sensitivity of the flow to the surface roughness in ABL flows. The simulated results obtained using the two wall functions are compared against the wind-tunnel measurements. It is shown that LES using the implemented wall function produces overall satisfactory results for the turbulent flow over the two-dimensional hills. The prediction of the flow separation and the reattachment length for the steeper hill is closer to the measurements than the other numerical studies reported in the past for the same hill geometry. The field measurement campaign performed over the Bolund hill provides the most recent field-experiment dataset for the mean flow and the turbulence properties, and a number of research groups have simulated the wind flows over the Bolund hill. Owing to challenging features such as the almost vertical hill slope, it is considered an ideal experimental test case for validating micro-scale CFD models for wind energy applications. In this work, the simulated results obtained for two wind directions are compared against the field measurements. It is shown that the present LES can reproduce the complex turbulent wind flow structures over a complicated terrain such as the Bolund hill. In particular, the present LES results show the best prediction of the turbulent kinetic energy, with an average error of 24.1%, which is 43% smaller than any other model result reported in the past for the Bolund case. Finally, the validated LES methodology is demonstrated by simulating the wind flow over the existing Muukko wind farm located in south-eastern Finland. The simulation is carried out for one wind direction only, and the results for the instantaneous and time-averaged wind speeds are briefly reported. The demonstration case is followed by a discussion of the practical aspects of LES for wind resource assessment over a realistic inland wind farm.
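The recycling inflow technique mentioned above reduces, in essence, to copying the instantaneous velocity from a downstream sampling plane back to the inlet at every time step, after separating it into a mean profile and fluctuations. The numpy sketch below is schematic only: the grid, the field and the target profile are toy stand-ins, and the actual solver operates on OpenFOAM® fields.

import numpy as np

# Schematic sketch of the recycling inflow technique: at every time step,
# sample the velocity on a plane downstream of the inlet and map it back
# onto the inlet, so inflow turbulence develops within the same domain
# without a precursor simulation. All sizes and values are toy stand-ins.
nx, ny, nz = 128, 32, 32
u = np.random.default_rng(1).normal(10.0, 1.0, size=(nx, ny, nz))
RECYCLE_I = 40  # streamwise index of the sampling plane (assumed)

def recycle_inlet(u, target_mean_profile):
    """Copy the sampled plane to the inlet with a prescribed mean profile."""
    plane = u[RECYCLE_I].copy()
    mean_profile = plane.mean(axis=0)         # average over the spanwise axis
    fluctuation = plane - mean_profile        # keep the resolved turbulence
    u[0] = target_mean_profile + fluctuation  # impose desired mean + recycled
    return u

target = np.full(nz, 10.0)  # in practice e.g. a logarithmic ABL profile
u = recycle_inlet(u, target)
print(u[0].mean(), u[0].std())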