45 results for test data generation
Abstract:
The economic competitiveness of various power plant alternatives is compared. The comparison covers only electricity-producing power plants. Combined heat and power (CHP) production will cover part of the future power deficit in Finland, but condensing power plants for base-load production will also be needed. The following types of power plants are studied: nuclear power plant, combined cycle gas turbine plant, coal-fired condensing power plant, peat-fired condensing power plant, wood-fired condensing power plant and wind power plant. The calculations are carried out using the annuity method with a real interest rate of 5% per annum and a fixed price level as of January 2008. With an annual peak load utilization time of 8000 hours (corresponding to a load factor of 91.3%), the production costs would be 35.0 €/MWh for nuclear electricity, 59.2 €/MWh for gas-based electricity and 64.4 €/MWh for coal-based electricity, when using a price of 23 €/ton CO2 for carbon dioxide emission trading. Without emission trading, the production cost of gas electricity is 51.2 €/MWh and that of coal electricity 45.7 €/MWh, while nuclear remains the same (35.0 €/MWh). In order to study the impact of changes in the input data, a sensitivity analysis has been carried out. It reveals that the advantage of nuclear power is quite clear. For example, nuclear electricity is rather insensitive to changes in the nuclear fuel price, whereas for the natural gas alternative the rising trend of the gas price poses the greatest risk. Furthermore, an increase in the emission trading price improves the competitiveness of the nuclear alternative. The competitiveness and payback of the nuclear power investment are also studied in their own right by using various electricity market prices to determine the revenues generated by the investment. The profitability of the investment is excellent if the market price of electricity is 50 €/MWh or more.
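As a reading aid, here is a minimal sketch of the annuity-method cost calculation described above. Only the 5% real interest rate and the 8000-hour peak load utilization time come from the abstract; the plant lifetime, investment cost and operating costs in the example are illustrative assumptions, not the thesis figures.

```python
# Minimal sketch of the annuity-method cost calculation described above.
# The 5% real interest rate and 8000 h peak-load utilization come from the
# abstract; lifetime, investment and operating costs below are placeholders.

def annuity_factor(rate: float, years: int) -> float:
    """Capital recovery factor used by the annuity method."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def production_cost(overnight_cost_eur_per_kw: float,
                    fixed_om_eur_per_kw_a: float,
                    variable_cost_eur_per_mwh: float,
                    rate: float = 0.05,
                    years: int = 40,
                    utilization_h: int = 8000) -> float:
    """Levelized production cost in EUR/MWh."""
    annual_capital = overnight_cost_eur_per_kw * annuity_factor(rate, years)
    mwh_per_kw = utilization_h / 1000.0  # annual energy per kW of capacity
    return (annual_capital + fixed_om_eur_per_kw_a) / mwh_per_kw + variable_cost_eur_per_mwh

# Load factor implied by 8000 h/a of peak-load utilization:
print(f"load factor = {8000 / 8760:.1%}")            # ~91.3%, as stated above
print(f"example cost = {production_cost(3000, 60, 10):.1f} EUR/MWh")
```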
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, so that a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done from a distance, as the electronics are not accessible except for some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is extensively used, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and we are now waiting to see it in action when the LHC starts running in autumn 2008.
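As an illustration of the zero suppression mentioned above, the sketch below transmits only the channels that actually fired, together with their addresses. The frame size and encoding are illustrative placeholders, not the CMS RPC link format.

```python
# Minimal sketch of zero suppression: only channels that fired are sent,
# each as an (address, value) pair. Frame layout is illustrative only.

def zero_suppress(channels: list[int]) -> list[tuple[int, int]]:
    """Return (address, value) pairs for non-zero channels only."""
    return [(addr, val) for addr, val in enumerate(channels) if val != 0]

frame = [0] * 96
frame[17] = 1
frame[63] = 1
print(zero_suppress(frame))   # [(17, 1), (63, 1)] -- far fewer words than 96
```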
Abstract:
The environmental impact of landfills is a growing concern in waste management practices, so assessing the effectiveness of the solutions implemented to mitigate the issue is important. The objectives of the study were to provide insight into the advantages of landfills and to consolidate the importance of landfill gas among other alternative fuels. Finally, a case study examining the performance of energy production from a landfill at Ylivieska was carried out to ascertain the viability of a waste-to-energy project. Both qualitative and quantitative methods were applied. The study was conducted in two parts; the first was a review of the literature on landfill gas developments. Specific considerations were the mechanisms governing the variability of gas production and the mathematical models often used in landfill gas modeling. Furthermore, an analysis of the two main distributed generation technologies used to generate energy from landfills was carried out. The literature review revealed the strong influence of waste segregation and a high moisture content on the waste stabilization process. It was found that the accuracy of forecasting gas generation rates can be enhanced by combining mathematical modeling with field test measurements. The results of the case study mainly indicated the close dependence of the power output on the landfill gas quality and the fuel inlet pressure.
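To make the modeling step concrete, the sketch below implements a first-order decay model of the kind commonly used for landfill gas forecasting (for example, the US EPA LandGEM formulation). The abstract does not name the specific model used in the thesis, and the decay constant, methane potential and waste tonnages below are illustrative assumptions only.

```python
import math

# First-order decay sketch of landfill gas generation. The parameters k
# (decay constant), L0 (methane potential) and the waste tonnages are
# illustrative assumptions, not values from the thesis.

def methane_generation(waste_by_year: dict[int, float],
                       year: int,
                       k: float = 0.05,       # 1/a (assumption)
                       L0: float = 100.0) -> float:  # m3 CH4 / tonne (assumption)
    """First-order estimate of CH4 generation (m3/a) in a given year."""
    return sum(k * L0 * mass * math.exp(-k * (year - placed))
               for placed, mass in waste_by_year.items()
               if placed <= year)

waste = {2000 + i: 20_000.0 for i in range(10)}   # tonnes placed per year
print(f"{methane_generation(waste, 2015):,.0f} m3 CH4 in 2015")
```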
Abstract:
The purpose of this work is to determine how electronic demand generation can be utilized at Mantsinen Group Ltd Oy in a way that supports sales. In addition, the effectiveness of electronic demand generation is examined in order to find out whether it is profitable and how well it suits the company. The use of the demand generation system was first defined based on the literature, after which the system was taken into use. The effectiveness of electronic demand generation is measured with real data from a three-month review period, and its suitability is assessed based on its cost-effectiveness and results. The results of the work show that electronic demand generation is profitable and suits the company. It best supports sales by producing a steady stream of high-quality sales leads for the sales team. Tasks previously performed manually can also be automated, thus reducing the salespeople's workload.
Abstract:
In this thesis, the components important for testing work and the organisational test process are identified and analysed. This work focuses on the testing activities in real-life software organisations, identifying the important test process components, observing testing work in practice, and analysing how the organisational test process could be developed. Software professionals from 14 different software organisations were interviewed to collect data on the organisational test process and testing-related factors. Moreover, additional data on organisational aspects was collected with a survey conducted in 31 organisations. This data was further analysed with the Grounded Theory method to identify the important test process components and to observe how real-life test organisations develop their testing activities. The results indicate that test management at the project level is an important factor; the organisations do have sufficient test resources available, but they are not necessarily applied efficiently. In addition, organisations in general are reactive; they develop their process mainly to correct problems, not to enhance their efficiency or output quality. The results of this study allow organisations to gain a better understanding of their test processes and to develop towards better practices and a culture of preventing problems rather than reacting to them.
Abstract:
CHARGE syndrome, Sotos syndrome and 3p deletion syndrome are examples of rare inherited syndromes that have been recognized for decades but for which molecular diagnostics have only been made possible by recent advances in genomic research. Despite these advances, the development of diagnostic tests for rare syndromes has been hindered by diagnostic laboratories having limited funds for test development and by their prioritization of tests for which a (relatively) high demand can be expected. In this study, molecular diagnostic tests for CHARGE syndrome and Sotos syndrome were developed, resulting in their successful translation into routine diagnostic testing in the laboratory of Medical Genetics (UTUlab). A mutation was identified in 40.5% of the patients in the CHARGE syndrome group and in 34% of the Sotos syndrome group, reflecting the use of the tests in routine differential diagnostics. In CHARGE syndrome, the low prevalence of structural aberrations was also confirmed. In 3p deletion syndrome, it was shown that small terminal deletions are not causative for the syndrome, and that testing with array-based analysis provides a reliable estimate of the deletion size, although benign copy number variants complicate result interpretation. During the development of the tests, it was discovered that finding an optimal molecular diagnostic strategy for a given syndrome is always a compromise between the sensitivity, specificity and feasibility of applying a new method. In addition, the clinical utility of the test should be considered prior to test development: sometimes a test performing well in a laboratory has limited utility for the patient, whereas a test performing poorly in the laboratory may have a great impact on the patient and their family. At present, the development of next-generation sequencing methods is changing the concept of molecular diagnostics of rare diseases from single tests towards whole-genome analysis.
Abstract:
This thesis focuses on tissue inhibitor of metalloproteinases 4 (TIMP4), the newest member of a small gene and protein family of four closely related endogenous inhibitors of extracellular matrix (ECM) degrading enzymes. Existing data on TIMP4 suggested that it exhibits a more restricted expression pattern than the other TIMPs, with high expression levels in heart, brain, ovary and skeletal muscle. These observations, and the fact that the ECM is of special importance in providing the cardiovascular system with structural strength combined with elasticity and distensibility, prompted the present molecular biological investigation of TIMP4. In the first part of the study, the murine Timp4 gene was cloned and characterized in detail. The structure of the murine Timp4 genomic locus resembles that of other species and of the other Timps. The highest Timp4 expression was detected in heart, ovary and brain. As the expression pattern of Timp4 gives only limited information about its role in physiology and pathology, Timp4 knockout mice were generated next. The analysis of Timp4 knockout mice revealed that Timp4 deficiency has no obvious effect on the development, growth or fertility of mice. Therefore, Timp4-deficient mice were challenged using available cardiovascular models, i.e. experimental cardiac pressure overload and myocardial infarction. In the former model, Timp4 deficiency was found to be compensated by Timp2 overexpression, whereas in the myocardial infarction model, Timp4 deficiency resulted in increased mortality due to increased susceptibility to cardiac rupture. In the wound healing model, Timp4 deficiency was shown to result in transient retardation of re-epithelialization of cutaneous wounds. Melanoma tumor growth was similar in Timp4-deficient and control mice; despite this, lung metastasis of melanoma cells was significantly increased in Timp4 null mice. In an attempt to translate the current findings to patient material, TIMP4 expression was studied in human specimens representing different inflammatory cardiovascular pathologies, i.e. giant cell arteritis, atherosclerotic coronary arteries and heart allografts exhibiting signs of chronic rejection. The results showed that cardiovascular expression of TIMP4 is elevated particularly in areas exhibiting inflammation. The results of the present studies suggest that TIMP4 has a special role in the regulation of tissue repair processes in the heart, and also in healing wounds and metastases. Furthermore, evidence is provided suggesting the usefulness of TIMP4 as a novel systemic marker for vascular inflammation.
Abstract:
Presentation at the Nordic Perspectives on Open Access and Open Science seminar, Helsinki, October 15, 2013
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis. These challenges include survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility to use combined longitudinal survey and register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey and register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data. Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to design weights reduces bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
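For readers unfamiliar with IPCW, the sketch below illustrates the idea in a weighted Kaplan-Meier estimator: each observation is weighted by the inverse of its estimated probability of remaining uncensored. The simulated data and the censoring model are illustrative only; they are not the FI ECHP data or the estimators used in the thesis.

```python
import numpy as np

# Minimal numpy sketch of IPCW: weight observations by the inverse of their
# estimated probability of remaining uncensored, then plug the weights into
# a weighted Kaplan-Meier estimator. Data and censoring model are simulated.

rng = np.random.default_rng(0)
n = 500
true_dur = rng.exponential(12.0, n)          # true spell length (months)
cens_time = rng.exponential(20.0, n)         # attrition / censoring time
duration = np.minimum(true_dur, cens_time)
observed = (true_dur <= cens_time).astype(float)

# Probability of still being uncensored at the observed time; here taken from
# the known censoring model, in practice estimated from a drop-out model.
p_uncensored = np.exp(-duration / 20.0)
ipcw = 1.0 / np.clip(p_uncensored, 0.05, None)   # truncate extreme weights

def weighted_km(durations, events, weights, t_grid):
    """Weighted Kaplan-Meier survival estimates on a grid of times."""
    order = np.argsort(durations)
    d, e, w = durations[order], events[order], weights[order]
    surv, s, i = [], 1.0, 0
    for t in t_grid:
        while i < len(d) and d[i] <= t:
            at_risk = w[i:].sum()
            if e[i] and at_risk > 0:
                s *= 1.0 - w[i] / at_risk
            i += 1
        surv.append(s)
    return np.array(surv)

grid = np.linspace(0, 40, 9)
print(np.round(weighted_km(duration, observed, ipcw, grid), 3))
```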
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The research of this Master's Thesis project concerns Big Data transfer over parallel data links, and my main objective was to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and to apply Green IT methods to the data transfer system. The goal of the team is to transfer Big Data by using parallel data links with an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications, determine which achieves the highest data transfer speed under which conditions, and explain the reasons. In the context of this thesis, a comparison between five different utilities was carried out: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed to create random binary data (incompressible, so that the comparison between utilities is fair), execute the utilities with specified parameters, create log files with results and system parameters, and plot graphs to compare the results. Transferring such enormous volumes of data can take a long time; hence the need to reduce energy consumption and make the transfers greener. In the context of the Green IT approach, our team used a Cloud Computing infrastructure, OpenStack. It is more efficient to allocate a specific amount of hardware resources to test different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure showed that the virtual channel carries no other traffic, so the highest possible throughput can be achieved. After receiving the final results, we can identify which utilities produce faster data transfer in different scenarios with specific TCP parameters and use them on real network data links.
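The sketch below shows the general shape of such a benchmark script: it generates incompressible random test data and times a transfer command. The file size, paths and the scp command are illustrative placeholders; the actual thesis scripts invoked FDT, BBCP, BBFTP, GridFTP and FTS3 with their own parameters.

```python
import os
import time
import subprocess

# Sketch of a transfer benchmark: generate incompressible random data, run a
# transfer command, report throughput. The scp command and paths below are
# placeholders, not the thesis utilities or hosts.

def make_random_file(path: str, size_mb: int) -> None:
    """Write size_mb MiB of random bytes (effectively incompressible)."""
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(1024 * 1024))

def time_transfer(cmd: list[str]) -> float:
    """Run a transfer command and return the elapsed wall-clock seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    return time.monotonic() - start

if __name__ == "__main__":
    make_random_file("/tmp/testfile.bin", 100)
    elapsed = time_transfer(["scp", "/tmp/testfile.bin", "host:/tmp/"])  # placeholder host
    print(f"throughput: {100 * 8 / elapsed:.1f} Mbit/s")
```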
Integration of marketing research data in new product development. Case study: Food industry company
Abstract:
The aim of this master's thesis is to provide a real-life example of how marketing research data is used by different functions in the NPD process. In order to achieve this goal, a case study was carried out in a company, in which the gathering, analysis, distribution and synthesis of marketing research data in NPD were studied. The main research question was formulated as follows: How is marketing research data integrated and used by different company functions in the NPD process? The theory part of the thesis focused on the role of the marketing function in NPD, the use of marketing research particularly in the food industry, and issues related to the marketing/R&D interface during the NPD process. The empirical part of the thesis was based on qualitative explanatory case study research. Individual in-depth interviews with company representatives, company documents and online research were used for data collection and analysed through triangulation. The empirical findings indicate that the most important marketing data sources at the concept generation stage of NPD are global trend monitoring, retail audits and consumer insights. These data sets are crucial for establishing the potential of the product on the market and defining the desired features of the new product to be developed. The findings also provide an example of successful cross-functional communication during the NPD process, with both formal and informal communication patterns. General managerial recommendations are given on integrating strategy, process, continuous improvement and motivated cross-functional product development teams in NPD.
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represents height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. A comparison with existing DTM extraction algorithms showed that the DTM extraction algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain features (e.g., bumps, small hills and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis, in turn, is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their heights estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles of the airborne laser scanner data. However, being based on the idea of a moving voxel, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
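As a small illustration of the normalization step described at the start of this abstract, the sketch below subtracts a DTM elevation from each point's elevation to obtain height above ground. The nearest-grid-cell lookup and the toy data are assumptions for illustration; the thesis algorithms concern deriving the DTM itself from sparse ground returns.

```python
import numpy as np

# Sketch of point-cloud normalization: height above ground = point elevation
# minus the DTM elevation under the point. A nearest-grid-cell lookup is used
# purely for illustration.

def normalize_points(points: np.ndarray,
                     dtm: np.ndarray,
                     origin: tuple[float, float],
                     cell: float) -> np.ndarray:
    """points: (N, 3) array of x, y, elevation; dtm: 2-D grid of ground elevations."""
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    rows = np.clip(rows, 0, dtm.shape[0] - 1)
    cols = np.clip(cols, 0, dtm.shape[1] - 1)
    heights = points[:, 2] - dtm[rows, cols]
    return np.column_stack([points[:, :2], heights])

# Tiny illustrative example: a flat 10 m DTM and two returns.
dtm = np.full((100, 100), 10.0)
pts = np.array([[5.0, 5.0, 28.0], [12.0, 3.0, 10.5]])
print(normalize_points(pts, dtm, origin=(0.0, 0.0), cell=1.0))
# -> heights of 18.0 m (canopy return) and 0.5 m (near-ground return)
```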