15 results for dynamic mathematics software

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

90.00%

Publisher:

Abstract:

The share of variable renewable energy in electricity generation has grown exponentially during recent decades, and due to the heightened pursuit of environmental targets, the trend is expected to continue at an increased pace. The two most important resources, wind and insolation, both suffer from intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found, due to their simplifications, insufficient for studying transient situations, and despite its importance, the investigation of part-load operation has not yet been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which agreed well with values reported in the literature and was validated through analytical calculations. The performance at part load was validated against models presented in the literature, showing good agreement. By introducing wind resource and electricity demand data to the model, grid operation of CAES was studied. In order to enable dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load, and in addition, the implementation of heat losses to the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is one of the important targets of development if the system operation is to be optimised in the future.
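To make the efficiency figure above concrete, the following minimal Python sketch shows how the round-trip (cycle) efficiency of an adiabatic CAES plant can be computed from the charging and discharging energies; the numbers are illustrative placeholders, not values taken from the Apros model.

def caes_cycle_efficiency(charging_energy_mwh, discharging_energy_mwh):
    # Round-trip efficiency: electricity recovered during discharge divided by
    # electricity consumed by the compressor train during charging.
    return discharging_energy_mwh / charging_energy_mwh

# Hypothetical energy totals over one charge-discharge cycle:
print(f"cycle efficiency = {caes_cycle_efficiency(100.0, 58.7):.1%}")  # -> 58.7%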

Relevance:

40.00%

Publisher:

Abstract:

This master's thesis investigates ways to brand and vary S60 software dynamically at run time. S60 is a development platform used by several phone manufacturers, and their phones are used by numerous different operators. Operators want their phones, or some of the phone's applications, to stand out from competitors with their own brand, so there must be means to brand either the whole phone or selected applications. Some applications may be required to switch the brand in use according to the resources they use, such as the network server. It must also be possible to share variation data between different applications or parts of applications. The thesis introduces the Symbian operating system and the S60 development environment, and discusses the challenges that Symbian security policies pose for sharing variation data between applications. Existing variation mechanisms are examined as a possible basis for the work. The thesis includes a description of a project in which a dynamic branding implementation was developed for an S60 application; the implementation also allows variation data to be shared with other applications.

Relevance:

40.00%

Publisher:

Abstract:

This master's thesis evaluates competitors in the market for welding quality management software. The competitive field is new, and there is no precise knowledge of what kinds of competitors are on the market. Welding quality management software helps companies guarantee high quality. The software ensures high quality by verifying that the welder is qualified and follows the welding procedure specifications and the given parameters. In addition, the software collects all the data from the welding process and generates the required documents from it. The theoretical part of the thesis consists of a literature review covering solution business, the theory of competitor analysis and competitive forces, and welding quality management. The empirical part is a qualitative study in which competing welding quality management software products are examined and their users are interviewed. As a result, the thesis presents a new competitor analysis model for welding quality management software. With the model, the software products can be rated on the basis of the primary and secondary features they offer. Secondly, the thesis analyses the current competitive situation by applying the newly developed competitor analysis model.

Relevance:

30.00%

Publisher:

Abstract:

Performance optimization of a complex computer system requires an understanding of the system's run-time behavior. As software grows in size and complexity, performance optimization becomes an increasingly important part of the product development process. With the use of more powerful processors, energy consumption and heat generation have also become ever greater problems, especially in small, portable devices. To limit heat and energy problems, performance scaling methods have been developed, which further increase system complexity and the need for performance optimization. In this work, a visualization and analysis tool was developed to make run-time behavior easier to understand. In addition, a performance metric was developed that enables different scaling methods to be compared and evaluated independently of the execution environment, based either on an execution trace or on theoretical analysis. The tool presents a trace collected at run time in an easily understandable way. Using three-dimensional graphics, it shows, among other things, the processes, the processor load, the operation of the scaling methods, and the energy consumption. The tool also produces numerical information for a user-selected part of the execution view, including several essential performance figures and statistics. The applicability of the tool was examined by analysing an execution trace obtained from a real device and a simulation of performance scaling, and the effect of the scaling mechanism's parameters on the performance of the simulated device was analysed.

Relevance:

30.00%

Publisher:

Abstract:

This research report presents an application of systems theory to evaluating intellectual capital (IC) as an organization's ability for self-renewal. As renewal ability is a dynamic capability of the organization as a whole, rather than a static asset or an atomistic competence of separate individuals within the organization, it needs to be understood systemically. Consequently, renewal ability has to be measured with systemic methods that are based on a thorough conceptual analysis of the systemic characteristics of organizations. The aim of this report is to demonstrate the theory and analysis methodology for grasping companies' systemic efficiency and renewal ability. The volume is divided into three parts. The first deals with the theory of organizations as self-renewing systems. In the second part, the principles of quantitative analysis of organizations are laid down. Finally, the detailed mathematics of the renewal indices are presented. We also assert that the indices produced by the analysis are an effective tool for the management and valuation of knowledge-intensive companies.

Relevance:

30.00%

Publisher:

Abstract:

Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, and whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes. In this thesis, only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution in this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn. Effort estimation accuracy has significantly improved after taking this model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement. The author of this thesis has developed a three-level solution for the estimation model. All currently used size metrics are static in nature, but this new proposed metric is dynamic. It makes use of the increased understanding of the nature of the work as specification and design work proceeds. It thus 'grows up' along with software projects. Effort estimation model development is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering. A major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, estimates are stored, reported and analyzed properly, and they are used for project management activities. A higher-level mechanism called a measurement framework is also introduced briefly. The purpose of the framework is to define and maintain a measurement or estimation process. Without a proper framework, the estimation capability of an organization declines. It requires effort even to maintain an achieved level of estimation accuracy. Estimation results in several successive releases are analyzed. It is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity. An example is shown to shed more light on the calibration and the model itself. There are also remarks about the sensitivity of the model. Finally, an example of usage is shown.
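As a hedged illustration of how estimation accuracy can be followed across successive releases, the short Python sketch below computes the mean magnitude of relative error (MMRE), a standard accuracy metric; the thesis does not necessarily use this exact metric, and the effort figures are hypothetical.

def mmre(estimates, actuals):
    # Mean magnitude of relative error: mean of |actual - estimate| / actual.
    return sum(abs(a - e) / a for e, a in zip(estimates, actuals)) / len(actuals)

estimated_effort = [120, 340, 95]   # person-days, hypothetical
actual_effort = [150, 310, 100]     # person-days, hypothetical
print(f"MMRE = {mmre(estimated_effort, actual_effort):.2f}")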

Relevance:

30.00%

Publisher:

Abstract:

This master's thesis investigates how Symbian application development could be made more efficient. The thesis introduces the Symbian operating system and discusses the challenges and constraints encountered in Symbian application development. Existing development approaches are also considered with respect to the goal of the work. In Symbian application development, the same things are done over and over again. Because Symbian is an open operating system, there are many application developers, and finding a more efficient development approach would save a great deal of resources. At present, conventional programming techniques appear to be the most popular way to develop applications. However, several solutions that aim to make development more efficient already exist, which demonstrates the need for improved efficiency. The system implemented in this work runs Symbian applications based on an XML definition. When an XML definition is used instead of C++ code, application development changes. These changes must, however, be positive and must not compromise the quality or usability of the software.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines. Efficient dynamic simulation is a basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits using explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied: the pressure drop approaching zero in the turbulent orifice model, and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods for modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent area. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stopper, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition areas. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary condition. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this investigation is made for the two-regime flow orifice model. Especially inside many types of valves, as well as between them, there exist very small volumes.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODE) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. A further strength of both of the above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
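To make the orifice problem and its remedy concrete, the following minimal Python sketch implements a two-regime orifice model: the classic turbulent law Q = k*sqrt(dp) above a small transition pressure drop, and below it a cubic polynomial through the origin whose value and first derivative match the turbulent branch at the boundary, so that dQ/d(dp) stays finite at zero pressure drop. The lumped coefficient, the transition pressure drop and the exact form of the low-pressure-drop branch are illustrative assumptions, not the implementation of the thesis.

import math

def make_orifice(k=1e-6, dp_tr=1e5):
    # k    : lumped turbulent flow coefficient (Cq*A*sqrt(2/rho)), assumed value
    # dp_tr: transition pressure drop [Pa] below which the cubic branch is used
    c1 = 5.0 * k / (4.0 * math.sqrt(dp_tr))   # coefficients chosen so that value and
    c3 = -k / (4.0 * dp_tr ** 2.5)            # slope match the turbulent law at dp_tr

    def flow(dp):
        sign = 1.0 if dp >= 0.0 else -1.0
        dp = abs(dp)
        if dp >= dp_tr:
            return sign * k * math.sqrt(dp)        # turbulent branch
        return sign * (c1 * dp + c3 * dp ** 3)     # smooth branch with finite slope at dp = 0

    return flow

q = make_orifice()
for dp in (0.0, 1e4, 1e5, 5e5):
    print(f"dp = {dp:9.1f} Pa -> Q = {q(dp):.3e} m^3/s")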

Relevance:

30.00%

Publisher:

Abstract:

The aim of the study was to create and evaluate an intervention programme for Tanzanian children from a low-income area who are at risk of reading and writing difficulties. The learning difficulties, including reading and writing difficulties, are likely to be behind many of the common school problems in Tanzania, but they are not well understood, and research is needed. The design of the study included an identification and intervention phase with follow-up. A group based dynamic assessment approach was used in identifying children at risk of difficulties in reading and writing. The same approach was used in the intervention. The study was a randomized experiment with one experimental and two control groups. For the experimental and the control groups, a total of 96 (46 girls and 50 boys) children from grade one were screened out of 301 children from two schools in a low income urban area of Dar-es-Salaam. One third of the children, the experimental group, participated in an intensive training programme in literacy skills for five weeks, six hours per week, aimed at promoting reading and writing ability, while the children in the control groups had a mathematics and art programme. Follow-up was performed five months after the intervention. The intervention programme and the tests were based on the Zambian BASAT (Basic Skill Assessment Tool, Ketonen & Mulenga, 2003), but the content was drawn from the Kiswahili school curriculum in Tanzania. The main components of the training and testing programme were the same, only differing in content. The training process was different from traditional training in Tanzanian schools in that principles of teaching and training in dynamic assessment were followed. Feedback was the cornerstone of the training and the focus was on supporting the children in exploring knowledge and strategies in performing the tasks. The experimental group improved significantly more (p = .000) than the control groups during the intervention from pre-test to follow-up (repeated measures ANOVA). No differences between the control groups were noticed. The effect was significant on all the measures: phonological awareness, reading skills, writing skills and overall literacy skills. A transfer effect on school marks in Kiswahili and English was found. Following a discussion of the results, suggestions for further research and adaptation of the programme are presented.

Relevance:

30.00%

Publisher:

Abstract:

The starting point of this study is that the prevailing way of considering the Finnish IT industries and industry information often results in a limited and even skewed picture of the sector. The purpose of the study is to contribute to and increase knowledge and understanding of the status, structure and evolution of the Finnish IT industries as well as of the Finnish IT vendor field and competition. The focus is on the software product and IT services industries, which form a crucial part of all ICT industries. This study examines the Finnish IT sector from the production (supply) as well as the market (demand) perspective. The study is based on empirical information from multiple sources. Three research questions were formulated for the study. The first concerns the status of the Finnish IT industries considered by applying theoretical frameworks. The second research question targets the basis for the future evolution of the Finnish IT industries and, finally, the third the ability of the available definitions and indicators to describe the Finnish IT industries and IT markets. Major structural changes such as technological changes and related innovations, globalization and new business models are drivers of the evolution of the IT industries. The findings of this study emphasize the significant role of IT services in the Finnish IT sector and, in connection with that, the ability to combine IT service skills, competences and practices with high-level software skills also in the future. According to the study, Finnish IT enterprises and their customers have become increasingly dependent on global ecosystems and platforms, applications and IT services provided by global vendors. As a result, more IT decisions are made outside Finland. In addition, IT companies are facing new competition from industries other than IT, which bring new substitutes into the market. To respond to the new competition, IT firms seek growth by expanding beyond their traditional markets. The changing global division of labor accentuates the need for accurate information on the IT sector but, at the same time, also makes it increasingly challenging to acquire the information needed. One of the main contributions of this study is to provide frameworks for describing the Finnish IT sector and its evolution. These frameworks help combine empirical information from various sources and make it easier to concretize the structures, volumes, relationships and interaction of both the production and the market side of the Finnish IT industry. Some frameworks provide tools to analyze the vendor field, competition and the basis for the future evolution of the IT industries. The observations of the study support the argument that static industry definitions and related classifications do not serve the information needs in dynamic industries, such as the IT industries. One of the main messages of this study is to emphasize the importance of understanding the definitions and starting points of different information sources. At the same time, in describing the structure and evolution of the Finnish IT industries, the number of employees has become a more valid and reliable measure than revenue-based indicators.

Relevance:

30.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
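To illustrate the dataflow model of computation described above (actors that communicate only through FIFO queues and fire once sufficient input tokens are available), here is a minimal, generic Python sketch with a naive run-time scheduler; it is not RVC-CAL and not the quasi-static scheduling machinery developed in the thesis.

from collections import deque

class Queue:
    def __init__(self):
        self.tokens = deque()        # FIFO channel between two actors

class Actor:
    def __init__(self, inputs, outputs, consume, fire_fn):
        self.inputs, self.outputs = inputs, outputs
        self.consume = consume       # tokens required per input queue to fire
        self.fire_fn = fire_fn       # pure function: consumed tokens -> output tokens

    def can_fire(self):
        return all(len(q.tokens) >= n for q, n in zip(self.inputs, self.consume))

    def fire(self):
        args = [[q.tokens.popleft() for _ in range(n)]
                for q, n in zip(self.inputs, self.consume)]
        for q, token in zip(self.outputs, self.fire_fn(*args)):
            q.tokens.append(token)

# Toy network: queue a -> "double" actor -> queue b
a, b = Queue(), Queue()
double = Actor([a], [b], [1], lambda xs: [2 * xs[0]])
a.tokens.extend([1, 2, 3])

while double.can_fire():             # naive dynamic scheduler: fire whenever possible
    double.fire()
print(list(b.tokens))                # [2, 4, 6]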

Relevance:

30.00%

Publisher:

Abstract:

Rolling element bearings are essential components of rotating machinery. The spherical roller bearing (SRB) is one variant seeing increasing use, because it is self-aligning and can support high loads. It is becoming increasingly important to understand how the SRB responds dynamically under a variety of conditions. This doctoral dissertation introduces a computationally efficient, three-degree-of-freedom SRB model that was developed to predict the transient dynamic behavior of a rotor-SRB system. In the model, bearing forces and deflections were calculated as a function of contact deformation and bearing geometry parameters according to nonlinear Hertzian contact theory. The results reveal how some of the more important parameters, such as diametral clearance, the number of rollers, and the osculation number, influence ultimate bearing performance. Distributed defects, such as waviness of the inner and outer ring, and localized defects, such as inner and outer ring defects, are taken into consideration in the proposed model. Simulation results were verified against results obtained by applying the formula for spherical roller bearing radial deflection and against commercial bearing analysis software. Following model verification, a numerical simulation was carried out successfully for a full rotor-bearing system to demonstrate the application of this newly developed SRB model in a typical real-world analysis. The accuracy of the model was verified by comparing measured to predicted behaviors for equivalent systems.
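As a hedged illustration of the kind of nonlinear load-deflection relation such Hertzian contact models build on, the Python sketch below uses the common roller (line) contact form F = K * delta^(10/9); the stiffness constant and the deflection values are generic placeholders, not parameters of the dissertation's SRB model.

def roller_contact_force(delta_m, k_contact=1.0e9):
    # Nonlinear Hertzian-type contact force for a roller (line) contact:
    # F = K * delta**(10/9) when the contact is loaded, zero when it opens.
    return k_contact * delta_m ** (10.0 / 9.0) if delta_m > 0.0 else 0.0

for delta in (0.0, 5e-6, 1e-5, 2e-5):     # contact deflections in metres (illustrative)
    print(f"delta = {delta:.1e} m -> F = {roller_contact_force(delta):.1f} N")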

Relevance:

30.00%

Publisher:

Abstract:

In the literature, CO2 liquefaction is well studied with steady-state modeling. Steady-state modeling gives an overview of the process, but it does not give information about process behavior during transients. In this master's thesis, three dynamic models of CO2 liquefaction were built and tested. The models were a straight multi-stage compression model and two compression-liquid-pumping models, one with and one without cold energy recovery. The models were built with the Apros software and were also used to verify that Apros is capable of modeling phase changes and the supercritical state of CO2. The models were verified against compressor manufacturer's data and simulation results presented in the literature. Of the models built in this thesis, the straight compression model was found to be the most energy-efficient and the fastest to react to transients. Apros was also found to be a capable tool for dynamic liquefaction modeling.
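For a rough sense of the energy comparison behind such models, the Python sketch below evaluates the ideal-gas, isentropic specific work of a multi-stage compressor with perfect intercooling between stages; CO2 near its critical point is strongly non-ideal, so this only illustrates the form of the calculation and is not one of the Apros models built in the thesis.

R_CO2 = 188.9   # specific gas constant of CO2 [J/(kg K)]
KAPPA = 1.29    # approximate isentropic exponent of CO2 in the ideal-gas region

def multistage_compression_work(p_in, p_out, t_in, n_stages):
    # Specific work [J/kg] with equal stage pressure ratios and intercooling
    # back to the inlet temperature before every stage.
    r_stage = (p_out / p_in) ** (1.0 / n_stages)
    w_stage = (KAPPA / (KAPPA - 1.0)) * R_CO2 * t_in * (
        r_stage ** ((KAPPA - 1.0) / KAPPA) - 1.0)
    return n_stages * w_stage

# Hypothetical duty: 1 bar -> 70 bar in 3 stages, 300 K suction temperature
print(f"{multistage_compression_work(1e5, 70e5, 300.0, 3) / 1e3:.0f} kJ/kg")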

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work is to study the flow behavior and to support the design of an air cleaner by dynamic simulation. In the paper printing industry, it is necessary to monitor the quality of the paper while it is being produced. During production, paper quality can be monitored by a camera, so it is necessary to keep the camera lens clean, as wood particles may fall from the paper and settle on the lens. In this work, the behavior of the air flow and its effect on the particles at different inlet angles are simulated. Geometries with different inlet angles for the single-channel and double-channel cases were constructed using ANSYS CFD software, and all the simulations were performed in ANSYS Fluent. The simulation results for the single-channel and double-channel cases revealed significant differences in the behavior of the flow and the particle velocity. The main conclusions of this work are as follows. 1) For the single-channel case, the best angle was 0 degrees, because the air flow then keeps 60% of the particles that would otherwise settle on the lens away from it. 2) For the double-channel case, the best solution was found when the angle of the first inlet was 0 degrees and the angle of the second inlet was 45 degrees; the air flow then keeps 91% of such particles away from the lens.

Relevance:

30.00%

Publisher:

Abstract:

Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than the cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors the network may become congested and the cores may run at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows a speedup of up to 45x compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores become extremely hot and may be damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values, which necessitates efficient calibration techniques before the sensor values are used. In addition, in modern many-core systems the cores have support for dynamic voltage and frequency scaling. Thermal sensors located on the cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose, software-based auto-calibration approach is also proposed for calibrating thermal sensors over different voltage ranges.
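As a hedged illustration of dynamic load balancing of a fault-simulation workload, the Python sketch below draws faults from a shared work queue so that faster workers automatically take on more work; threads stand in for the SCC cores and the per-fault simulation is a placeholder, so this is not the thesis implementation.

import queue
import threading

def simulate_fault(fault_id):
    # Placeholder for the per-fault simulation work (hypothetical result).
    return fault_id % 7 == 0

def worker(work_q, results):
    while True:
        try:
            fault = work_q.get_nowait()   # pull the next fault; faster cores pull more
        except queue.Empty:
            return
        results.append((fault, simulate_fault(fault)))

work_q = queue.Queue()
for fault in range(1000):                 # hypothetical fault list
    work_q.put(fault)

results = []
workers = [threading.Thread(target=worker, args=(work_q, results)) for _ in range(8)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(len(results), "faults simulated")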