Abstract:
One challenge in data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements become available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to constrain the model state vector of 30 171 elements, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling a model with a DA scheme: an external control program sends and receives information between the model and the DA procedure through files (a minimal sketch is given after this abstract). The advantage of this method is that the changes needed in the model code are minimal, only a few lines to facilitate input and output. Besides the simplicity of the coupling, the approach can be employed even if the two codes are written in different programming languages, because the communication does not go through code. The non-intrusive approach also accommodates parallel computing, simply by telling the control program to wait until all processes have finished before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be re-initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for seven days between May 16 and July 6, 2009; the effect of organic matter was computationally eliminated to obtain the TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose a 1 km grid resolution. The VEnKF results were compared with measurements recorded at an automatic station in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, the agreement was poor.
The use of multiple automatic stations with real-time data is important to alleviate the temporal sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF, together with this limit on useful ensemble size, points towards the emerging area of Reduced Order Modelling (ROM), in which running the full-blown model is avoided in order to save computational resources. Combining ROM with the non-intrusive DA approach might yield a cheaper algorithm that eases the computational challenges existing in the field of modelling and DA.
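A minimal sketch, in Python, of the file-based coupling loop described in the abstract above: a control program alternates between the model and the DA executables, which exchange the state through a file. The executable and file names (model_step, da_step, state.dat, obs_*.dat) are hypothetical placeholders, not the thesis code.

    # Sketch of the non-intrusive, file-based model/DA coupling (illustrative).
    import subprocess
    import numpy as np

    N_CYCLES = 10  # number of assimilation cycles

    for cycle in range(N_CYCLES):
        # 1. Advance the model over one window; it reads and rewrites state.dat.
        #    For an ensemble run, several model processes could be launched here
        #    and waited on before the DA step (the parallel case in the text).
        subprocess.run(["./model_step", "state.dat"], check=True)

        # 2. Hand the forecast state to the DA procedure through the same file;
        #    the DA executable overwrites state.dat with the analysis state.
        subprocess.run(["./da_step", "state.dat", f"obs_{cycle}.dat"], check=True)

        # 3. Optionally inspect the analysis between cycles.
        analysis = np.loadtxt("state.dat")
        print(f"cycle {cycle}: analysis mean = {analysis.mean():.4f}")

Note how both the model and the DA step are re-launched at every cycle, which is exactly the initialization overhead mentioned in the abstract.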
Abstract:
Innovation is nowadays one of the key elements of countries' competitiveness. In the face of continuous changes in the world economy, implementing the open innovation business model allows many companies to improve and accelerate their innovation processes through collaboration. Universities, as traditional sources of knowledge, can be involved in this kind of collaboration. In developing countries that are in transition towards an innovation-based economy, such as Russia, the open innovation business model can serve as a tool to speed up this transition. This Master's thesis explores the implementation of the open innovation model in collaboration between companies and universities, globally and particularly in Russia. The study is qualitative and is based on an integrative analysis of the literature, secondary data, and the results of a survey conducted among Russian universities. In the thesis, a model for the implementation of open innovation within the Triple Helix model is elaborated. The study also explores the less common practice of reverse-directional interaction, from industry to university. The findings of this research show the necessity of solving the identified problems in parallel with implementing the open innovation concept in university-industry collaboration.
Abstract:
The improvement of the dynamics of flexible manipulators such as log cranes often requires advanced control methods. This thesis discusses the vibration problems in the cranes used in commercial forestry machines. Two control methods, adaptive filtering and semi-active damping, are presented. The adaptive filter uses a fraction of the lowest natural frequency of the crane as its filtering frequency. The payload estimation algorithm, the filtering of the control signal, and the algorithm for calculating the lowest natural frequency of the crane are presented. The semi-active damping method is based on pressure feedback: the pressure vibration, scaled by a suitable gain, is added to the control signal of the lift cylinder's valve to suppress vibrations. The adaptive filter cuts off high-frequency impulses coming from the operator, while semi-active damping suppresses the crane's oscillation, which is often caused by an external disturbance. In field tests performed on the crane, a correctly tuned (25 % tuning) adaptive filter reduced pressure vibration by 14-17 %, and semi-active damping correspondingly by 21-43 %. Applying these methods requires auxiliary transducers, installed at specific points on the crane, and electronically controlled directional control valves.
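A minimal sketch of the pressure-feedback damping law described above: the oscillating part of the lift-cylinder pressure, extracted with a high-pass filter and scaled by a gain, is added to the valve command. The class, the filter coefficient and the gain values are illustrative assumptions; the actual controller and its tuning belong to the thesis.

    # Sketch of semi-active damping via pressure feedback (illustrative only).
    class SemiActiveDamper:
        """Adds scaled pressure vibration to the lift-cylinder valve command."""

        def __init__(self, gain=0.02, alpha=0.95):
            self.gain = gain    # feedback gain; would be tuned in field tests
            self.alpha = alpha  # high-pass coefficient: removes static load pressure
            self.p_filt = 0.0   # filtered (oscillating) pressure component
            self.p_prev = 0.0   # previous raw pressure sample

        def step(self, u_operator, p_cylinder):
            """One control period: return the damped valve command."""
            # First-order high-pass filter keeps only the pressure vibration.
            self.p_filt = self.alpha * (self.p_filt + p_cylinder - self.p_prev)
            self.p_prev = p_cylinder
            # The scaled vibration is added to the operator's command, as in the text.
            return u_operator + self.gain * self.p_filt

    # Hypothetical usage inside the control loop (units depend on the system):
    damper = SemiActiveDamper()
    u_valve = damper.step(u_operator=0.5, p_cylinder=182.0)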
Abstract:
Electronic financial services, especially those accessed over the Internet, are a growing area. A provider of electronic financial services must be able to offer broad availability across all channels. Broad availability lets customers choose their preferred channel at their preferred time. The service provider needs a flexible architecture in order to support customers' changing requirements. A flexible architecture makes adaptation to the terminal device possible, so the provider can support many different terminal devices and technologies easily and quickly. This Master's thesis focuses on studying the possibilities for multi-channel support and terminal-device adaptation in Nordea's forthcoming financial portal solution. This should be possible with the new architecture that TietoEnator has implemented in cooperation with Nordea. Good results were obtained by restructuring the pages. Shortcomings were also found in the current architecture, and the open questions that remained were recorded. It is clearly evident that effective terminal-device adaptation and multi-channel support bring benefits to both the bank and the customer.
Abstract:
Biotechnology has been recognized as a key strategic technology for industrial growth. The industry is heavily dependent on basic research. Finland continues to rank in the top 10 of Europe's most innovative countries in terms of tax policy, education system, infrastructure and the number of patents issued. Despite the excellent statistical results, the output of this innovativeness is below an acceptable level. Research on the issues hindering output creation has already been done, and the identifiable weaknesses in Finland's national innovation system are the non-existent growth of entrepreneurship and the missing internationalization. Finland has been shown to have all the enablers of the innovation policy tools, but it lacks the incentives and rewards to push the enablers, such as knowledge and human capital, forward. Science parks are the biggest operators among research institutes in the Finnish science and technology system. They exist to speed up the commercialization of biotechnology innovations, which usually involve technological uncertainty, technical inexperience, business inexperience and high technology costs. Managing innovation only internally is a rather dated approach; the current trend is towards an open innovation model with strong triple helix linkages. The evident problems in innovation management within the biotechnology industry are examined through a case study, including an analysis of semi-structured interviews with biotechnology and business experts from the Turku School of Economics. The results from the interviews supported the theoretical implications, as well as the conclusions derived from the pilot survey, which focused on the companies inside the Turku Science Park network. One major issue that Finland's national innovation system is struggling with is that it is technology-driven, not business-pulled. Another problem is the university evaluation scale, which focuses on the number of graduates and other short-term factors when it should put more emphasis on long-term cooperation success, such as triple helix connections with interaction and knowledge distribution. The results of this thesis indicate that structural changes are indeed required in Finland's national innovation system and innovation policy in order to generate successful biotechnology companies and innovation output. There is a lack of joint outputs and measures of success, a lack of experienced people, a lack of language skills, a lack of business knowledge, and a lack of growth companies.
Abstract:
Statistical analyses of measurements that can be described by statistical models are essential in astronomy and in scientific inquiry in general. The sensitivity of such analyses, of the modelling approaches, and of the consequent predictions is sometimes highly dependent on the exact techniques applied, and improvements therein can result in a significantly better understanding of the observed system of interest. In particular, optimising the sensitivity of statistical techniques in detecting the faint signatures of low-mass planets orbiting nearby stars is, together with improvements in instrumentation, essential for estimating the properties of the population of such planets, and in the race to detect Earth analogs, i.e. planets that could support liquid water and, perhaps, life on their surfaces. We review the developments in Bayesian statistical techniques applicable to the detection of planets orbiting nearby stars and to astronomical data analysis problems in general. We discuss these techniques and demonstrate their usefulness with various examples and detailed descriptions of the mathematics involved. We demonstrate the practical aspects of Bayesian statistical techniques by describing several algorithms and numerical techniques, as well as theoretical constructions, for the estimation of model parameters and for hypothesis testing. We also apply these algorithms to Doppler measurements of nearby stars to show how they can be used in practice to extract as much information from the noisy data as possible. Bayesian statistical techniques are powerful tools for analysing and interpreting noisy data and should be preferred in practice whenever computational limitations are not too restrictive.
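To make the idea concrete, here is a minimal sketch of Bayesian parameter estimation for a Doppler (radial-velocity) signal using a Metropolis random walk. The circular-orbit model v(t) = K sin(2πt/P + φ), the flat priors, the step sizes and the synthetic data are illustrative assumptions, not the thesis's own algorithms.

    # Sketch: Metropolis sampling of a circular-orbit radial-velocity model.
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic Doppler data: one planet, Gaussian noise (invented values).
    t = np.sort(rng.uniform(0.0, 200.0, 60))            # observation epochs [d]
    K_true, P_true, phi_true, sigma = 5.0, 30.0, 1.0, 2.0
    v = K_true * np.sin(2 * np.pi * t / P_true + phi_true) \
        + rng.normal(0.0, sigma, t.size)

    def log_post(theta):
        """Log-posterior with flat priors inside simple bounds."""
        K, P, phi = theta
        if not (0 < K < 50 and 1 < P < 100 and 0 <= phi < 2 * np.pi):
            return -np.inf
        model = K * np.sin(2 * np.pi * t / P + phi)
        return -0.5 * np.sum((v - model) ** 2) / sigma**2

    # Metropolis random walk over (K, P, phi).
    theta = np.array([3.0, 28.0, 0.5])
    step = np.array([0.3, 0.1, 0.1])
    chain, lp = [], log_post(theta)
    for _ in range(20000):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    chain = np.array(chain[5000:])                 # discard burn-in
    print("posterior means (K, P, phi):", chain.mean(axis=0))

In practice one would use a Keplerian model, proper priors and convergence diagnostics; the sketch only shows the accept/reject mechanics common to these techniques.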
Abstract:
The purpose of this thesis is to study, investigate and compare the usability of open source content management systems (CMSs). The thesis examines and compares the usability aspects of several open source CMSs. The research is divided into two complementary parts: a theoretical part and an analytical part. The theoretical part mainly describes open source web content management systems, usability, and the evaluation methods. The analytical part compares and analyzes the results found in the empirical research. The heuristic evaluation method was used to identify usability problems in the interfaces. The study is fairly limited in scope: six tasks were designed and carried out in each interface to discover defects in the interfaces. Usability problems were rated according to their level of severity. The time taken by each task, the severity level of each problem, and the type of heuristic violated were recorded, analyzed and compared. The results of this study indicate that the compared systems provide usable interfaces, with WordPress recognized as the most usable system.
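A minimal sketch of how such heuristic-evaluation results might be aggregated for comparison. The second system name, the task times and the severity ratings (on Nielsen's 0-4 scale) are invented for illustration.

    # Sketch: aggregating task times and problem severities per system.
    from statistics import mean

    # {system: [(task_time_seconds, [severity of each problem found]), ...]}
    results = {
        "WordPress": [(35, [1]), (50, [2, 1]), (40, [])],
        "OtherCMS":  [(60, [3, 2]), (90, [3]), (70, [2, 2])],
    }

    for system, tasks in results.items():
        times = [t for t, _ in tasks]
        severities = [s for _, probs in tasks for s in probs]
        avg_sev = mean(severities) if severities else 0.0
        print(f"{system}: mean task time {mean(times):.0f} s, "
              f"{len(severities)} problems, mean severity {avg_sev:.1f}")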
Abstract:
The aim of the present dissertation was to capture a picture of child and adolescent mental health in Romania, in the context of almost 25 years of changes following the Romanian Revolution of December 1989. A three-part study was carried out in order to provide consistent answers to the pre-defined objectives: to appraise the development of child and adolescent mental health services in Romania (Part I), to explore the characteristics of clinically referred patients in a Romanian child and adolescent psychiatry department (Part II), and to examine children's mental health state and its connections with family functioning and associated risk factors (Part III). A multi-method research approach was used, comprising one qualitative analysis and two quantitative research studies. Part I consisted of a comparative qualitative analysis of the answers given by 10 mental health professionals in a 12-question open-ended interview about the current situation in child and adolescent mental health in Romania, on three topics: changes, challenges, solutions. Part II involved a descriptive quantitative analysis of certain variables (e.g. age, gender, primary diagnosis, co-morbidities, length of hospitalization) conducted on the patients who had been admitted to the Child and Adolescent Psychiatry Department at the "Prof. Dr. Alexandru Obregia" Psychiatry Hospital, Bucharest, in 1991 and in 2013. Part III was conducted on 342 subjects enrolled in two clinical groups and one school group, through a cross-sectional analysis of multi-informant child and adolescent mental health problems and competencies (CBCL, YSR, SDQ P, SDQ SR) and their interrelation with household information (HQ) and family functioning (FAD). Summarizing the results, it can be stated that: 1) the CAMH system in Romania is definitely set on the path of reorganization, including a higher involvement of beneficiaries and of the community; 2) the characteristics of the admitted patients have changed significantly during the almost 25 years since the December 1989 Revolution, under the influence of worldwide trends in child psychiatry and of administrative aspects of the mental health network in Romania; 3) the rates of main diagnoses and co-morbidities confirm the reports in the literature, with Autism Spectrum Disorder being the most frequent childhood psychiatric disorder in this study; 4) the children's mental health problems in the psychiatry group are comparable to those reported for other clinical populations; 5) significant score differences were observed according to various household features, as well as meaningful associations between a child's clinical status and different aspects of family functioning. Romanian child and adolescent psychiatry has started to adopt the norms and standards of the European Union. In the 25 years that have elapsed since the 1989 Revolution, many changes have occurred in Romanian CAMH, but many unresolved issues have also arisen. The major contribution of this thesis is therefore that it provides a coherent and updated overview of the present-day situation from three different perspectives: that of mental healthcare professionals, that observed in clinical patients, and that reported by children's families.
Abstract:
Globalization and worldwide interconnectedness have changed the prevailing modus operandi of organizations around the globe and have challenged existing practices along with the business-as-usual mindset. There are no fixed rules for creating a competitive advantage and positioning within an unstable, constantly changing and volatile globalized business environment. The financial industry, the locomotive or flagship industry of the global economy, has, especially in the aftermath of the financial crisis, reached a point of trying to recover and redefine its strategic orientation and positioning within the global business arena. Innovation has always been a trend and a buzzword, and has been considered by many as the ultimate answer to any kind of problem. The mantra "innovate or die" has prevailed in organizations in a sometimes ruthless endeavour to develop cutting-edge products and services and capture a landmark position in the market. The emerging shift from a closed to an open innovation paradigm has been considered a new operational mechanism for the management and leadership of the company of the future. In that respect, open innovation research has grown tremendously, putting forward a new way of exchanging and using surplus knowledge in order to sustain innovation within organizations and at the level of the industry. In this reality, something seems to be missing: the human element. This research, by going beyond the traditional narratives of open innovation, aims at making an innovative theoretical and managerial contribution grounded in the ongoing discussion of the individual and organizational barriers to open innovation within the financial industry. Working across disciplines and drawing on primary data, it debunks the myth that open innovation is solely a knowledge inflow and outflow mechanism, and sheds light on why and how organizational open innovation works by illuminating the broader dynamics and underlying principles of this paradigm. Little attention has been given to the role of the human element, the foundational prerequisite of trust encapsulated within the fundamental nature of organizing for open innovation, the organizational capabilities, the individual profiles of open innovation leaders, the definition of open innovation in the realm of the financial industry, the strategic intent of the financial industry, and the need for nurturing a societal impact for human development. In that respect, this research introduces the trust-embedded approach to open innovation as a new, insightful way of organizing for open innovation. It unveils the peculiarities of the corporate and individual spheres that act as catalysts for productive open innovation activities. The incentive of this research is the fundamental question of why financial institutions need to recognise the importance of organizing for open innovation. The overarching question is why and how to create a corporate culture of openness in the financial industry, an organizational environment in which open innovation can excel. This research offers novel outcomes and propositions in terms of both theory and practice.
The trust-embedded open innovation paradigm captures the norms and narratives around leading open innovation in the 21st century by cultivating a human-centric mindset that leads to the creation of human organizations, leaving behind the dehumanizing mindset currently prevailing within the financial industry.
Abstract:
The energy system of Russia is the world's fourth largest measured by installed capacity; only those of the United States, China and Japan are larger. After 1990, electricity consumption decreased as a result of the crisis in Russian industry. The vivid economic growth of the last few years explains the renewed increase in the demand for energy resources within the state: in 2005 the consumption of electricity reached the maximum level of 1990 again and continues to grow. In the 1980s the renewal of power facilities was already very slow, and it practically stopped in the 1990s. At present, the energy system can fairly be characterized as outdated, inefficient and uneconomic because of old equipment, an ineffective structure, and large losses in the transmission lines. The aim of Russia's energy reform, which was started in 2001, is to achieve a market-based energy policy by 2011, thus dismantling the largely state-controlled monopoly in Russia's energy sector. The reform will create incentives to decrease losses, improve the energy system, and employ energy-saving technologies. The Russian energy system today is still based on the use of fossil fuels, and it almost totally ignores the efficient use of renewable sources such as wind, solar, small hydro and biomass, despite their significant resources in Russia. The main target of this project is to consider opportunities for applying renewable energy production in the North-West Federal Region of Russia to partly solve the above-mentioned problems in the energy system.
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth, and biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing must be precluded. This work is devoted to the development of an efficient numerical technique for pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts in the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. For convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has second-order spatial accuracy and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in the regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
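A minimal sketch of the particle-transport idea for 1D linear advection u_t + a u_x = 0: particles carry nodal values exactly along the characteristics, and the solution is then projected back onto the fixed Eulerian grid. Plain linear interpolation stands in here for the monotone projection technique of the thesis; the grid, speed and initial data are illustrative.

    # Sketch: particle transport of a steep front along characteristics.
    import numpy as np

    a, T = 1.0, 0.4                       # advection speed and final time
    x = np.linspace(0.0, 1.0, 201)        # fixed Eulerian grid

    u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # steep-front initial data

    # Particles start at the grid nodes, carry the nodal values, and move
    # exactly along the characteristics dx/dt = a (no CFL restriction).
    xp = x + a * T
    up = u0.copy()

    # Projection step: transfer the particle solution back onto the grid.
    # Plain linear interpolation replaces the thesis's monotone projection.
    u_grid = np.interp(x, xp, up, left=0.0, right=0.0)

    print("front now between x=%.2f and x=%.2f" % (0.1 + a * T, 0.3 + a * T))
    print("max value on grid:", u_grid.max())

Because the particles follow the characteristics exactly, the front is not smeared by the transport step itself; in the full method, the adaptivity and monotone projection control the remaining projection error.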
Abstract:
The research problem of the thesis deals with improving the responsiveness and efficiency of logistics service processes between a supplier and its customers. The improvement can be sought by customizing the services and increasing the coordination of activities between the different parties in the supply chain. It is argued that, to achieve coordination, the parties have to have connections on several levels. In the framework employed in this research, three contexts at which the linkages can be planned are conceptualized: 1) the service policy context, 2) the process coordination context, and 3) the relationship management context. The service policy context consists of the planning methods by which a supplier analyzes its customers' logistics requirements and matches them with its own operational environment and efficiency requirements. The main conclusion related to the service policy context is that it is important to have a balanced selection of both customer-related and supplier-related factors in the analysis. This way, while operational efficiency is planned, a sufficient level of service for the most important customers is assured. This kind of policy planning involves taking multiple variables into the analysis, and there is a need to develop better tools for this purpose. Some new approaches to deal with this are presented in the thesis. The process coordination context and the relationship management context deal with how the implementation of the planned service policies can be facilitated in an inter-organizational environment. Process coordination typically includes such mechanisms as control rules, standard procedures and programs, but in highly demanding circumstances more integrative coordination mechanisms may be necessary. In the thesis, the coordination problems in a third-party logistics relationship are used as an example of such an environment. Relationship management deals with how separate companies organize their relationships to improve the coordination of their common processes. The main implication for logistics planning is that by integrating further at the relationship level, companies can facilitate the use of the most efficient coordination mechanisms and thereby improve the implementation of the selected logistics service policies. In the thesis, a case of a logistics outsourcing relationship is used to demonstrate the need to address the relationship issues between the service provider and the service buyer before the outsourcing can be done. The dissertation consists of eight research articles and a summarizing report. The principal emphasis in the articles is on the service policy planning context, which is the main theme of six articles. Coordination and relationship issues are specifically addressed in two of the papers.
Abstract:
Values and value processes are said to be needed in every organization nowadays, as the world is changing and companies have to have something to "keep it together". Organizational values, approved and used by the personnel, could be the key. Every organization has values. But what is the real value of values? The greatest and most crucial challenge is the feasibility of the value process. The main point in this thesis is to study how organizational members at different hierarchical levels perceive values and value processes in their organizations. This includes themes such as how values are disseminated, the targets of value processing, factors that affect the process, problems that occur during value implementation, and improvements that could be made when organizational values are implemented. These subjects are studied from the perspective of organizational members (both managers and employees), the individuals in the organizations. The aim is to get an insider perspective on value processing from multiple hierarchical levels. In this research I study three different organizations (forest industry, bank and retail cooperative) and their value processes. The data were gathered by interviewing personnel at the head office and at the local level. The individuals are seen as members of organizations, and the cultural aspect is topical throughout the whole study. Values and cultures are seen as the 'actuality of reality' of organizations, interpreted by organizational members. The three case companies were chosen because they represented different lines of business and they all implemented value processing differently. Since the emphasis in this study is on the local level, the similar size of the local units was also an important factor. Values are in 'fashion', but what does the fashion tell us about real corporate practices? In annual reports companies emphasize the importance and power of official values. But what is the real 'point' of values? Values are publicly respected and advertised, but it still seems that the words do not meet the deeds. There is a clear conflict between theoretical, official and substantive organizational values: in the value processing from words to real action. This contradiction in value processing is studied through individual perceptions. I study the kinds of perceptions organizational members have when values are processed from the head office to the local level: the official value process is studied from the individual's perspective. Value management has been studied more intensively since the 1990s. The emphasis has usually been on managers: how they consider the values in organizations and what effects this has on management. Recent literature has emphasized values as tools for improving company performance. Value implementation as a process has been studied through 'good' and 'bad' examples, as if one successful value process could be copied to all organizations. Each company is different, with different cultures and personnel, so no all-powerful way of processing values exists. In this study, the perceptions of organizational members at different hierarchical levels are emphasized. Managers are also interviewed, since managerial roles in value dissemination are crucial. Organizational values cannot be well disseminated without management; this has been shown in several earlier studies (e.g. Kunda 1992, Martin 1992, Parker 2000).
Recent literature has not sufficiently emphasized the individual organizational member's role in value processing. Organizations consist of different individuals with personal values, at all hierarchical levels. The aim in this study is to let the individual take the floor. Very often the value process is described starting from the value definition and ending at dissemination, and the real results are left without attention. I wish to contribute to this area. Values are published officially in annual reports etc. as a 'goal', just like profits. Still, the results and implementation of value processing are rarely followed up, at least in official reports. This is a very interesting point: why do companies espouse values if there is no real control or feedback after the processing? In this study, the personnel of three different companies are asked for an answer. The empirical findings include several results that bring new aspects to the research area of organizational values. The targets of value processing, the factors affecting value processing, the management's roles, and the problems in value implementation are presented from the individual's perspective. The individual's perceptions in value processing are a recurring theme throughout the whole study. A comparison between the three companies, with their diverse value processes, makes the research complete.
Abstract:
Software engineering is criticized as not being engineering or a 'well-developed' science at all. Software engineers seem not to know exactly how long their projects will last, what they will cost, or whether the software will work properly after release. Measurements have to be taken in software projects to improve this situation. It is of limited use to only collect metrics afterwards; the values of the relevant metrics have to be predicted, too. The predictions (i.e. estimates) form the basis for proper project management. One of the most painful problems in software projects is effort estimation. It has a clear and central effect on other project attributes like cost and schedule, and on product attributes like size and quality. Effort estimation can be used for several purposes; in this thesis only effort estimation in software projects for project management purposes is discussed. There is a short introduction to measurement issues, and some metrics relevant in the estimation context are presented. Effort estimation methods are covered quite broadly. The main new contribution of this thesis is the new estimation model that has been created. It makes use of the basic concepts of Function Point Analysis, but avoids the problems and pitfalls found in that method. It is relatively easy to use and learn, and effort estimation accuracy has significantly improved after taking the model into use. A major innovation related to the new estimation model is the identified need for hierarchical software size measurement, for which the author has developed a three-level solution. All currently used size metrics are static in nature, but this new proposed metric is dynamic: it exploits the increased understanding of the nature of the work as specification and design work proceed, and thus 'grows up' along with the software project. The development of an effort estimation model is not possible without gathering and analyzing history data. However, there are many problems with data in software engineering; a major roadblock is the amount and quality of the data available. This thesis shows some useful techniques that have been successful in gathering and analyzing the data needed. An estimation process is needed to ensure that methods are used in a proper way, that estimates are stored, reported and analyzed properly, and that they are used for project management activities. A higher-level mechanism called a measurement framework is also briefly introduced. The purpose of the framework is to define and maintain a measurement or estimation process; without a proper framework, the estimation capability of an organization declines, and it requires effort even to maintain an achieved level of estimation accuracy. Estimation results over several successive releases are analyzed, and it is clearly seen that the new estimation model works and that the estimation improvement actions have been successful. The calibration of the hierarchical model is a critical activity; an example is shown to shed more light on the calibration and the model itself, together with remarks about the sensitivity of the model. Finally, an example of usage is shown.
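As a concrete illustration of following up estimation accuracy across releases, here is a minimal sketch using the standard MRE/MMRE measures. The release figures are invented for illustration; the thesis's own hierarchical estimation model is not reproduced here.

    # Sketch: tracking effort estimation accuracy with MRE and MMRE.
    def mre(actual, estimate):
        """Magnitude of relative error for one project or release."""
        return abs(actual - estimate) / actual

    # (estimated effort, actual effort) in person-hours, hypothetical values.
    releases = [(1200, 1500), (1400, 1480), (1600, 1550), (1700, 1690)]

    mres = [mre(act, est) for est, act in releases]
    mmre = sum(mres) / len(mres)   # mean MRE over the release history

    for i, ((est, act), e) in enumerate(zip(releases, mres), start=1):
        print(f"release {i}: estimate={est}, actual={act}, MRE={e:.2%}")
    print(f"MMRE over {len(releases)} releases: {mmre:.2%}")

A shrinking MRE across successive releases is the kind of evidence the thesis uses to show that the estimation improvement actions have been effective.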
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The similar fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is, but simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. Especially the basic level set method carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with programmable hardware implementations; a parallel approach can also be used in suitable applications. It is concluded that these methods can be used in quite a broad range of image applications, such as computer vision and graphics and scientific visualization, and also to solve problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
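A minimal sketch of the idea of representing a 1D boundary in 2D: a circle is the zero level set of a function phi, and it is moved with constant outward normal speed F by solving phi_t + F*|grad(phi)| = 0 with a first-order Godunov upwind scheme. The grid size, speed and time step are illustrative choices, not taken from the thesis.

    # Sketch: basic level set evolution of a circle under constant normal speed.
    import numpy as np

    n, F, dt, steps = 101, 1.0, 0.005, 40
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    h = xs[1] - xs[0]

    # Signed distance to a circle of radius 0.5: negative inside, zero on it.
    phi = np.sqrt(X**2 + Y**2) - 0.5

    for _ in range(steps):
        p = np.pad(phi, 1, mode="edge")       # edge padding for one-sided diffs
        dxm = (phi - p[1:-1, :-2]) / h        # backward difference in x
        dxp = (p[1:-1, 2:] - phi) / h         # forward difference in x
        dym = (phi - p[:-2, 1:-1]) / h        # backward difference in y
        dyp = (p[2:, 1:-1] - phi) / h         # forward difference in y
        # Godunov upwind gradient magnitude for F > 0 (outward motion).
        grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                       + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
        phi -= dt * F * grad                  # advance the level set function

    # The zero level set moves outward by roughly F * dt * steps = 0.2.
    r_new = np.interp(0.0, phi[n // 2, n // 2:], xs[n // 2:])
    print("new interface radius ~ %.3f (expected ~ 0.700)" % r_new)

The interface is never tracked explicitly: it is recovered at any time as the zero crossing of phi, which is what lets the method handle topology changes and complex boundaries.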