11 results for Distance hereditary graphs

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 20.00%

Publisher:

Abstract:

This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients, which minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and the ordered propagation approach are analyzed and compared to the new efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying-height surface. In a digital image, several paths can share the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi (Dirichlet) tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression, and surface roughness evaluation.
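The priority pixel queue idea can be illustrated with a Dijkstra-style sketch that grows the distance map and the nearest-neighbor (Voronoi) labeling at the same time. This is a minimal illustration, not the thesis's exact algorithm; the local cost used here (one step plus the absolute gray-level difference between 8-neighbors) is an assumed DTOCS-like weight, and all names are made up for the example.

```python
import heapq

def pq_distance_and_voronoi(gray, seeds):
    # Dijkstra-style priority pixel queue sketch: grows a gray-weighted
    # distance map and a nearest-neighbor (Voronoi) labeling together.
    # Local cost to an 8-neighbor: 1 + |gray difference| (an assumed
    # DTOCS-like weight for this illustration).
    h, w = len(gray), len(gray[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    label = [[-1] * w for _ in range(h)]
    heap = []
    for i, (sy, sx) in enumerate(seeds):
        dist[sy][sx] = 0.0
        label[sy][sx] = i
        heapq.heappush(heap, (0.0, sy, sx))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue  # stale queue entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + 1 + abs(gray[y][x] - gray[ny][nx])
                    if nd < dist[ny][nx]:
                        dist[ny][nx] = nd
                        label[ny][nx] = label[y][x]  # inherit nearest seed
                        heapq.heappush(heap, (nd, ny, nx))
    return dist, label
```

On a flat image this reduces to the plain chessboard distance, and the labels partition the image into the Voronoi regions of the seed points.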

Relevance: 20.00%

Publisher:

Abstract:

This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, they are applied to gray-level image compression. Both are extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, whose weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown; the best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
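A minimal sketch of the two-pass raster idea described above, assuming a DTOCS-like local cost of one chessboard step plus the absolute gray-level difference (the thesis's exact kernels may differ); the seeds are the pixels outside the calculation region.

```python
def dtocs(gray, inside):
    # Two-pass DTOCS sketch for gray-level images. `inside` marks the
    # region to compute; pixels outside it are zero-distance seeds.
    # Local cost to an 8-neighbor: 1 + |gray difference| (an assumed
    # DTOCS-style kernel for this illustration).
    h, w = len(gray), len(gray[0])
    INF = float("inf")
    dist = [[INF if inside[y][x] else 0.0 for x in range(w)] for y in range(h)]
    fwd = ((-1, -1), (-1, 0), (-1, 1), (0, -1))  # already visited in forward raster
    bwd = ((1, 1), (1, 0), (1, -1), (0, 1))      # already visited in backward raster
    for _ in range(3):  # complicated images may need a few iteration rounds
        for offs, ys, xs in ((fwd, range(h), range(w)),
                             (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in ys:
                for x in xs:
                    for dy, dx in offs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            c = dist[ny][nx] + 1 + abs(gray[y][x] - gray[ny][nx])
                            if c < dist[y][x]:
                                dist[y][x] = c
    return dist
```

Only the distance buffer and the two input buffers are touched, matching the abstract's point that no extra image buffers are needed even across iteration rounds.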

Relevance: 20.00%

Publisher:

Abstract:

The world's population is increasing, and cities have become more crowded with people and vehicles. Communities on the fringes of metropolitan areas increase private-car traffic, but also increase the need for public transportation. People typically need to travel to work in city centers in the morning and return to the suburbs in the afternoon or evening. Rail-based passenger transport is an environmentally friendly transport mode with a high capacity to move large volumes of people. Railways have been regulated markets, with a national incumbent holding a monopoly position. Opening the market to competition is believed to have a positive effect by increasing the efficiency of the industry. The national passenger railway market has been opened to competition in only a few countries, whereas international traffic in EU countries was deregulated in 2010. The objective of this study is to examine the passenger railway markets of three North European countries: Sweden, Denmark and Estonia. A further aim was to gain an understanding of the current situation and of how deregulation has proceeded. The theory of deregulation is unfolded through a literature analysis, and the empirical part of the study consists of two parts. A customer satisfaction survey was chosen as the method for collecting real-life experiences from passengers and measuring their knowledge of the market situation and any changes that have appeared. Interviews with experts from the industry and labor unions give further insight and enable a better understanding of, for example, the social consequences of opening the market to competition. The expert interviews were conducted as semi-structured theme interviews. Based on the results of this study, deregulation has proceeded quite differently in the three countries researched. Sweden is the most advanced country, with a passenger railway market open to new entrants; Denmark and Estonia are lagging behind.
Opening the market is considered positive by passengers and by most of the experts interviewed. Common to the interviews was the labour unions' negative perspective on deregulation. Although deregulation is considered positive by the respondents of the customer satisfaction survey, they could not name the railway undertakings operating in their country. In general, respondents were satisfied with the commuter trains. Ticket price, train punctuality and itinerary affect customer satisfaction the most.

Relevance: 20.00%

Publisher:

Abstract:

This thesis investigates the influence of cultural distance on entrepreneurs' negotiation behaviour. For this purpose, Turku was chosen as the unit of analysis due to the rapid demographic change experienced during the last two decades, which has resulted in a more diversified local environment. The research aim set for this study was to identify to what extent entrepreneurs face cultural distance, how cultural distance influences entrepreneurs' negotiation behaviour, and how it can be addressed in order to turn dissimilarities into opportunities. This study presented the relation and apparent dichotomy between cultural distance and global culture, including the component of diversity. The impact of cultural distance on the entrepreneurial mindset, and its consequent effect on negotiation behaviour, was also presented. Addressing questions about the way individuals perceive, behave and interact allowed the use of interviews for this qualitative research study. In the empirical part of the study, it was found that negotiation behaviour differed in terms of how comfortable entrepreneurs felt when managing cultural distance, which affected their performance. It was also acknowledged that, with time and effort, some personal traits were enhanced while others were reduced, allowing for more flexibility and adaptation. Furthermore, depending on the level of trust and shared interests, entrepreneurs determined their attitudinal approach, being adaptive or reactive subject to situational aspects. Additionally, it was found that acquiring cultural savvy did not necessarily translate into more creativity. This experiential learning capability led to the proposition of new ways of behaving. Likewise, it was proposed that growing cultural intelligence bridges distances, reducing mistrust and misunderstanding. The capability of building more collaborative relationships allows entrepreneurs to see cultural distance as a cultural perspective instead of as a threat.
Therefore, it was recommended to focus on proximity rather than distance, in order to better identify and exploit untapped opportunities and to perform better when negotiating under any cultural conditions.

Relevance: 20.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit, and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field: digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network, instead of with conventional programming languages, makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application, while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
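The basic dataflow execution model described above — nodes that communicate only through token queues and fire once sufficient inputs are available — can be sketched in a few lines. This is a toy illustration in Python, not RVC-CAL; the naive run-until-quiescent loop stands in for the dynamic scheduler whose decisions quasi-static scheduling tries to precompute, and all names are invented for the example.

```python
from collections import deque

class Actor:
    # A dataflow node: fires when every input queue has a token,
    # consumes one token per input, and pushes its result downstream.
    def __init__(self, fn, inputs, outputs):
        self.fn, self.inputs, self.outputs = fn, inputs, outputs

    def can_fire(self):
        return all(q for q in self.inputs)  # firing rule: one token per input

    def fire(self):
        result = self.fn(*(q.popleft() for q in self.inputs))
        for q in self.outputs:
            q.append(result)

# A tiny network: source tokens -> double -> increment -> sink queue.
a, b, out = deque([1, 2, 3]), deque(), deque()
double = Actor(lambda x: 2 * x, [a], [b])
inc = Actor(lambda x: x + 1, [b], [out])

# Naive dynamic scheduler: keep firing any fireable actor until the
# network quiesces; a quasi-static scheduler would precompute most
# of these per-firing decisions instead.
progress = True
while progress:
    progress = False
    for actor in (double, inc):
        if actor.can_fire():
            actor.fire()
            progress = True

print(list(out))  # [3, 5, 7]
```

Because the queues are the only communication between the actors, the scheduler is free to fire `double` and `inc` in any order (or in parallel) without changing the result — which is exactly the property that makes the dependencies, and hence the parallelism, explicit.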

Relevance: 20.00%

Publisher:

Abstract:

The light transmittance of composites used in dentistry varies. Likewise, LED light-curing units differ from one another in light output and design. It is generally known that the light intensity per unit area from a curing unit decreases as the distance to the unit increases. On the other hand, it is not known exactly how material placed between the target being light-cured and the tip of the curing unit affects the light intensity at different distances. The purpose of this study was to determine how pre-polymerized material placed between the target and the tip of the curing unit affects the light intensity at different distances. The study was carried out using two different curing units. To demonstrate the effect of distance on curing power, the distance between the unit and the sensor was varied between 0, 2, 4, 6, 8 and 10 mm. The light outputs were recorded with a MARC resin calibrator device. The composite plates placed between the sensor and the tip of the curing unit were pre-cured, 1 mm thick, and made of four resins with different filler contents. The light output was recorded at each distance with a composite plate on the sensor. In parallel, the effect of distance on light output was also measured without pre-cured material between the tip of the unit and the light-measuring sensor. For the comparison, an intensity ratio between the values with and without the composite was calculated at each distance. As expected, increasing the distance of the curing-unit tip from the sensor (i.e., from the target being cured) decreased the light output. Placing a composite plate between the sensor and the unit decreased the light output further, as expected. When examining the intensity ratio (light output with composite : light output without composite), however, it was noticed that at 4–6 mm the ratio was higher than at 0, 2, 8 and 10 mm.
The conclusion was that the greatest possible curing power is achieved by placing the curing tip as close to the target as possible. If a solid piece of composite lay between the target and the tip of the curing unit, the greatest possible curing power at the target is still achieved by placing the tip against the composite. If, instead, the distance from the composite surface was increased, the curing power did not decrease as quickly as expected. This may be related to the diameter of the effective light beam being large compared with the diameters of the composite and the sensor. Second, it has been suggested that the fillers of the resin composite could focus the transmitted light onto the sensor. Whether this phenomenon holds true, however, requires further research.