21 results for Space Geometry. Manipulatives. Distance Calculation

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights, a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from the O(N) of previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, in contrast to previous methods, which scale with the height field size. As a result, the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications.
They work by sampling the screen-space geometry around each receiver point, but have previously been limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to linear complexity by line-sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be queried efficiently. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray-traced screen-space reference are obtained at real-time render times.
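The horizon-map computation described above can be illustrated on a 1D slice of a height field. The sketch below (illustrative names, not the thesis' code) is the straightforward O(N)-per-receiver baseline that the incremental traversal reduces to amortized O(1):

```python
import math

def horizon_map_naive(h):
    """Baseline horizon computation for a 1D height field.

    For each receiver, scan every sample to its left for the maximum
    elevation angle -- O(N) per receiver, O(N^2) total. The thesis
    replaces this with an incremental sweep that reuses information
    gathered along the traversal path, reaching amortized O(1).
    """
    horizon = []
    for i in range(len(h)):
        best = -math.pi / 2  # open horizon: nothing occludes
        for j in range(i):
            best = max(best, math.atan2(h[j] - h[i], i - j))
        horizon.append(best)
    return horizon
```

Incident environment light at each receiver is then integrated only above this horizon angle.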

Relevance:

100.00%

Publisher:

Abstract:

This work describes different possibilities for improving the protection and control system of a primary distribution substation. The condition and the main reliability problems of power networks in Russia are described. The work studies the technologies currently used in Russia for the protection of distribution networks, along with their disadvantages. The majority of medium-voltage networks (6–35 kV) have an isolated neutral point, and there is still no protection available on the market that can estimate the distance to the fault in the case of an earth fault. The thesis analyses methods of earth fault distance calculation. On the basis of computer simulation, the influence of various factors on calculation accuracy is studied. The practical implementation of the method presupposes the use of a digital relay, whose application opens up numerous opportunities described in this work. The advantages of a system implemented on the basis of the IEC 61850 standard are also examined. Finally, the suitability of modern digital relays from the GOST standard point of view is analyzed.
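As a rough illustration of impedance-based fault distance estimation (a generic textbook approach, not the specific earth-fault method analysed in the thesis, which must also handle the capacitive fault currents of isolated-neutral networks), the distance can be estimated from the reactive part of the apparent impedance seen by the relay; all names and the uniform-reactance assumption are illustrative:

```python
def fault_distance_km(u_phasor, i_phasor, x_per_km):
    """Simplified reactance-based estimate of the distance to a fault.

    Divides the imaginary part of the apparent impedance seen by the
    relay by the line's per-km reactance. Real earth-fault location in
    isolated-neutral networks requires much more: fault resistance,
    capacitive currents, and transient components all affect accuracy.
    """
    z = u_phasor / i_phasor          # apparent impedance at the relay
    return z.imag / x_per_km         # km, assuming uniform reactance
```

The simulation studies in the thesis examine exactly how factors ignored here degrade the calculation accuracy.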

Relevance:

40.00%

Publisher:

Abstract:

This master's thesis investigates accelerating disparity map computation by interpolation. Using triangulation, a sparse disparity map is first constructed from a stereo image pair, after which a disparity map covering the whole image is formed by interpolation. Triangulation requires knowing the image points in both cameras that correspond to the same real-world point. Even though the search area for corresponding points can be reduced from two dimensions to one, for example by using epipolar geometry, it is computationally more efficient to determine part of the disparity map by interpolation than to search for corresponding image points in the stereo images. Moreover, because of the distance between the cameras of a stereo vision system, not all points of one image can be found in the other, so it is impossible to determine a disparity map covering the whole image from corresponding points alone. In this work, dynamic programming and a correlation method are used to search for corresponding points. Real-world surfaces are generally continuous, so in a geometric sense it is justified to approximate the surfaces depicted in the images by interpolation. There is also scientific evidence that human stereo vision interpolates object surfaces.
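The interpolation step can be sketched in one dimension: given sparse disparities at triangulated correspondences along a scanline, the dense row is filled by linear interpolation between them (a minimal sketch with illustrative names; the thesis covers the full 2D case):

```python
def densify_disparity_row(sparse):
    """Fill a dense disparity row from sparse (column, disparity) matches
    by linear interpolation -- a 1D sketch of interpolating between
    triangulated correspondences instead of matching every pixel.
    """
    sparse = sorted(sparse)
    dense = []
    for x in range(sparse[0][0], sparse[-1][0] + 1):
        # find the pair of known matches bracketing column x
        for (x0, d0), (x1, d1) in zip(sparse, sparse[1:]):
            if x0 <= x <= x1:
                t = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
                dense.append(d0 + t * (d1 - d0))
                break
    return dense
```

Only the sparse matches require the expensive correspondence search; every other pixel is filled at constant cost.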

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies gray-level distance transforms, particularly the Distance Transform on Curved Space (DTOCS). The transform is produced by calculating distances on a gray-level surface. The DTOCS is improved by defining more accurate local distances and by developing a faster transformation algorithm. The Optimal DTOCS enhances the locally Euclidean Weighted DTOCS (WDTOCS) with local distance coefficients that minimize the maximum error from the Euclidean distance in the image plane and produce more accurate global distance values. Convergence properties of the traditional mask operation, or sequential local transformation, and of the ordered propagation approach are analyzed and compared to the new, efficient priority pixel queue algorithm. The Route DTOCS algorithm developed in this work can be used to find and visualize shortest routes between two points, or two point sets, along a varying-height surface. In a digital image, there can be several paths sharing the same minimal length, and the Route DTOCS visualizes them all. A single optimal path can be extracted from the route set using a simple backtracking algorithm. A new extension of the priority pixel queue algorithm produces the nearest neighbor transform, or Voronoi (Dirichlet) tessellation, simultaneously with the distance map. The transformation divides the image into regions so that each pixel belongs to the region surrounding the reference point that is nearest according to the distance definition used. Applications and application ideas for the DTOCS and its extensions are presented, including obstacle avoidance, image compression, and surface roughness evaluation.
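The priority pixel queue idea resembles Dijkstra's algorithm on the image grid. A minimal sketch, with a simplified local distance of 1 + |gray difference| standing in for the DTOCS local distances (all names are illustrative):

```python
import heapq

def priority_queue_distance(img, seeds):
    """Gray-level distance map via a priority pixel queue (Dijkstra-style).

    `img` is a 2D list of gray values; `seeds` are (row, col) reference
    pixels with distance zero. The local distance between 4-neighbors is
    1 + |gray difference| -- a simplification of the DTOCS kernels.
    """
    rows, cols = len(img), len(img[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry, already improved
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1 + abs(img[nr][nc] - img[r][c])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

Tracking, for each pixel, which seed first reached it would additionally yield the nearest neighbor (Voronoi) tessellation mentioned above.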

Relevance:

30.00%

Publisher:

Abstract:

In this study, a prototype of a concrete element dimension measurement system was developed. The system enables the measurement of a three-dimensional object. A stereo-vision-based object measurement method was also developed. The prototype was tested and the results proved reliable. The study also surveys and compares other approaches and existing systems for three-dimensional object measurement used by Finnish companies in this field.

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, where the weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS, and one based on the EDTOCS, are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented, and several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of DCT images with a 4 x 4
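The two-pass raster structure of such a transform can be sketched as follows; the local distance 1 + |gray difference| is a simplification of the thesis' exact kernels, and all names are illustrative:

```python
def two_pass_gray_distance(gray, calc_region):
    """Two-pass DTOCS-style transform sketch (Rosenfeld/Pfaltz-style
    raster sweeps). `gray` is the gray-level image; `calc_region` marks
    pixels to transform (True) vs. zero-distance reference pixels (False).
    The local distance to a neighbor is 1 + |gray difference|: a
    chessboard-like step weighted by the gray-level change.
    """
    rows, cols = len(gray), len(gray[0])
    INF = float('inf')
    d = [[INF if calc_region[r][c] else 0.0 for c in range(cols)]
         for r in range(rows)]

    def relax(r, c, neighbors):
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                cand = d[nr][nc] + 1 + abs(gray[nr][nc] - gray[r][c])
                if cand < d[r][c]:
                    d[r][c] = cand

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # causal half-mask
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]      # anti-causal half-mask
    for r in range(rows):            # forward raster pass
        for c in range(cols):
            relax(r, c, fwd)
    for r in reversed(range(rows)):  # backward raster pass
        for c in reversed(range(cols)):
            relax(r, c, bwd)
    return d
```

As the abstract notes, complicated images may need this pair of passes repeated a few times before the map converges.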

Relevance:

30.00%

Publisher:

Abstract:

To predict the capacity of a structure, or the point at which instability follows, calculation of the critical crack size is important. Structures usually contain several cracks, but not all of these cracks necessarily lead to failure or reach the critical size. Defining the harmful cracks, or the crack size most likely to lead to failure, thus provides criteria for the structure's capacity at elevated temperature. The scope of this thesis was to calculate fracture parameters such as the stress intensity factor and the J-integral, and the plastic and ultimate capacity of the structure, in order to estimate the critical crack size for this specific structure. Several three-dimensional (3D) simulations using the finite element method (Ansys) and the boundary element method (Franc 3D) were carried out to calculate the fracture parameters, and the results, combined with laboratory tests (load-displacement curve, the J resistance curve, and yield or ultimate stress), led to the extraction of the critical crack size. The two types of fracture usually affected by temperature, elastic and elastic-plastic fracture, were simulated by performing several linear elastic and nonlinear elastic analyses. Geometry details of the weldment, the flank angle and toe radius, were also studied independently to estimate the location of crack initiation and to simulate the stress field in the early stages of crack extension in the structure. This work also gives an overview of the structure's capacity at room temperature (20 ºC). Comparison of the results at different temperatures (20 ºC and -40 ºC) provides a threshold of the structure's behavior within the defined range.
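The link between the stress intensity factor and the critical crack size can be illustrated with the textbook LEFM relation K = Y·σ·√(πa); this is a generic formula, not the thesis' FE/BE-based procedure, and the geometry factor Y depends on the actual crack and structure:

```python
import math

def critical_crack_size(k_ic, stress, y=1.0):
    """Critical crack size from the LEFM relation K = Y * sigma * sqrt(pi * a):
    solving K_IC = Y * sigma * sqrt(pi * a_c) for a_c.

    k_ic: fracture toughness, stress: applied stress, y: geometry factor.
    """
    return (k_ic / (y * stress)) ** 2 / math.pi
```

Since fracture toughness drops with temperature, evaluating this at 20 ºC and -40 ºC shows directly why the tolerable crack size shrinks in the cold condition.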

Relevance:

30.00%

Publisher:

Abstract:

The objective of this master's thesis is to investigate the loss behavior of a three-level ANPC inverter and compare it with a conventional NPC inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. To enable the comparison, sufficiently accurate and detailed NPC and ANPC simulation models are needed, and the same SVM control model is used for both. The principles of the control algorithms and the structure and description of the models are clarified. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented on the basis of the results obtained for each semiconductor device, their switching and conduction losses, and the efficiency of the inverters. The alternative switching states of the ANPC topology allow losses to be distributed among the switches more evenly than in the NPC inverter; the losses of a switching device naturally depend on its position in the topology. The loss distribution in the ANPC topology reduces the stress on certain switches, so losses are spread more equally among the semiconductors, although the efficiency of the two inverters is the same. As a new contribution to earlier studies, the models of the SVM control and of the NPC and ANPC inverters have been built, so this thesis can be used in further, more complicated modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
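A practical per-device loss estimate of the kind such calculation models build on can be sketched as follows (parameter names are assumptions here; in practice the values come from device datasheets):

```python
def switch_losses(i_rms, i_avg, v_ce0, r_ce, f_sw, e_on, e_off):
    """Simplified per-device loss estimate for one switching device.

    Conduction loss uses a linearized on-state characteristic
    (threshold voltage v_ce0 plus slope resistance r_ce); switching
    loss scales datasheet turn-on/turn-off energies by the switching
    frequency. Real models also account for current/voltage dependence
    of the switching energies and for the diode losses.
    """
    p_cond = v_ce0 * i_avg + r_ce * i_rms ** 2   # conduction loss [W]
    p_sw = f_sw * (e_on + e_off)                 # switching loss [W]
    return p_cond + p_sw
```

Summing this over every switch and diode, per switching state, is what allows comparing how evenly the NPC and ANPC topologies spread the losses.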

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an approach for formulating and validating a space averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with the Eulerian description of phases. Such a description requires the use of fine meshes and small time steps for proper prediction of the hydrodynamics. This constraint on mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations that can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space averaging modeling approach in the formulation of closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lower values of solids mass flux.
Such an approach is a promising tool in the formulation of appropriate closure models which can be used in coarse mesh simulations of large scale fluidized beds.
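Structurally, a space-averaged drag closure amounts to scaling the microscopic (fine-grid) drag coefficient by a correction factor; the functional form below is purely hypothetical, shown only to illustrate the shape of such a closure:

```python
def corrected_drag(beta_micro, solid_fraction, h):
    """Coarse-mesh gas-solid drag: the microscopic drag coefficient
    scaled by a space-averaged correction factor h in (0, 1] that
    accounts for unresolved mesoscale structures. In the thesis the
    correction depends on averaging size, solid volume fraction, and
    wall distance; here it is a hypothetical function of the solid
    fraction alone.
    """
    return beta_micro * h(solid_fraction)

# hypothetical correction: strongest reduction at intermediate solid
# fractions, where clustering is most pronounced
h_demo = lambda phi: max(0.2, 1.0 - 2.0 * phi * (1.0 - phi))
```

Because h < 1 in the heterogeneous regime, the corrected drag is lower than the uniform-profile prediction, which is what brings the simulated solids mass flux down.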

Relevance:

30.00%

Publisher:

Abstract:

This work studied the suitability of different design methods for the fatigue assessment of welded structures. The methods used were the structural stress method, the effective notch stress method, and fracture mechanics. In addition, three different methods were used to determine the structural stress: extrapolation along the surface, linearization through the thickness, and Dong's method. The fatigue strength was determined for two welded joint details. The calculation was performed using the finite element method on a 3D model of the structure. An FE model of the generator set frame under study existed, but by using the submodeling technique only a small part of the whole frame model could be examined in greater detail. The structural stress method is based on nominal stresses and does not require modification of the geometry. The structural stress method is usually used for fatigue assessment at the weld toe, but in some cases it has been applied to the root side; in this work it was also used to examine the root side. The effective notch stress is examined by modeling fictitious 1 mm roundings at both the weld toe and the root side. The suitability of fracture mechanics was studied using the Franc2D crack growth simulation program. The fatigue assessment results do not differ significantly between the calculation methods. Only Dong's structural stress method gives deviating results, mainly because no information is available on the method's calculation distance. The structural stress method, the effective notch stress method, and fracture mechanics give results in the same direction. The biggest difference between the methods is the amount of modeling and computation work required.
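The surface extrapolation of the structural (hot-spot) stress mentioned above is commonly done linearly from stresses read at 0.4t and 1.0t from the weld toe, t being the plate thickness (the IIW-style formula; whether the thesis uses these exact read-out points is not stated in the abstract):

```python
def hot_spot_stress(sigma_04t, sigma_10t):
    """Linear surface extrapolation of the structural (hot-spot) stress
    to the weld toe from surface stresses at 0.4t and 1.0t:

        sigma_hs = 1.67 * sigma(0.4t) - 0.67 * sigma(1.0t)
    """
    return 1.67 * sigma_04t - 0.67 * sigma_10t
```

The extrapolation deliberately excludes the nonlinear notch peak at the toe itself, which the effective notch stress method instead captures with the fictitious 1 mm rounding.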

Relevance:

30.00%

Publisher:

Abstract:

This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs, and furthermore assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items.
Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies and this variability is quantified by using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables in functional linguistics used to probe this. In addition, a new variable called Constructional Entropy is introduced in this study building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that the reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, are proposed for the reflexive verbs in this study. In addition to the variables associated with the lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. 
Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
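The two connectivity measures named above, the Levenshtein distance and Constructional Entropy, can be sketched directly (minimal implementations; the study's operationalization over corpus data is of course richer):

```python
import math

def levenshtein(a, b):
    """Edit distance between two strings, used above to quantify the
    degree of connectivity within a verb's neighborhood."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def constructional_entropy(counts):
    """Shannon entropy (bits) of a verb's distribution over argument
    constructions, computed from raw usage counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```

A verb evenly spread over two argument constructions thus carries 1 bit of constructional entropy, while a verb confined to a single construction carries none.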

Relevance:

30.00%

Publisher:

Abstract:

Heat transfer effectiveness in nuclear rod bundles is of great importance to nuclear reactor safety and economics. An important design parameter is the Critical Heat Flux (CHF), which limits the heat transferred from the fuel to the coolant. The CHF is determined by flow behaviour, especially the turbulence created inside the fuel rod bundle. Adiabatic experiments can be used to characterize the flow behaviour separately from the heat transfer phenomena of diabatic flow. To enhance the turbulence, mixing vanes are attached to the spacer grids that hold the rods in place. The vanes either make the flow swirl around a single sub-channel or induce cross-mixing between adjacent sub-channels. In adiabatic two-phase conditions, an important phenomenon that can be investigated is the effect of the spacer on canceling the lift force, which collects small bubbles onto the rod surfaces, decreasing the CHF in diabatic conditions and thus limiting the reactor power. Computational Fluid Dynamics (CFD) can be used to simulate the flow numerically and to test how different spacer configurations affect it. Experimental data are needed to validate and verify the CFD models: especially the modeling of turbulence is challenging, even for single-phase flow inside the complex sub-channel geometry, and in two-phase flow other factors such as bubble dynamics further complicate the modeling. To investigate the spacer grid effect on two-phase flow, and to provide further experimental data for CFD validation, a series of experiments was run on an adiabatic sub-channel flow loop using a duct-type spacer grid with different configurations. Utilizing wire-mesh sensor technology, the facility gives high-resolution experimental data in both time and space. The experimental results indicate that the duct-type spacer grid is less effective in canceling the lift force effect than the egg-crate type spacer tested earlier.

Relevance:

30.00%

Publisher:

Abstract:

Keyhole welding, in which the laser beam forms a vapour cavity inside the steel, is one of the two types of laser welding processes, and it is currently used in few industrial applications. Modern high-power solid-state lasers are coming into more general use, but not all fundamentals and phenomena of the process are well known, and understanding them helps to improve the quality of final products. This study concentrates on the fundamentals and behaviour of the keyhole welding process by means of real-time high-speed x-ray videography. One problem area in laser welding has been the mixing of the filler wire into the weld; the phenomena involved are explained, and one possible solution to this problem is also presented in this study. The argument of this thesis is that the keyhole laser welding process has three keyhole modes that behave differently: trap, cylinder, and kaleidoscope. Two of these have sub-modes, in which the keyhole behaves similarly but the molten pool changes behaviour and the geometry of the resulting weld is different. X-ray videography was used to visualize the actual keyhole side-view profile during the welding process. Several methods were applied to analyse and compile the high-speed x-ray video data to achieve a clearer image of the keyhole side view. Averaging was used to measure the keyhole side-view outline, from which a 3D model of the actual keyhole was reconstructed. This 3D model was taken as the basis for calculating the vapour volume inside the keyhole for each laser parameter combination and joint geometry. Four different joint geometries were tested: partial-penetration bead-on-plate, partial-penetration I-butt joint, full-penetration bead-on-plate, and full-penetration I-butt joint. The comparison was performed with selected pairs and also across all combinations together.
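The vapour volume computed from the averaged side-view outline can be approximated by treating the keyhole as a solid of revolution about its axis (a simplification of the thesis' full 3D reconstruction; names are illustrative):

```python
import math

def keyhole_volume(radii, dz):
    """Vapour-cavity volume from an averaged keyhole side-view outline.

    `radii` are keyhole half-widths sampled at depth steps of `dz`.
    Treating each slice as a disc of revolution gives
    V = pi * sum(r_i^2) * dz (a simple Riemann sum of pi * r(z)^2 dz).
    """
    return math.pi * sum(r * r for r in radii) * dz
```

Comparing this volume across laser parameter combinations and joint geometries is what allows the trap, cylinder, and kaleidoscope modes to be characterized quantitatively.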

Relevance:

30.00%

Publisher:

Abstract:

Various studies in the field of econophysics have shown that fluid flows have analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which market dynamics play out (corporate space) is not yet known. In this thesis, utilizing a seven-year time series of the prices of the stocks used to compute the S&P500 index on the New York Stock Exchange, we have created a local chart of the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P500 index. By first calculating the correlation matrix of normalized stock price deviations from the S&P500 index, we have performed a local singular value decomposition over a set of four different time windows as guides to the nature of the patterns that may emerge. It turns out that in almost all cases each singular vector is essentially determined by a relatively small set of companies with large positive or negative weights on that singular vector. Over particular time windows these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics whereas others have longer standing waves.
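The dominant mode of a windowed correlation matrix can be extracted, for example, by power iteration; the dependency-free sketch below finds only the leading singular vector (the thesis performs a full singular value decomposition per time window), and its weights are the per-company loadings discussed above:

```python
def leading_mode(corr, iters=200):
    """Power iteration for the dominant eigenvector of a symmetric
    correlation matrix. For a symmetric positive semi-definite matrix
    this coincides with the leading singular vector; its entries are
    the company weights on the dominant mode.
    """
    n = len(corr)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(corr[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Repeating this over sliding time windows and inspecting which companies carry large weights is a minimal version of the sector analysis described in the abstract.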

Relevance:

20.00%

Publisher:

Abstract:

Summary