933 results for non-trivial data structures
Abstract:
The phonebook is one of the most frequently used features of a mobile phone, so it must be available as quickly as possible in all situations. This requires efficient data structures and sorting algorithms from the phonebook server. In Nokia mobile phones, the phonebook server uses sorted arrays as its search structure. The goal of this work was to make the sorting of the phonebook server's search arrays as fast as possible. Several sorting algorithms were compared and their running times were analyzed in different situations. Insertion sort was found to be the fastest algorithm for sorting arrays that are already nearly in order. Based on the analysis, quicksort sorts randomly ordered arrays the fastest. A quicksort–insertion sort hybrid algorithm was found to be the best sorting algorithm for phonebook sorting. With suitable parameterization this algorithm is fast for randomly ordered data, it can exploit pre-existing order in the data being sorted, and it does not significantly increase memory consumption. Thanks to the new algorithm, sorting of the search arrays becomes up to several tens of percent faster.
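To make the hybrid idea concrete, here is a minimal sketch of a quicksort that finishes small partitions with insertion sort. The cutoff value and the key function are illustrative assumptions, not the parameterization chosen in the thesis.

```python
# Minimal sketch of a quicksort/insertion-sort hybrid for sorting name arrays.
# The cutoff and the key function are illustrative, not the thesis's choices.

INSERTION_CUTOFF = 16  # small partitions are finished with insertion sort


def insertion_sort(a, lo, hi, key):
    """Sort a[lo:hi+1] in place; fast when the slice is already nearly ordered."""
    for i in range(lo + 1, hi + 1):
        item = a[i]
        j = i - 1
        while j >= lo and key(a[j]) > key(item):
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = item


def hybrid_sort(a, key=lambda x: x):
    """Quicksort on large partitions, insertion sort on small ones."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if hi - lo + 1 <= INSERTION_CUTOFF:
            insertion_sort(a, lo, hi, key)
            continue
        pivot = key(a[(lo + hi) // 2])
        i, j = lo, hi
        while i <= j:
            while key(a[i]) < pivot:
                i += 1
            while key(a[j]) > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        stack.append((lo, j))
        stack.append((i, hi))


contacts = ["Virtanen", "Korhonen", "Nieminen", "Mäkinen"]
hybrid_sort(contacts, key=str.lower)
print(contacts)
```

With a small cutoff the algorithm behaves like quicksort on random data, while nearly sorted arrays are handled almost entirely by the insertion-sort passes, which matches the behaviour described in the abstract.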
Abstract:
Conservation laws in physics are numerical invariants of the dynamics of a system. In cellular automata (CA), a similar concept has already been defined and studied. To each local pattern of cell states a real value is associated, interpreted as the “energy” (or “mass”, or . . . ) of that pattern. The overall “energy” of a configuration is simply the sum of the energies of the local patterns appearing at different positions in the configuration. We have a conservation law for that energy if the total energy of each configuration remains constant during the evolution of the CA. For a given conservation law, it is desirable to find microscopic explanations for the dynamics of the conserved energy in terms of flows of energy from one region toward another. Often, the energy values are non-negative integers and are interpreted as the number of “particles” distributed on a configuration. In such cases, it is conjectured that one can always provide a microscopic explanation for the conservation laws by prescribing rules for the local movement of the particles. The one-dimensional case has already been solved by Fukś and Pivato. We extend this to two-dimensional cellular automata with radius-0.5 neighborhood on the square lattice. We then consider conservation laws in which the energy values are chosen from a commutative group or semigroup. In this case, the class of all conservation laws for a CA forms a partially ordered hierarchy. We study the structure of this hierarchy and prove some basic facts about it. Although the local properties of this hierarchy (at least in the group-valued case) are tractable, its global properties turn out to be algorithmically inaccessible. In particular, we prove that it is undecidable whether this hierarchy is trivial (i.e., whether the CA has any non-trivial conservation law at all) or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. We show that positively expansive CA do not have non-trivial conservation laws. We also investigate a curious relationship between conservation laws and invariant Gibbs measures in reversible and surjective CA. Gibbs measures are known to coincide with the equilibrium states of a lattice system defined in terms of a Hamiltonian. For reversible cellular automata, each conserved quantity may play the role of a Hamiltonian, and provides a Gibbs measure (or a set of Gibbs measures, in case of phase multiplicity) that is invariant. Conversely, every invariant Gibbs measure provides a conservation law for the CA. For surjective CA, the former statement also follows (in a slightly different form) from the variational characterization of the Gibbs measures. For one-dimensional surjective CA, we show that each invariant Gibbs measure provides a conservation law. We also prove that surjective CA almost surely preserve the average information content per cell with respect to any probability measure.
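As a concrete illustration of an additive conservation law (not taken from the thesis itself), the sketch below checks particle-number conservation for the elementary CA rule 184, the one-dimensional "traffic" rule, in which the energy of a cell is simply its state.

```python
# Sketch: verifying a simple additive conservation law for a 1-D cellular
# automaton.  Rule 184 (the "traffic" rule) is used because it conserves the
# number of 1-cells; the energy of a local pattern is the cell state itself.
import random

RULE = 184  # elementary CA rule number


def step(config, rule=RULE):
    """One synchronous update of a cyclic configuration."""
    n = len(config)
    return [
        (rule >> (config[(i - 1) % n] * 4 + config[i] * 2 + config[(i + 1) % n])) & 1
        for i in range(n)
    ]


def energy(config):
    """Total 'energy' = number of particles (1-cells)."""
    return sum(config)


config = [random.randint(0, 1) for _ in range(50)]
e0 = energy(config)
for _ in range(100):
    config = step(config)
    assert energy(config) == e0  # the conservation law holds at every step
print("particle number conserved:", e0)
```

The microscopic explanation in this example is exactly the kind the conjecture asks for: each particle either stays put or moves one cell to the right, so the total count cannot change.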
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, which are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
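A minimal sketch of the general approach, assuming a linear model and a reduction of pairwise preferences to regularized least squares; this is an illustration of the idea rather than the exact formulation (e.g. RankRLS) developed in the thesis.

```python
# Minimal sketch of learning a scoring function from pairwise preferences with
# regularized least squares.  Each preference "x_i is preferred to x_j" becomes
# a difference vector with target +1; an illustrative reduction only.
import numpy as np


def fit_preference_model(X, preferences, reg=1.0):
    """Return weights w such that x.w ranks preferred items higher."""
    diffs = np.array([X[i] - X[j] for i, j in preferences])  # i preferred to j
    targets = np.ones(len(preferences))
    d = X.shape[1]
    # Ridge-regression solution on the pairwise differences.
    return np.linalg.solve(diffs.T @ diffs + reg * np.eye(d), diffs.T @ targets)


rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
scores = X @ true_w
prefs = [(i, j) for i in range(20) for j in range(20) if scores[i] > scores[j]]

w = fit_preference_model(X, prefs, reg=0.1)
learned = X @ w
agree = np.mean([learned[i] > learned[j] for i, j in prefs])
print(f"fraction of training preferences reproduced: {agree:.2f}")
```

Kernelizing this model amounts to replacing the explicit feature vectors with kernel evaluations, which is where the structured-data kernels of the thesis come in.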
Abstract:
This thesis concentrates on developing a practical local-approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson–Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson–Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It was found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the predictions of ductility by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson–Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson–Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. With the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built up in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted by the present methodology.
This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems where non-homogeneous materials are involved. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
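For orientation, the sketch below evaluates the standard Gurson–Tvergaard yield function from the hydrostatic/deviatoric stress split mentioned above. The parameter values q1 = 1.5, q2 = 1.0, q3 = q1² are common defaults assumed here for illustration; they are not claimed to be the values calibrated in the thesis.

```python
# Sketch: evaluating the Gurson–Tvergaard yield function from the hydrostatic
# and deviatoric parts of the stress tensor.  q1, q2, q3 are common defaults,
# assumed for illustration only.
import numpy as np


def gurson_tvergaard(stress, f, sigma_y, q1=1.5, q2=1.0, q3=2.25):
    """Return the yield function value; <= 0 means elastic, > 0 means yielding."""
    p = np.trace(stress) / 3.0                # hydrostatic part
    s = stress - p * np.eye(3)                # deviatoric part
    q = np.sqrt(1.5 * np.sum(s * s))          # von Mises equivalent stress
    return (q / sigma_y) ** 2 + 2.0 * q1 * f * np.cosh(1.5 * q2 * p / sigma_y) \
        - (1.0 + q3 * f ** 2)


# Uniaxial stress at the matrix yield strength with a small void fraction:
sigma = np.diag([250.0, 0.0, 0.0])  # MPa
print(gurson_tvergaard(sigma, f=0.01, sigma_y=250.0))
```

The same hydrostatic/deviatoric split is what makes the explicit consistent-tangent expression of the mid-point algorithms possible.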
Abstract:
Hormone-dependent diseases, e.g. cancers, rank high in mortality in the modern world, and thus there is an urgent need for new drugs to treat these diseases. Although the diseases are clearly hormone-dependent, changes in circulating hormone concentrations do not explain all the pathological processes observed in the diseased tissues. A more inclusive explanation is provided by intracrinology – the regulation of hormone concentrations at the target tissue level. This is mediated by the expression of a pattern of steroid-activating and -inactivating enzymes in steroid target tissues, thus enabling a concentration gradient between the blood circulation and the tissue. Hydroxysteroid (17beta) dehydrogenases (HSD17Bs) form a family of enzymes that catalyze the conversion between weakly active 17-ketosteroids and highly active 17beta-hydroxysteroids. HSD17B1 converts the weakly active estrogen E1 to the highly active estradiol (E2) with high catalytic efficiency, and altered HSD17B1 expression has been associated with several hormone-dependent diseases, including breast cancer, endometriosis, endometrial hyperplasia and cancer, and ovarian epithelial cancer. Because of its putative role in E2 biosynthesis in the ovaries and peripheral target tissues, HSD17B1 is considered a promising drug target for estrogen-dependent diseases. A few studies have indicated that the enzyme also has androgenic activity, but these findings have largely been ignored. In the present study, transgenic mice overexpressing human HSD17B1 (HSD17B1TG mice) were used to study the effects of the enzyme in vivo. Firstly, the substrate specificity of human HSD17B1 was determined in vivo. The results indicated that human HSD17B1 has significant androgenic activity in female mice in vivo, which resulted in increased fetal testosterone concentration and a female disorder of sexual development appearing as a masculinized phenotype (increased anogenital distance, lack of nipples, lack of vaginal opening, a common channel of the vagina and urethra, enlarged Wolffian duct remnants in the mesovarium, and an enlarged female prostate). Fetal androgen exposure has been linked to polycystic ovary syndrome (PCOS) and metabolic syndrome during adulthood in experimental animals and humans, but the genes involved in PCOS are largely unknown. A putative mechanism for accumulating androgens during fetal life through HSD17B1 overexpression was demonstrated in the present study. Furthermore, as a result of prenatal androgen exposure locally in the ovaries, HSD17B1TG females developed ovarian benign serous cystadenomas in adulthood. These benign lesions are precursors of low-grade ovarian serous tumors. Ovarian cancer ranks fifth in mortality among all female cancers in Finland, and most ovarian cancers arise from the surface epithelium. The formation of the lesions was prevented by prenatal antiandrogen treatment and by transplanting wild-type (WT) ovaries prepubertally into HSD17B1TG females. The results obtained in our non-clinical TG mouse model, together with a literature analysis, suggest that HSD17B1 has a role in ovarian epithelial carcinogenesis, and especially in the development of serous tumors. The role of androgens in ovarian carcinogenesis is considered controversial, but the present study provides further evidence for the androgen hypothesis. Moreover, it directly links HSD17B1-induced prenatal androgen exposure to ovarian epithelial carcinogenesis in mice. As expected, significant estrogenic activity was also detected for human HSD17B1.
HSD17B1TG mice had enhanced peripheral conversion of E1 to E2 in a variety of target tissues, including the uterus. Furthermore, this activity was significantly decreased by treatment with specific HSD17B1 inhibitors. As a result, several estrogen-dependent disorders were found in HSD17B1TG females. Here we report that HSD17B1TG mice invariably developed endometrial hyperplasia and failed to ovulate in adulthood. As in humans, endometrial hyperplasia in HSD17B1TG females was reversible upon ovulation induction, triggering a rise in circulating progesterone levels, and in response to exogenous progestins. Remarkably, treatment with an HSD17B1 inhibitor failed to restore ovulation, yet completely reversed the hyperplastic morphology of the epithelial cells in the glandular compartment. We also demonstrate that HSD17B1 is expressed in normal human endometrium, hyperplasia, and cancer. Collectively, our non-clinical data and literature analysis suggest that HSD17B1 inhibition could be one of several possible approaches to decreasing endometrial estrogen production in endometrial hyperplasia and cancer. HSD17B1 expression has also been found in the bones of humans and rats. The non-clinical data in the present study suggest that human HSD17B1 is likely to play an important role in the regulation of bone formation, strength and length during the reproductive years in female mice. Bone density in HSD17B1TG females was greatly increased in the femurs and, to a lesser extent, in the tibias. In particular, the tibial growth plate, but not other regions of bone, responded to HSD17B1 inhibition by increasing bone length, whereas the inhibitors did not affect bone density. Therefore, HSD17B1 inhibitors could be safer than aromatase inhibitors with regard to bone in the treatment of breast cancer and endometriosis. Furthermore, diseases related to improper growth are a promising new indication for HSD17B1 inhibitors.
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure throughout a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music-analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of making comparisons between observations concerning different musical parameters and of combining it with statistical and perhaps other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure. New similarity measures for tonal stability, rhythmic similarity and set-class similarity were proposed. The most advanced results were attained by employing automated function generation – comparable to so-called genetic programming – to search for an optimal model for set-class similarity measurement. However, the results of CSA seem to agree strongly, independently of the type of similarity function employed in the analysis.
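A toy sketch of the core idea follows: segment a piece, derive a comparison structure (here a pitch-class set) from each segment, and trace its similarity to a reference structure over time. The segmentation and the Jaccard similarity below are deliberately simple stand-ins, not the measures developed in the thesis.

```python
# Toy sketch of the comparison-structure idea: fixed-size segments, pitch-class
# sets as the comparison structure, and a simple set similarity as the measure.

def pitch_class_set(notes):
    """Reduce MIDI note numbers to a set of pitch classes 0-11."""
    return frozenset(n % 12 for n in notes)


def jaccard(a, b):
    """A simple set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 1.0


def comparison_curve(piece, window, reference):
    """Prevalence of the reference structure across fixed-size segments."""
    return [
        jaccard(pitch_class_set(piece[i:i + window]), reference)
        for i in range(0, len(piece), window)
    ]


melody = [60, 64, 67, 72, 62, 65, 69, 71, 60, 63, 66, 69]  # MIDI pitches
c_major_triad = pitch_class_set([60, 64, 67])
print(comparison_curve(melody, window=4, reference=c_major_triad))
```

Swapping the comparison structure or the similarity function retunes the apparatus to a different set of musical properties, which is exactly the flexibility the abstract describes.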
Abstract:
The main objective of this thesis is to examine and model a configuration system and its related processes: when and where configuration information is created in the product development process, and how it is utilized in the order-delivery process. From the information point of view, these two processes are the essential parts of the whole configuration system. The empirical part of the work was done as constructive research inside a company that follows a mass customization approach. Data models and documentation were created for different development stages of the configuration system. A base data model already existed for the new structures and the relations between these structures; this model was used as the basis for the later data modeling work. The data models include different data structures, their key objects and attributes, and the relations between them. The representation of configuration rules for the to-be configuration system was defined as one of the key focus points. Further, it is examined how customer needs and requirements information can be integrated into the product development process. A requirements hierarchy and classification system is presented, and it is shown how individual requirement specifications can be connected to the physical design structure via features by developing the existing base data model further.
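A minimal sketch of the kind of data model discussed above: requirements linked to the physical design structure via features, with configuration rules expressed as constraints over selected features. All class, attribute and feature names below are hypothetical illustrations, not the company's actual model.

```python
# Hypothetical sketch: requirements -> features -> design items, plus a
# configuration rule as a predicate over the selected feature set.
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    design_items: list = field(default_factory=list)   # physical design structure


@dataclass
class Requirement:
    rid: str
    text: str
    category: str                        # position in the requirements hierarchy
    features: list = field(default_factory=list)


@dataclass
class ConfigurationRule:
    description: str
    check: callable                      # predicate over selected feature names

    def is_satisfied(self, selection):
        return self.check(selection)


heater = Feature("auxiliary-heater", design_items=["heater-unit", "wiring-harness"])
req = Requirement("R-101", "Cold-climate operation", "environment", [heater])
rule = ConfigurationRule(
    "Auxiliary heater requires the larger battery",
    lambda sel: "auxiliary-heater" not in sel or "battery-90Ah" in sel,
)
print(rule.is_satisfied({"auxiliary-heater", "battery-90Ah"}))  # True
```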
Abstract:
The present article shows that there are consistent and decidable many-valued systems of propositional logic which satisfy two or all three of the criteria for non-trivial inconsistent theories given by da Costa (1974). The weaker of these paraconsistent systems is also able to avoid a series of paradoxes which come up when classical logic is applied to the empirical sciences. These paraconsistent systems are based on a 6-valued system of propositional logic developed for avoiding difficulties in several domains of empirical science (Weingartner (2009)).
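To illustrate what paraconsistency means in practice, the sketch below uses Priest's well-known three-valued logic LP (not the six-valued system of the article) and checks semantically that a contradiction does not entail an arbitrary conclusion, i.e. that "explosion" is blocked.

```python
# Sketch of a small paraconsistent system -- Priest's three-valued logic LP,
# not the six-valued system of the article -- showing that an inconsistent
# premise fails to entail an arbitrary conclusion.
from itertools import product

F, B, T = 0, 1, 2            # false, both, true; B and T are designated
DESIGNATED = {B, T}


def neg(a):
    return 2 - a


def conj(a, b):
    return min(a, b)


def entails(premise, conclusion, atoms=("p", "q")):
    """Every valuation designating the premise must designate the conclusion."""
    for values in product((F, B, T), repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if premise(v) in DESIGNATED and conclusion(v) not in DESIGNATED:
            return False
    return True


contradiction = lambda v: conj(v["p"], neg(v["p"]))   # p and not-p
arbitrary = lambda v: v["q"]
print(entails(contradiction, arbitrary))              # False: no explosion
print(entails(lambda v: v["p"], lambda v: v["p"]))    # True: identity holds
```

The counterexample valuation assigns p the value "both", so the contradictory premise is designated while the unrelated conclusion q is simply false.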
Abstract:
The goal of this study is to identify which topics in practical programming university students at the beginning of their studies consider the most difficult, and to compile lecture notes for use on the next run of the Käytännön ohjelmointi (Practical Programming) course. The research method was constructive: after specifying the goal, the lecture notes were implemented by compiling source material on the defined topic areas into a unified, readable whole. Universities generally do not teach software testing before advanced software engineering courses, which is a shortcoming from the perspective of working life. This work presents arguments for emphasizing practical topic areas in programming courses already in the early stages of university studies. The work analyses course feedback from the Practical Programming course, in which students identified linked lists, pointers, dynamic memory management, data structures and version control as the most difficult topics of the course. The work aims to improve the university-level teaching of practical programming at Lappeenranta University of Technology through lecture material that contains, among other things, theory, the key commands students need, web links and a programming style guide.
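As an illustration of one of the topics reported as difficult, here is a minimal singly linked list. It is written in Python for brevity, so C pointers become object references; the course material itself would express the same structure with structs and malloc/free.

```python
# Illustration of one of the difficult topics: a minimal singly linked list.
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node          # reference to the next node, or None


class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        """Insert at the head; the new node points to the old head."""
        self.head = Node(value, self.head)

    def delete(self, value):
        """Unlink the first node holding value, if any."""
        prev, cur = None, self.head
        while cur is not None:
            if cur.value == value:
                if prev is None:
                    self.head = cur.next
                else:
                    prev.next = cur.next
                return True
            prev, cur = cur, cur.next
        return False

    def __iter__(self):
        cur = self.head
        while cur is not None:
            yield cur.value
            cur = cur.next


lst = LinkedList()
for v in (3, 2, 1):
    lst.push_front(v)
lst.delete(2)
print(list(lst))   # [1, 3]
```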
Abstract:
This thesis belongs to the field of string algorithmics. A string S is a common subsequence of strings X[1..m] and Y[1..n] if it can be formed by deleting 0..m characters from X and 0..n characters from Y at arbitrary positions. If no common subsequence of X and Y is longer than S, then S is a longest common subsequence (LCS) of X and Y. This work focuses on computing the LCS of two strings, but the problem can also be generalized to more than two strings. The LCS problem has applications not only in computer science but also in bioinformatics. The best known of these are text and image compression, file version control, pattern recognition, and comparative studies of the structure of DNA and protein chains. Solving the problem is made difficult by the dependence of the solution algorithms on several parameters of the input strings. In addition to the lengths of the input strings, these include the size of the input alphabet, the character distribution of the inputs, the ratio of the LCS length to the length of the shorter input string, and the number of matching character pairs. It is therefore difficult to develop an algorithm that would perform efficiently on all instances of the problem. On the one hand, the thesis is intended to serve as a handbook that, after describing the basic concepts of the problem, presents previously developed exact LCS algorithms. Their treatment is grouped according to the algorithms' processing model: one row, one contour or one diagonal at a time, or processing in multiple directions. In addition to the exact methods, heuristic methods that compute an upper or lower bound for the LCS length are presented; their results can be used either as such or to guide the execution of an exact algorithm. This part is based on articles published by our research group, which discuss exact methods enhanced with heuristics for the first time. On the other hand, the work contains a fairly extensive empirical part, the goal of which has been to improve the running time and memory usage of existing exact algorithms. This goal has been pursued at the implementation level by introducing data structures that support the algorithms' processing models well, and by limiting fruitless computation by improving the algorithms' ability to observe and exploit intermediate results obtained during execution. As a general conclusion, heuristic preprocessing of exact LCS algorithms almost systematically reduces their running time and, in particular, their memory requirements. Furthermore, the data structure used by an algorithm has a decisive effect on computational efficiency: the more local the search and update operations are, the more efficient the algorithm's computation.
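For reference, the baseline dynamic-programming solution of the LCS problem is sketched below. It runs in O(mn) time and space; the exact algorithms surveyed in the thesis improve on this by processing rows, contours or diagonals more selectively.

```python
# Baseline dynamic-programming LCS: O(mn) time and space.
def lcs(x, y):
    m, n = len(x), len(y)
    # d[i][j] = length of the LCS of x[:i] and y[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
            else:
                d[i][j] = max(d[i - 1][j], d[i][j - 1])
    # Backtrack to recover one LCS.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif d[i - 1][j] >= d[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))


print(lcs("GAATTCAGTTA", "GGATCGA"))   # prints one LCS of the two sequences
```

The parameters mentioned in the abstract (alphabet size, match count, LCS ratio) determine how much of this full m-by-n table an exact algorithm can avoid filling.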
Abstract:
PURPOSE: To investigate the prevalence of chromosomal abnormalities in couples with two or more recurrent first-trimester miscarriages of unknown cause. METHODS: The study was conducted on 151 women and 94 partners who had an obstetrical history of two or more consecutive first-trimester abortions (1-12 weeks of gestation). The controls were 100 healthy women without a history of pregnancy loss. Chromosomal analysis was performed on peripheral blood lymphocytes cultured for 72 hours, using Trypsin-Giemsa (GTG) banding. In all cases, at least 30 metaphases were analyzed and 2 karyotypes were prepared, using light microscopy. The statistical analysis was performed using Student's t-test for normally distributed data and the Mann-Whitney test for non-parametric data. The Kruskal-Wallis test or Analysis of Variance was used to compare the mean values between three or more groups. The software used was the Statistical Package for the Social Sciences (SPSS), version 17.0. RESULTS: The frequency of chromosomal abnormalities in women with recurrent miscarriages was 7.3%, including 4.7% with X-chromosome mosaicism, 2% with reciprocal translocations and 0.6% with Robertsonian translocations. A total of 2.1% of the partners of women with recurrent miscarriages had chromosomal abnormalities, including 1% with X-chromosome mosaicism and 1% with inversions. Among the controls, 1% had mosaicism. CONCLUSION: An association between chromosomal abnormalities and recurrent miscarriage in the first trimester of pregnancy (OR = 7.7; 95%CI 1.2–170.5) was observed in the present study. Etiologic identification of genetic factors provides important clinical information for genetic counseling and orientation of the couple about the risk for future pregnancies, and decreases the number of investigations needed to elucidate the possible causes of miscarriages.
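For readers unfamiliar with the reported statistic, the sketch below shows how an odds ratio and its 95% confidence interval of the kind quoted above are typically computed from a 2×2 table (Woolf's logit method). The counts are illustrative placeholders, not the study's data.

```python
# Sketch: odds ratio and 95% CI from a 2x2 table (Woolf's method).
# The counts below are illustrative placeholders, not the study's data.
import math


def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = abnormality present in cases/controls; c,d = absent in cases/controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi


print(odds_ratio_ci(a=10, b=2, c=90, d=98))   # hypothetical counts
```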
Abstract:
The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that eventually may lead to a systemic financial crisis. Thriving tools are crucial as they allow early policy actions to decrease or prevent further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus concerns a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to use the display as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise subsequent steps addressed in the process of this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. This thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
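A minimal from-scratch sketch of the Self-Organizing Map training loop that underlies the SOFSM: high-dimensional indicator vectors are mapped onto a two-dimensional grid of reference vectors. The grid size, learning rate and neighbourhood radius below are illustrative choices, not the SOFSM's actual configuration.

```python
# Minimal SOM training sketch (illustrative parameters, not the SOFSM setup).
import numpy as np

rng = np.random.default_rng(1)
grid_h, grid_w, dim = 6, 8, 10          # 6x8 map for 10 indicators
weights = rng.normal(size=(grid_h, grid_w, dim))
coords = np.dstack(np.meshgrid(np.arange(grid_w), np.arange(grid_h)))  # (h, w, 2)


def train(data, epochs=50, lr0=0.5, radius0=3.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        radius = max(radius0 * (1 - t / epochs), 0.5)
        for x in data:
            # Best-matching unit = node whose reference vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Neighbourhood function: Gaussian over grid distance to the BMU.
            grid_d = np.linalg.norm(coords - np.array([bmu[1], bmu[0]]), axis=2)
            h = np.exp(-(grid_d ** 2) / (2 * radius ** 2))
            weights[:] += lr * h[..., None] * (x - weights)


data = rng.normal(size=(200, dim))      # stand-in for country-level indicators
train(data)
print("trained map shape:", weights.shape)
```

Once trained, each entity (or each point of its time series) is plotted at its best-matching unit, giving the kind of low-dimensional display on which the thesis layers additional risk information.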
Abstract:
This work studies data transmission with different modulations, bit rates and amplitude levels, and the results are examined in terms of the bit error ratio (BER). Signals were also transmitted in coded form, and the advantages and disadvantages of coding were compared with uncoded data. The data stream travels in an AXMK cable, either together with the DC supply or in the grounding cable. The results showed that a higher bit rate did not increase the number of errors, while the use of coding reduced the number of bit errors.
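The sketch below shows how a coded-versus-uncoded BER comparison of this kind is computed, using random bits through a simulated noisy binary channel and a simple threefold repetition code. The channel error probability is an assumed illustrative value, not a measured property of the AXMK cable.

```python
# Sketch of a bit-error-ratio comparison: uncoded vs. 3-fold repetition coding
# over a simulated binary channel.  P_FLIP is an assumed value.
import random

random.seed(0)
P_FLIP = 0.05                      # channel bit-flip probability (assumed)


def channel(bits):
    return [b ^ (random.random() < P_FLIP) for b in bits]


def ber(sent, received):
    return sum(s != r for s, r in zip(sent, received)) / len(sent)


data = [random.randint(0, 1) for _ in range(20000)]

# Uncoded transmission.
uncoded_ber = ber(data, channel(data))

# 3-fold repetition coding with majority-vote decoding.
coded = [b for b in data for _ in range(3)]
rx = channel(coded)
decoded = [int(sum(rx[3 * i:3 * i + 3]) >= 2) for i in range(len(data))]
coded_ber = ber(data, decoded)

print(f"uncoded BER: {uncoded_ber:.4f}, coded BER: {coded_ber:.4f}")
```

The coded BER comes out clearly lower, mirroring the finding that coding reduced the number of bit errors at the cost of a higher raw symbol rate.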
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from O(N) of the previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, which is in contrast to previous methods which scale in the height field size. As a result the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point but have been previously limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray traced screen-space reference are obtained at real-time render times.
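To make the quantity being accelerated concrete, the sketch below computes the horizon angle of one height-field point along one azimuthal direction using the straightforward O(N) per-receiver scan; it is the baseline that the thesis reduces to amortized O(1) by reusing information gathered along the sweep.

```python
# Sketch of the baseline horizon-angle scan for a single receiver and a single
# azimuthal direction; O(N) per receiver, which the thesis improves upon.
import math
import numpy as np


def horizon_angle(height, x, y, dx, dy, cell_size=1.0):
    """Largest elevation angle toward direction (dx, dy) as seen from (x, y)."""
    h0 = height[y, x]
    best = -math.pi / 2                      # completely open horizon
    step = 1
    while (0 <= x + step * dx < height.shape[1]
           and 0 <= y + step * dy < height.shape[0]):
        h = height[y + step * dy, x + step * dx]
        dist = step * cell_size * math.hypot(dx, dy)
        best = max(best, math.atan2(h - h0, dist))
        step += 1
    return best


rng = np.random.default_rng(0)
terrain = rng.random((64, 64)) * 10.0
print(horizon_angle(terrain, x=10, y=20, dx=1, dy=0))
```

Environment light arriving above this angle is unoccluded, which is why the horizon map is sufficient for shadowing a height field from an environment light.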
Abstract:
An augmented reality (AR) device must know the observer's location and orientation, i.e. the observer's pose, to be able to correctly register the virtual content to the observer's view. One possible way to determine and continuously follow up the pose is model-based visual tracking. It assumes that a 3D model of the surroundings is known and that a video camera is fixed to the device. The pose is tracked by comparing the video camera image to the model. Each new pose estimate is usually based on the previous estimate. However, the first estimate must be found without a prior estimate, i.e. the tracking must be initialized, which in practice means that some features must be identified in the image and matched to model features. This is known in the literature as the model-to-image registration problem or the simultaneous pose and correspondence problem. This report reviews visual tracking initialization methods that are suitable for visual tracking in a shipbuilding environment when the ship CAD model is available. The environment is complex, which makes the initialization non-trivial. The report has been written as part of the MARIN project.
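Once initialization has hypothesized a set of 2D–3D correspondences, the pose itself follows from a standard perspective-n-point (PnP) solve. The sketch below illustrates that final step; the 3D points, image points and camera intrinsics are made-up values, and OpenCV's solvePnP stands in for whatever pose solver an actual initialization method would use.

```python
# Sketch: recovering the camera pose from hypothesized 2D-3D correspondences
# with a standard PnP solver.  All numeric values are illustrative.
import numpy as np
import cv2

# Hypothetical 3D model points (metres, model frame) and their detected
# matches in the image (pixels).
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 238], [424, 340], [318, 344],
                         [330, 150], [430, 148]], dtype=np.float64)

fx = fy = 800.0                       # assumed focal length in pixels
camera_matrix = np.array([[fx, 0, 320], [0, fy, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)             # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
print("pose found:", ok)
print("rotation (Rodrigues vector):", rvec.ravel())
print("translation:", tvec.ravel())
```

The hard part that the reviewed methods address is producing reliable correspondences in the first place; with wrong matches the same solve yields a wrong pose, which is why robust variants (e.g. RANSAC-based) are commonly used.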