989 results for Nature inspired algorithms
Abstract:
In 1859, Charles Darwin published his theory of evolution by natural selection, a process driven by fitness benefits and fitness costs at the individual level. Traditionally, evolution has been investigated by biologists, but it has also inspired mathematical approaches. For example, adaptive dynamics has proven to be a highly applicable framework for this purpose. Its core concept is the invasion fitness, whose sign tells whether a mutant phenotype can invade the prevalent phenotype. In this thesis, four real-world applications to evolutionary questions are provided. Inspiration for the first two studies arose from a cold-adapted species, the American pika. First, it is studied how global climate change may affect the evolution of dispersal and the viability of pika metapopulations. Based on the results gained here, it is shown that the evolution of dispersal can result in extinction and, indeed, that the evolution of dispersal should be incorporated into the viability analysis of species living in fragmented habitats. The second study focuses on the evolution of density-dependent dispersal in metapopulations with small habitat patches. It revealed a surprising and unintuitive evolutionary phenomenon: how non-monotone density-dependent dispersal may evolve. Cooperation is surprisingly common at many levels of life, despite its obvious vulnerability to selfish cheating. This motivated two applications. First, it is shown that density-dependent cooperative investment can evolve to have a qualitatively different, monotone or non-monotone, form depending on modelling details. The last study investigates the evolution of investment in two public-goods resources. The results suggest one general path by which division of labour can arise via evolutionary branching. In addition to the applications, two novel methodological derivations of fitness measures in structured metapopulations are given.
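As a rough illustration of the invasion-fitness sign test mentioned above, the sketch below checks whether a rare mutant can invade a resident sitting at its ecological equilibrium in a simple Lotka-Volterra-type competition model. The model, the trait-dependent birth rate, the competition kernel, and all parameter values are illustrative assumptions and are not taken from the thesis.

```python
import math

def resident_equilibrium(x, b, d, a):
    """Equilibrium density of a resident with trait x in a simple
    logistic competition model: dn/dt = n * (b(x) - d - a(x, x) * n)."""
    return (b(x) - d) / a(x, x)

def invasion_fitness(y, x, b, d, a):
    """Per-capita growth rate of a rare mutant with trait y in the
    environment set by the resident x at its equilibrium density.
    A positive sign means the mutant can invade."""
    n_hat = resident_equilibrium(x, b, d, a)
    return b(y) - d - a(y, x) * n_hat

# Illustrative (assumed) trait-dependent birth rate and competition kernel.
b = lambda x: 2.0 * math.exp(-x**2)            # birth rate peaks at x = 0
a = lambda y, x: math.exp(-0.5 * (y - x)**2)   # competition strongest between similar traits
d = 0.5                                        # trait-independent death rate

resident, mutant = 0.8, 0.6
s = invasion_fitness(mutant, resident, b, d, a)
print(f"invasion fitness s({mutant}, {resident}) = {s:+.3f}",
      "-> mutant can invade" if s > 0 else "-> mutant cannot invade")
```

In adaptive dynamics, repeating this sign test for successive mutants is what drives the long-term change of the resident trait.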
Abstract:
Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights, a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from the O(N) of previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, in contrast to previous methods, which scale with the height field size. As a result, the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point but have previously been limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray-traced screen-space reference are obtained at real-time render times.
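To make the horizon-map idea concrete, the sketch below computes, for a single sweep direction over a one-dimensional height profile, the maximum elevation angle toward any occluder at each sample. This is the per-direction quantity a horizon map stores; the brute-force O(N)-per-receiver scan shown here is the kind of baseline the thesis improves to amortized O(1) with its incremental traversal, which is not reproduced. The array layout, single direction, and cell size are illustrative assumptions.

```python
import math

def horizon_angles_one_direction(heights, cell_size=1.0):
    """For each sample of a 1-D height profile, return the maximum elevation
    angle (radians, 0 = flat horizon) toward occluders lying in the negative
    x direction.  This is the per-direction content of a horizon map, computed
    with naive O(N) work per receiver."""
    horizons = []
    for i, h_i in enumerate(heights):
        best = 0.0
        for j in range(i):                       # every potential occluder behind i
            rise = heights[j] - h_i
            run = (i - j) * cell_size
            best = max(best, math.atan2(rise, run))
        horizons.append(best)
    return horizons

profile = [0.0, 2.0, 1.0, 0.5, 3.0, 0.0]
for i, angle in enumerate(horizon_angles_one_direction(profile)):
    print(f"x={i}: horizon {math.degrees(angle):5.1f} degrees")
```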
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods under only mild assumptions about the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
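As a small illustration of the ingredients such ridge methods work with, the sketch below evaluates the gradient and Hessian of an isotropic Gaussian kernel density estimate and checks the usual one-dimensional ridge condition (the gradient has no component along the Hessian eigenvectors with the smallest eigenvalues, and those eigenvalues are negative). It only tests the condition at a given point; the thesis's trust region Newton projection and ridge tracing are not reproduced, and the data, bandwidth, and tolerance are illustrative assumptions.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Gradient and Hessian at point x of an isotropic Gaussian kernel
    density estimate with bandwidth h built from the rows of `data`."""
    diffs = x - data                                    # (n, d)
    weights = np.exp(-0.5 * np.sum(diffs**2, axis=1) / h**2)
    d = data.shape[1]
    norm = (2 * np.pi * h**2) ** (d / 2) * len(data)    # KDE normalization
    grad = -(weights[:, None] * diffs).sum(axis=0) / (h**2 * norm)
    outer = np.einsum('n,ni,nj->ij', weights, diffs, diffs) / h**4
    hess = (outer - weights.sum() * np.eye(d) / h**2) / norm
    return grad, hess

def on_ridge(x, data, h, tol=1e-2):
    """Check the 1-D ridge condition at x: the gradient has (numerically) no
    component along the eigenvectors of the d-1 smallest Hessian eigenvalues,
    and those eigenvalues are negative."""
    grad, hess = kde_grad_hess(x, data, h)
    eigval, eigvec = np.linalg.eigh(hess)               # ascending eigenvalues
    v_small = eigvec[:, :-1]                            # drop the largest one
    orthogonal = np.linalg.norm(v_small.T @ grad) <= tol * (np.linalg.norm(grad) + 1e-12)
    return orthogonal and bool(np.all(eigval[:-1] < 0))

# Illustrative data: noisy samples along a curve, plus a query point.
rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, 300)
data = np.c_[t, np.sin(t)] + 0.05 * rng.standard_normal((300, 2))
print("ridge condition at (0, 0):", on_ridge(np.array([0.0, 0.0]), data, h=0.3))
```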
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
This thesis considers optimization problems arising in printed circuit board (PCB) assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified, and reviewed. The derived subproblems are then either solved with exact methods or addressed with newly developed heuristic algorithms. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests ensure that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
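One subproblem that typically appears in such a decomposition is sequencing the placement operations of the gantry head. The sketch below shows a generic greedy nearest-neighbour heuristic for that kind of sequencing task; it is an illustration only, not one of the thesis's algorithms, and the coordinates, starting point, and omission of feeder, nozzle, and pickup constraints are simplifying assumptions.

```python
import math

def greedy_placement_order(points, start=(0.0, 0.0)):
    """Order component placement locations with a greedy nearest-neighbour
    rule: always move to the closest unplaced component next.  A simple
    constructive heuristic for a placement-sequencing subproblem; real
    machines add feeder, nozzle, and pickup-tour constraints."""
    remaining = list(points)
    order, position = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(position, p))
        remaining.remove(nxt)
        order.append(nxt)
        position = nxt
    return order

# Hypothetical placement coordinates on a board.
placements = [(30, 5), (2, 3), (28, 6), (5, 20), (3, 18)]
print(greedy_placement_order(placements))
```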
Abstract:
Pilocarpine-induced (320 mg/kg, ip) status epilepticus (SE) in adult (2-3 months) male Wistar rats results in extensive neuronal damage in limbic structures. Here we investigated whether the induction of a second SE (N = 6) would generate damage and cell loss similar to that seen after a first SE (N = 9). Counts of silver-stained cells (indicative of cell damage), using the Gallyas argyrophil III method, revealed markedly lower neuronal injury in animals submitted to re-induction of SE compared to rats exposed to a single episode of pilocarpine-induced SE. This effect could be explained as follows: 1) the first SE removes the vulnerable cells, leaving behind resistant cells that are not affected by the second SE; 2) the first SE confers increased resistance to the remaining cells, analogous to the process of ischemic tolerance. Counting of Nissl-stained cells was performed to differentiate between these alternative mechanisms. Our data indicate that different neuronal populations react differently to SE induction. For some brain areas, most, if not all, of the vulnerable cells are lost after an initial insult, leaving only relatively resistant cells and little room for further damage or cell loss. For other brain areas, in contrast, our data support the hypothesis that surviving cells might be modified by the initial insult, which would confer a sort of excitotoxic tolerance. As a consequence of both mechanisms, subsequent insults after an initial insult result in very little damage regardless of their intensity.
Abstract:
For copyright reasons, the technical quality of the images in the full text of the work has been reduced. ---- I study the environmental relationship of one worker, the painter Frans Lind (1903–1988), from his childhood to his last years of life; in other words, I write his environmental biography. I examine his relationship to nature and to the cultural environment in his leisure time. In addition to an introductory section, the study comprises one article in Finnish and four in English. In particular, I study Lind's places and environmental experiences; I examine his homes and other important places; I discuss his relationship to water and to bodies of water; I explore his painting hobby and the dimensions of his environmental experiences; and I demonstrate the importance of oral history in interpreting experiences. My key sources are the oral history interviews I conducted with Lind and 15 other people in 1985–2004, as well as some fifty landscapes painted by Lind, which document nature, private houses, public buildings, and urban views. In addition, I use documents, newspaper material, historical works, and my own observations. I approach Lind's environmental relationship from the perspectives of cultural-historical environmental research, biographical history, and oral history. An environmental relationship means everything a person does in their environment, to their environment, and under the influence or inspiration of their environment. It is a social, historical, and cultural phenomenon that develops from early childhood onwards, starting from the home, and continues to take shape until the end of life. Lind's environmental relationship developed under the influence of different people, at different times, and in different activities, and it became concrete in places that were experienced, remembered, and narrated. These places were also short-term, long-term, or one-off; active or passive; present or past. The classification of Lind's places that I have compiled can also be used to study other people's relationships to place. The multidimensionality and richness of Lind's environmental relationship was evident in both the interviews and the landscape paintings. His environmental experiences were everyday in nature, but they also revealed dimensions of emotion, knowledge, nostalgia, progress, ownership, and movement. My study provides a starting point for investigating the environmental relationship of anyone, especially of a living person.
Abstract:
We compared the cost-benefit of two algorithms recently proposed by the Centers for Disease Control and Prevention, USA, with that of the conventional one, in order to determine the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using s/co ratios that show ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, with IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar number of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant with one another. Indeterminate results from algorithms A and C were resolved by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, which were 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples. Algorithm B provides early information about the presence of viremia.
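The routing logic of algorithms A and B, as described above, can be sketched as follows. The specific s/co cutoff is not given in the abstract and is left as a parameter, and the stubbed test results, function names, and return values are illustrative assumptions.

```python
def algorithm_a(sco_ratio, high_sco_cutoff, immunoblot):
    """Algorithm A: ELISA-reactive samples with an s/co ratio at or above a
    cutoff chosen for high concordance with immunoblot are reported positive
    without supplemental testing; lower ratios reflex to immunoblot (IB).
    The cutoff value is an assumed parameter, not taken from the abstract."""
    if sco_ratio >= high_sco_cutoff:
        return "positive"
    return immunoblot()          # "positive", "negative", or "indeterminate"

def algorithm_b(pcr, immunoblot):
    """Algorithm B: ELISA-positive or -inconclusive samples reflex to nucleic
    acid testing (PCR); PCR-negative samples are resolved with immunoblot."""
    if pcr() == "detected":
        return "positive"
    return immunoblot()

# Illustrative use with stubbed test results.
print(algorithm_a(11.2, high_sco_cutoff=10.0, immunoblot=lambda: "positive"))
print(algorithm_b(pcr=lambda: "not detected", immunoblot=lambda: "negative"))
```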
Abstract:
Questions concerning perception are as old as the field of philosophy itself. Taking the first-person perspective as its starting point and drawing on philosophical texts, the study examines the relationship between knowledge and perception. The problem is that of how one knows what one immediately perceives. The everyday belief that an object of perception is known to be a material object on the grounds of perception is shown to be unreliable. It is possible that directly perceived sensible particulars are mind-internal images, shapes, sounds, touches, tastes, and smells. According to the appearance/reality distinction, the world of perception is the apparent realm, not the real external world. However, the distinction does not necessarily refute the existence of the external world. We have a causal connection with the external world via mind-internal particulars, and therefore we have indirect knowledge about the external world through perceptual experience. The research especially concerns the reasons for George Berkeley's claim that material things are mind-dependent ideas that really are perceived. The necessity of a perceiver's own qualities for perceptual experience, such as mind, consciousness, and the brain, supports the causal theory of perception. Finally, it is asked why mind-internal entities are present when perceiving an object. Perception would not directly discern material objects without the presupposition of extra entities located between a perceiver and the external world. Nevertheless, the results show that perception is not sufficient to know what a perceptual object is, and that the existence of appearances is necessary to know that the external world is being perceived. However, the impossibility of matter does not follow from Berkeley's theory. The main result of the research is that singular knowledge claims about the external world never refer directly and immediately to the objects of the external world. A perceiver's own qualities affect how perceptual objects appear in a perceptual situation.
Abstract:
Hypertrophy is a major predictor of progressive heart disease and carries an adverse prognosis. MicroRNAs (miRNAs) that accumulate during the course of cardiac hypertrophy may participate in the process. However, the nature of any interaction between a hypertrophy-specific signaling pathway and aberrant expression of miRNAs remains unclear. In this study, male Sprague Dawley rats were subjected to transverse aortic constriction (TAC) surgery to mimic pathological hypertrophy. Hearts were isolated from TAC and sham-operated rats (n=5 for each group at 5, 10, 15, and 20 days after surgery) for miRNA microarray assay. The miRNAs aberrantly expressed during hypertrophy were further analyzed using a combination of bioinformatics algorithms in order to predict possible targets. Increased expression of the target genes identified in diverse signaling pathways was also analyzed. Two sets of miRNAs were identified, showing different expression patterns during hypertrophy. Bioinformatics analysis suggested that the miRNAs may regulate multiple hypertrophy-specific signaling pathways by targeting the member genes, and that the interaction of miRNA and mRNA might form a network that leads to cardiac hypertrophy. In addition, the multifold changes in several miRNAs suggested that upregulation of rno-miR-331*, rno-miR-3596b, and rno-miR-3557-5p and downregulation of rno-miR-10a, miR-221, miR-190, and miR-451 could serve as prognostic biomarkers in the clinical therapy of heart failure. This study described, for the first time, a potential mechanism of cardiac hypertrophy involving multiple signaling pathways that control up- and downregulation of miRNAs. It represents a first step in the systematic discovery of miRNA function in cardiovascular hypertrophy.
Abstract:
Our objective was to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs in whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%); the latter also had the maximal area under the ROC curve, 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%); here the former had the maximal area under the ROC curve, 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs, with the V2 transition ratio being the most sensitive and the V2 R wave duration and R/S wave amplitude indices the most specific. Among all patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximal area under the ROC curve.
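As an illustration of one of the compared criteria, the sketch below computes a V2 transition ratio, assuming the commonly cited definition: the percentage R wave, R/(R+S), in lead V2 during the arrhythmia divided by the percentage R wave in V2 during sinus rhythm, with values of roughly 0.6 or above suggesting a left-sided origin. The definition, cutoff, and amplitude values are assumptions taken from the general literature rather than from this abstract, and the original algorithm publications should be consulted for the exact criteria.

```python
def percentage_r_wave(r_amplitude, s_amplitude):
    """R/(R+S) using amplitudes measured in lead V2."""
    return r_amplitude / (r_amplitude + s_amplitude)

def v2_transition_ratio(r_vt, s_vt, r_sinus, s_sinus):
    """V2 transition ratio: percentage R wave in V2 during the outflow tract
    arrhythmia divided by the percentage R wave in V2 during sinus rhythm.
    Definition and ~0.6 cutoff are assumptions from the general literature."""
    return percentage_r_wave(r_vt, s_vt) / percentage_r_wave(r_sinus, s_sinus)

# Hypothetical amplitude measurements (mV).
ratio = v2_transition_ratio(r_vt=0.4, s_vt=0.8, r_sinus=0.3, s_sinus=1.2)
print(f"V2 transition ratio = {ratio:.2f}",
      "-> suggests LVOT origin" if ratio >= 0.6 else "-> suggests RVOT origin")
```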
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive. Yet real-time performance, high accuracy, and low power consumption are essential requirements for such systems. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware and what modifications have been made in order to adapt these algorithms to hardware.
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
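As a generic illustration of what mesh simplification does, the sketch below implements vertex clustering, one of the simplest classic simplification schemes: vertices are snapped to a uniform grid, merged per cell, and degenerate triangles are dropped. It is not one of the CAD-specific algorithms characterized in the paper, and the mesh layout (vertex list plus triangle index list) and cell size are assumptions; quality-driven methods such as quadric error metrics give better results at the cost of more bookkeeping.

```python
def vertex_clustering(vertices, faces, cell_size):
    """Simplify a triangle mesh by snapping vertices to a uniform grid:
    vertices falling into the same grid cell are merged into one
    representative (their average), and faces that collapse to fewer than
    three distinct vertices are dropped."""
    cell_of = {}          # original vertex index -> grid cell key
    clusters = {}         # grid cell key -> list of vertex positions
    for idx, (x, y, z) in enumerate(vertices):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cell_of[idx] = key
        clusters.setdefault(key, []).append((x, y, z))

    # One representative vertex per occupied cell (component-wise average).
    new_index = {key: i for i, key in enumerate(clusters)}
    new_vertices = [tuple(sum(c) / len(pts) for c in zip(*pts))
                    for pts in clusters.values()]

    new_faces = []
    for a, b, c in faces:
        fa, fb, fc = new_index[cell_of[a]], new_index[cell_of[b]], new_index[cell_of[c]]
        if len({fa, fb, fc}) == 3:            # drop degenerate triangles
            new_faces.append((fa, fb, fc))
    return new_vertices, new_faces

# Tiny illustrative mesh: two triangles sharing an edge.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0.05, 0.05, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(vertex_clustering(verts, tris, cell_size=0.5))
```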
Abstract:
Internet of Things (IoT) technologies are developing rapidly, and consequently several standards for interconnection protocols and platforms exist. The existence of heterogeneous protocols and platforms has become a critical challenge for IoT system developers. To mitigate this challenge, a few alliances and organizations have taken the initiative to build frameworks that help to integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints of a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. Therefore, a general-purpose, lightweight interoperability framework that can be used across a range of devices is required. To tackle this heterogeneity, this work introduces an embedded, distributed, and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with varying resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks. As a result of the modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource-constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries and facilitate interoperability. This thesis presents a concept implementation of the architecture and creates a foundation for future extension towards a comprehensive interoperability framework for IoT.
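To illustrate the service-bus idea in general terms, the sketch below shows a toy registry in which services are registered under a name and invoked by name, so callers need not know where or how a service runs. It is a generic illustration only, not LISA's programming interface, and all class, service, and field names are illustrative assumptions.

```python
from typing import Callable, Dict

class MiniServiceBus:
    """A toy service bus: services register a name and a handler; clients
    invoke services by name without knowing where or how they run.  Real
    frameworks such as LISA add transports, cross-node discovery, and
    resource-aware distribution, none of which is modelled here."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        """Make a handler reachable under a service name."""
        self._services[name] = handler

    def invoke(self, name: str, request: dict) -> dict:
        """Dispatch a request to the named service, or report an error."""
        if name not in self._services:
            return {"error": f"unknown service '{name}'"}
        return self._services[name](request)

# Illustrative use: register a sensor-reading service and call it by name.
bus = MiniServiceBus()
bus.register("temperature/read", lambda req: {"celsius": 21.5, "node": req["node"]})
print(bus.invoke("temperature/read", {"node": "sensor-42"}))
```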