44 results for Density-based Scanning Algorithm


Relevance:

30.00%

Publisher:

Abstract:

Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes on the system. The scaling down of technology and the increase in power density are posing a direct and consequential effect on the rise in temperature. This has resulted in an increase in cooling budgets, and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors. This makes use of noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of the global Cu nanowire for on-chip interconnects has been analyzed. It presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise the performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die which is closer to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems has been presented.
Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented. It has been shown that the proposed mapping algorithm reduces the effective area suffering from high temperatures when compared with the state of the art.
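The general idea behind thermal-aware mapping can be illustrated with a greedy sketch that places the highest-power tasks on the cores with the largest thermal headroom. This is only a hedged illustration; the dissertation's algorithm is more elaborate, and the 4x4 headroom and task power values below are invented, not data from the work.

```python
import numpy as np

# Hypothetical inputs: per-core thermal headroom (degC below the limit)
# for a 4x4 NoC, and per-task switching power (W). Invented values.
headroom = np.array([12, 15, 15, 12,
                     15, 20, 20, 15,
                     15, 20, 20, 15,
                     12, 15, 15, 12], dtype=float)
task_power = np.array([3.0, 2.5, 2.0, 1.8, 1.5, 1.2, 1.0, 0.8,
                       0.8, 0.7, 0.6, 0.5, 0.5, 0.4, 0.3, 0.2])

# Greedy thermal-aware mapping: the highest-power task is placed on the
# core with the most thermal headroom, and so on down both rankings.
tasks_hot_first = np.argsort(task_power)[::-1]
cores_cool_first = list(np.argsort(headroom)[::-1])
mapping = {int(t): int(cores_cool_first.pop(0)) for t in tasks_hot_first}
```

With these invented numbers, the hottest task lands on one of the four center cores, which have the most headroom.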

Relevance:

30.00%

Publisher:

Abstract:

The Laboratory of Intelligent Machines researches and develops energy-efficient power transmissions and automation for mobile construction machines and industrial processes. The laboratory's particular areas of expertise include mechatronic machine design using virtual technologies and simulators, and demanding industrial robotics. The laboratory has collaborated extensively with industrial actors and has participated in significant international research projects, particularly in the field of robotics. For years, dSPACE tools were the only hardware used in the lab to develop different real-time control algorithms. dSPACE's hardware systems are in widespread use in the automotive industry and are also employed in drives, aerospace, and industrial automation. But new competitors are developing sophisticated systems whose features convinced the laboratory to test new products. One of these competitors is National Instruments (NI). In order to get to know the specifications and capabilities of NI tools, an agreement was made to test an NI evaluation system. This system is used to control a 1-D hydraulic slider. The objective of this research project is to develop a control scheme for the teleoperation of a hydraulically driven manipulator, to implement a control algorithm for both human-machine and machine-task-environment interaction on the NI and dSPACE systems simultaneously, and to compare the results.
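The inner position loop of such a 1-D slider controller can be sketched as a minimal discrete-time PD controller. The mass-damper plant model and the gains below are illustrative assumptions, not the laboratory's hydraulic slider parameters.

```python
def simulate_pd_slider(x_ref, kp=80.0, kd=12.0, dt=0.001, steps=3000,
                       mass=2.0, damping=5.0):
    """Track a position reference with a discrete-time PD loop.

    The plant is a plain mass-damper; all parameters and gains are
    illustrative assumptions, not the actual hydraulic slider model.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        e = x_ref - x                    # position error
        u = kp * e - kd * v              # PD control force
        a = (u - damping * v) / mass     # mass-damper dynamics
        v += a * dt                      # explicit Euler integration
        x += v * dt
    return x

final_position = simulate_pd_slider(0.1)
```

Running the loop for three simulated seconds settles the slider onto the 0.1 m reference.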

Relevance:

30.00%

Publisher:

Abstract:

Today's electrical machine technology allows increasing the wind turbine output power by an order of magnitude compared with the technology that existed only ten years ago. However, it is sometimes argued that high-power direct-drive wind turbine generators will prove to be of limited practical importance because of their relatively large size and weight. The limited space for the generator in a wind turbine application, together with the growing use of wind energy, poses a challenge for design engineers trying to increase torque without making the generator larger. When it comes to high torque density, the limiting factor in every electrical machine is heat, and if the electrical machine parts exceed their maximum allowable continuous operating temperature, even for a short time, they can suffer permanent damage. Therefore, highly efficient thermal designs or cooling methods are needed. One of the promising solutions for enhancing the heat transfer performance of high-power, low-speed electrical machines is direct cooling of the windings. This doctoral dissertation proposes a rotor-surface-magnet synchronous generator with a fractional-slot non-overlapping stator winding made of hollow conductors, through which liquid coolant can be passed directly while current is applied, in order to increase the convective heat transfer capabilities and reduce the generator mass. The dissertation focuses on the electromagnetic design of a liquid-cooled direct-drive permanent-magnet synchronous generator (LC DD-PMSG) for a direct-drive wind turbine application. The analytical calculation of the magnetic field distribution is carried out with the aim of fast and accurate prediction of the main dimensions of the machine, especially the thickness of the permanent magnets, and of the generator electromagnetic parameters, as well as supporting the design optimization. The focus is on a generator design with a fractional-slot non-overlapping winding placed into open stator slots.
This is an a priori selection to guarantee easy manufacturing of the LC winding. A thermal analysis of the LC DD-PMSG based on a lumped-parameter thermal model is carried out to evaluate the generator's thermal performance. The thermal model was adapted to take into account the uneven copper loss distribution resulting from the skin effect, as well as the effect of temperature on the copper winding resistance and on the thermophysical properties of the coolant. The developed lumped-parameter thermal model and the analytical calculation of the magnetic field distribution can both be integrated with the presented algorithm to optimize an LC DD-PMSG design. Based on an instrumented small prototype with liquid-cooled tooth-coils, the following targets have been achieved: experimental determination of the performance of the direct liquid cooling of the stator winding and validation of the temperatures predicted by the analytical thermal model; proof of the feasibility of manufacturing the liquid-cooled tooth-coil winding; and demonstration of the objectives of the project to potential customers.
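At steady state, a lumped-parameter thermal model reduces to a linear conductance network: assembling the conductance matrix G and solving G dT = P gives the temperature rise of each node. The tiny three-node sketch below (winding, stator iron, coolant wall) uses invented conductances and losses, not the dissertation's model data.

```python
import numpy as np

# Hypothetical 3-node lumped network: winding -> stator iron -> coolant.
# All conductances (W/K), losses (W), and the coolant temperature are
# invented values for illustration.
g12, g23 = 5.0, 8.0                     # inter-node conductances
g_cool = np.array([0.1, 0.2, 20.0])     # node-to-coolant conductances
P = np.array([200.0, 50.0, 0.0])        # heat injected at each node
T_cool = 40.0                           # coolant temperature (degC)

# Assemble the conductance matrix G so that G @ dT = P, where dT is the
# steady-state temperature rise of each node above the coolant.
G = np.diag(g_cool)
for i, j, g in [(0, 1, g12), (1, 2, g23)]:
    G[i, i] += g
    G[j, j] += g
    G[i, j] -= g
    G[j, i] -= g

T = T_cool + np.linalg.solve(G, P)
```

A quick sanity check on such a network is energy balance: the heat leaving through the node-to-coolant conductances must equal the total injected loss.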

Relevance:

30.00%

Publisher:

Abstract:

Paper-based analytical technologies enable quantitative and rapid analysis of analytes in various application areas including healthcare, environmental monitoring, and food safety. Because paper is a planar, flexible, and lightweight substrate, the devices can be transported and disposed of easily. Diagnostic devices are especially valuable in resource-limited environments, where diagnosis as well as monitoring of therapy can be carried out even without electricity by using, e.g., colorimetric assays. On the other hand, platforms including printed electrodes can be coupled with hand-held readers. They enable electrochemical detection with improved reliability, sensitivity, and selectivity compared with colorimetric assays. In this thesis, different roll-to-roll compatible printing technologies were utilized for the fabrication of low-cost paper-based sensor platforms. The platforms intended for colorimetric assays and microfluidics were fabricated by patterning the paper substrates with a hydrophobic vinyl-substituted polydimethylsiloxane (PDMS) based ink. Depending on the barrier properties of the substrate, the ink either penetrates into the paper structure, creating e.g. microfluidic channel structures, or remains on the surface, creating a 2D analog of a microplate. The printed PDMS can be cured by a roll-to-roll compatible infrared (IR) sintering method. The performance of these platforms was studied by printing a glucose oxidase based ink on the PDMS-free reaction areas. The subsequent application of the glucose analyte changed the colour of the white reaction area to purple, with the colour density and intensity depending on the concentration of the glucose solution. Printed electrochemical cell platforms were fabricated on paper substrates with appropriate barrier properties by inkjet-printing metal nanoparticle based inks and IR sintering them into conducting electrodes.
Printed PDMS arrays were used for directing the liquid analyte onto predetermined spots on the electrodes. Various electrochemical measurements were carried out both with the bare electrodes and with electrodes functionalized with, e.g., self-assembled monolayers. An electrochemical glucose sensor was selected as a proof-of-concept device to demonstrate the potential of the printed electronic platforms.
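The colorimetric readout step, where colour density is turned into a concentration estimate, amounts to fitting and inverting a calibration curve. The data points below are invented for illustration, not measurements from the printed platforms.

```python
import numpy as np

# Hypothetical calibration data: glucose concentration (mM) against the
# measured colour density of the reaction spot (arbitrary units).
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
density = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

# Fit a straight-line calibration curve: density = a * conc + b.
a, b = np.polyfit(conc, density, 1)

def estimate_concentration(measured_density):
    """Invert the calibration curve for an unknown sample."""
    return (measured_density - b) / a

unknown = estimate_concentration(0.30)
```

With this synthetic curve, a spot density of 0.30 maps back to roughly 3 mM of glucose.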

Relevance:

30.00%

Publisher:

Abstract:

The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results of a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods on an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed from flexible serial links and joints. Moreover, by combining the VJM variant and the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle. Numerical results of its stiffness performance are reported.
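The core of any virtual-joint-style stiffness model is the classical congruence mapping of joint-space stiffness to the end-effector, K_C = J^{-T} K_theta J^{-1}. The sketch below applies it to a planar two-revolute-joint arm; the link lengths and virtual spring rates are assumptions for illustration, not parameters of the robot in the dissertation.

```python
import numpy as np

def cartesian_stiffness(q, k_joint, lengths=(1.0, 0.8)):
    """End-effector stiffness of a planar 2R arm with virtual joint springs.

    Implements the classical mapping K_C = J^{-T} K_theta J^{-1}, which
    lumps all compliance into joint-space springs, in the spirit of the
    virtual joint method. All numeric values here are assumptions.
    """
    q1, q2 = q
    l1, l2 = lengths
    # Jacobian of the planar two-revolute-joint arm.
    J = np.array([
        [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
        [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
    ])
    K_theta = np.diag(k_joint)        # lumped virtual joint stiffness (Nm/rad)
    J_inv = np.linalg.inv(J)
    return J_inv.T @ K_theta @ J_inv

K = cartesian_stiffness(q=(0.4, 0.9), k_joint=(500.0, 300.0))
```

Away from singular configurations the resulting Cartesian stiffness matrix is symmetric positive definite, which is a useful sanity check on any such model.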
For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse solutions, which consequently yield an infinite variety of stiffness performance. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints in the kinematics are taken as input variables, was employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), when a particular trajectory of its end-effector has been prescribed. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine due to the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy, which is constructed based on the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control to vibration suppression has been validated for a parallel manipulator in a software environment. An experimental study of the feedforward control is also included in the dissertation. The difficulty of modelling the actual system, due to unknown components in its dynamics, is noted.
As a solution, a back-propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated, respectively, by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation of a parallel manipulator, which represent three different application aspects of this method.
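The basic Levenberg-Marquardt iteration behind such training is the damped Gauss-Newton update theta <- theta - (J^T J + lam I)^{-1} J^T r. The bare-bones sketch below applies it to a sinusoidal output approximation, echoing the first validation case; it is not the modified algorithm of the dissertation, and all numbers are illustrative.

```python
import numpy as np

def levenberg_marquardt(f, jac, theta, t, y, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt loop for nonlinear least squares.

    Update: theta <- theta - (J^T J + lam*I)^-1 J^T r, with the damping
    factor lam halved after an accepted step and doubled after a
    rejected one. A bare-bones sketch, not the thesis's modified method.
    """
    for _ in range(iters):
        r = f(theta, t) - y
        J = jac(theta, t)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(theta)), J.T @ r)
        candidate = theta - step
        if np.sum((f(candidate, t) - y) ** 2) < np.sum(r ** 2):
            theta, lam = candidate, lam * 0.5   # accept: trust the model more
        else:
            lam *= 2.0                          # reject: increase damping
    return theta

# Toy validation: recover amplitude and angular frequency of a sinusoid.
model = lambda th, t: th[0] * np.sin(th[1] * t)
jacobian = lambda th, t: np.column_stack(
    [np.sin(th[1] * t), th[0] * t * np.cos(th[1] * t)])

t = np.linspace(0.0, 2.0 * np.pi, 200)
y = 1.5 * np.sin(2.0 * t)
theta = levenberg_marquardt(model, jacobian, np.array([1.0, 1.8]), t, y)
```

Starting near the true parameters, the damped iteration recovers the amplitude 1.5 and frequency 2.0.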

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on the available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends highly on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian, in which case the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
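The simplest valid choice of importance distribution is the transition prior, which gives the bootstrap particle filter: the weights then reduce to the measurement likelihood. The sketch below runs it on a toy linear-Gaussian model; the model and its parameters are illustrative assumptions, chosen only so the filter's behaviour is easy to check.

```python
import numpy as np

def bootstrap_particle_filter(ys, n=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_k = 0.9 x_{k-1} + N(0, q), y_k = x_k + N(0, r).

    The importance distribution is the transition prior, so the weights
    reduce to the measurement likelihood; better importance
    distributions improve convergence, as the thesis discusses.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                        # initial particle cloud
    means = []
    for y in ys:
        x = 0.9 * x + rng.normal(0.0, np.sqrt(q), n)   # propagate particles
        w = np.exp(-0.5 * (y - x) ** 2 / r)            # likelihood weights
        w /= w.sum()
        means.append(np.sum(w * x))                    # posterior-mean estimate
        x = rng.choice(x, size=n, p=w)                 # multinomial resampling
    return np.array(means)

# Simulate a trajectory from the same model and filter the noisy observations.
rng = np.random.default_rng(1)
xs, x = [], 0.0
for _ in range(50):
    x = 0.9 * x + rng.normal(0.0, np.sqrt(0.1))
    xs.append(x)
xs = np.array(xs)
ys = xs + rng.normal(0.0, np.sqrt(0.5), 50)
estimates = bootstrap_particle_filter(ys)
```

On this linear-Gaussian toy model the Kalman filter would be exact, which makes it a convenient benchmark: the particle estimates should track the latent states more closely than the raw measurements do.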

Relevance:

30.00%

Publisher:

Abstract:

The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis, a differential evolution classifier with a pool of distances is proposed, demonstrated, and initially evaluated. The differential evolution classifier is a nearest-prototype-vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy on the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest-prototype-vector principle; a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures, together with their optimal parameters, have been found, they are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previous method, the differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
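The training scheme can be sketched in miniature: differential evolution searches for the class prototype vectors, and the distance that yields the best trained accuracy is selected from a small pool. This is a deliberately simplified sketch on invented two-class data; the full method also optimizes the distances' own control parameters and aggregates several distances into a total distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (invented). The full method optimizes prototypes,
# the distance choice from a pool, and the distances' parameters; this
# sketch optimizes prototypes and selects between two fixed distances.
X0 = rng.normal([0.0, 0.0], 0.5, (40, 2))
X1 = rng.normal([3.0, 3.0], 0.5, (40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

# Pool of candidate distance measures (Minkowski with p = 1 and p = 2).
pool = [lambda a, b: np.abs(a - b).sum(-1),
        lambda a, b: np.sqrt(((a - b) ** 2).sum(-1))]

def accuracy(params, dist):
    protos = params.reshape(2, 2)                  # one prototype per class
    d = np.stack([dist(X, p) for p in protos])     # distances to prototypes
    return np.mean(d.argmin(axis=0) == y)

def evolve_prototypes(dist, pop=20, gens=60, F=0.7, CR=0.9):
    """Simplified differential evolution over flattened prototype coordinates."""
    P = rng.uniform(-1, 4, (pop, 4))
    fit = np.array([accuracy(p, dist) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(4) < CR, a + F * (b - c), P[i])
            f = accuracy(trial, dist)
            if f >= fit[i]:                        # greedy selection
                P[i], fit[i] = trial, f
    return fit.max()

# Select the pool distance that yields the best trained accuracy.
best_acc = max(evolve_prototypes(d) for d in pool)
```

On this clearly separable toy data, the evolved prototypes classify essentially perfectly with either distance.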

Relevance:

30.00%

Publisher:

Abstract:

Successful management of rivers requires an understanding of the fluvial processes that govern them. This, in turn, cannot be achieved without a means of quantifying their geomorphology and hydrology and the spatio-temporal interactions between them, that is, their hydromorphology. For a long time, it has been laborious and time-consuming to measure river topography, especially in the submerged part of the channel. The measurement of the flow field has been challenging as well, and hence such measurements have long been sparse in natural environments. Technological advancements in the field of remote sensing in recent years have opened up new possibilities for capturing synoptic information on river environments. This thesis presents new developments in fluvial remote sensing of both topography and water flow. A set of close-range remote sensing methods is employed to eventually construct a high-resolution unified empirical hydromorphological model, that is, the river channel and floodplain topography and the three-dimensional areal flow field. Empirical as well as hydraulic theory-based optical remote sensing methods are tested and evaluated using normal colour aerial photographs and sonar calibration and reference measurements on a rocky-bed sub-Arctic river. The empirical optical bathymetry model is developed further by the introduction of a deep-water radiance parameter estimation algorithm that extends the field of application of the model to shallow streams. The effect of this parameter on the model is also assessed in a study of a sandy-bed sub-Arctic river using close-range high-resolution aerial photography, presenting one of the first examples of fluvial bathymetry modelling from unmanned aerial vehicles (UAVs). Further close-range remote sensing methods are added to complete the topography, integrating the river bed with the floodplain to create a seamless high-resolution topography.
Boat-, cart-, and backpack-based mobile laser scanning (MLS) is used to measure the topography of the dry part of the channel at high resolution and accuracy. Multitemporal MLS is evaluated along with UAV-based photogrammetry against terrestrial laser scanning reference data and merged with UAV-based bathymetry to create a two-year series of seamless digital terrain models. These allow the evaluation of the methodology for conducting high-resolution change analysis of the entire channel. The remote sensing based model of hydromorphology is completed by a new methodology for mapping the flow field in 3D. An acoustic Doppler current profiler (ADCP) is deployed on a remote-controlled boat with a survey-grade global navigation satellite system (GNSS) receiver, allowing the positioning of the areally sampled 3D flow vectors in 3D space as a point cloud; its interpolation into a 3D matrix allows a quantitative volumetric flow analysis. Multitemporal areal 3D flow field data show the evolution of the flow field during a snow-melt flood event. The combination of the underwater and dry topography with the flow field yields a complete model of river hydromorphology at the reach scale.
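The gridding step, turning the GNSS-positioned point cloud of flow vectors into a regular 3D matrix, can be sketched with simple voxel averaging. This is a binning stand-in for the interpolation described above; the coordinates, cell size, and flow values below are invented for illustration.

```python
import numpy as np

def bin_flow_to_grid(points, vectors, origin, cell, shape):
    """Average scattered 3D flow vectors into a regular voxel grid.

    points  : (n, 3) GNSS-positioned ADCP sample locations
    vectors : (n, 3) velocity components at those locations
    Returns a (nx, ny, nz, 3) matrix of mean flow per voxel, NaN where
    no samples fell.
    """
    idx = np.floor((points - origin) / cell).astype(int)
    acc = np.zeros(shape + (3,))
    count = np.zeros(shape)
    for (i, j, k), v in zip(idx, vectors):
        if 0 <= i < shape[0] and 0 <= j < shape[1] and 0 <= k < shape[2]:
            acc[i, j, k] += v
            count[i, j, k] += 1
    with np.errstate(invalid="ignore"):
        return acc / count[..., None]   # empty voxels become NaN

# Synthetic example: uniform 0.8 m/s downstream flow sampled at random points.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, (500, 3))
vecs = np.tile([0.8, 0.0, 0.0], (500, 1))
field = bin_flow_to_grid(pts, vecs, origin=np.zeros(3), cell=2.0, shape=(5, 5, 5))
```

Once the flow field sits in a regular matrix, volumetric quantities such as discharge through a cross-section reduce to sums over voxel faces.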

Relevance:

30.00%

Publisher:

Abstract:

The Saimaa ringed seal is one of the most endangered seals in the world. It is a symbol of Lake Saimaa, and a lot of effort has been devoted to saving it. Traditional methods of seal monitoring include capturing the animals and installing sensors on their bodies. These invasive identification methods can be painful and affect the behavior of the animals. Automatic identification of seals using computer vision provides a more humane method of monitoring. This Master's thesis focuses on automatic image-based identification of the Saimaa ringed seal. This consists of the detection and segmentation of a seal in an image, analysis of its ring patterns, and identification of the detected seal based on the features of the ring patterns. The proposed algorithm is evaluated with a dataset of 131 individual seals. Based on experiments with 363 images, 81% of the images were successfully segmented automatically. Furthermore, a new approach for interactive identification of Saimaa ringed seals is proposed. The results of this research are a starting point for future research on seal photo-identification.
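The identification step can be sketched as nearest-neighbour ranking over ring-pattern feature vectors. The feature extraction itself is assumed here (any fixed-length descriptor of the pelage pattern would fit), and the gallery values and seal identifiers below are invented.

```python
import numpy as np

def identify_seal(query, gallery, ids):
    """Rank known individuals by ring-pattern feature distance.

    `query` is a feature vector extracted from a segmented seal and
    `gallery` holds one vector per known individual. Returning a full
    ranking rather than a single answer supports the interactive
    identification workflow, where an expert verifies the top matches.
    """
    dists = np.linalg.norm(gallery - query, axis=1)
    return [ids[i] for i in np.argsort(dists)]

# Tiny synthetic gallery of three known individuals (invented features).
gallery = np.array([[0.9, 0.1, 0.4],
                    [0.2, 0.8, 0.7],
                    [0.5, 0.5, 0.1]])
ids = ["saimaa-001", "saimaa-002", "saimaa-003"]
ranking = identify_seal(np.array([0.25, 0.75, 0.65]), gallery, ids)
```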

Relevance:

30.00%

Publisher:

Abstract:

The structure and optical properties of thin films based on C60 materials are studied. A reproducible vacuum method is developed for producing thin fullerene films with Cd impurity on Si, glass, and mica surfaces. The surface morphology of the films is investigated by AFM and SEM methods. Ab initio quantum-chemical calculations of the geometry, total energy, and excited energy states of complex fullerene-cadmium telluride supramolecules are performed. Photoluminescence spectra of composite thin films based on C60 were measured before and after X-ray irradiation. The intensity of the additional peaks depends on the charge composition, which is determined by the type of substrate. These results are interpreted as the appearance of dipole-allowed transitions in the fullerene excited singlet-state spectrum caused by interaction with cadmium telluride. The X-ray irradiated films were investigated, and additional peaks in the photoluminescence spectra were detected. These peaks appear as a result of the formation of molecular complexes from the C60-CdTe mixture and dimerization of the films. Density functional B3LYP quantum-chemical calculations for C60CdTe molecular complexes, (C60)2, and C120O dimers were performed to elucidate some of the experimental results.

Relevance:

30.00%

Publisher:

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining them with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper, and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to assure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm.
It was shown that there is no universally optimal algorithm for variant selection in GWAS, but rather methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can result in overly-optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype–phenotype relationships and biological insights from genetic data sets.
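The nested cross-validation idea, letting an inner loop choose the model (here, the number of variants to keep) while outer folds measure generalization, can be sketched on a toy variant matrix. The data, the univariate filter, and the nearest-class-mean classifier below are all assumptions made to keep the sketch self-contained, not the methods of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "genotype" matrix: 200 samples x 50 variants coded 0/1/2, where
# only the first 5 variants carry signal for the binary phenotype.
X = rng.integers(0, 3, (200, 50)).astype(float)
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 200) > 5).astype(int)

def score_filter(Xtr, ytr):
    """Univariate filter: absolute between-class mean genotype difference."""
    return np.abs(Xtr[ytr == 1].mean(axis=0) - Xtr[ytr == 0].mean(axis=0))

def fit_predict(Xtr, ytr, Xte, keep):
    """Select `keep` top-ranked variants, then classify by nearest class mean."""
    sel = np.argsort(score_filter(Xtr, ytr))[::-1][:keep]
    mu1 = Xtr[ytr == 1][:, sel].mean(axis=0)
    mu0 = Xtr[ytr == 0][:, sel].mean(axis=0)
    d1 = ((Xte[:, sel] - mu1) ** 2).sum(axis=1)
    d0 = ((Xte[:, sel] - mu0) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Nested cross-validation: the inner split picks `keep`; the outer folds
# measure generalization, so model selection never sees the test fold.
folds = np.array_split(rng.permutation(200), 5)
outer_acc = []
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    best_acc, best_k = -1.0, 1
    for k in (2, 5, 10, 25):
        half = len(train_idx) // 2          # a single inner split, for brevity
        tr, va = train_idx[:half], train_idx[half:]
        acc = np.mean(fit_predict(X[tr], y[tr], X[va], k) == y[va])
        if acc > best_acc:
            best_acc, best_k = acc, k
    pred = fit_predict(X[train_idx], y[train_idx], X[test_idx], best_k)
    outer_acc.append(np.mean(pred == y[test_idx]))
mean_acc = float(np.mean(outer_acc))
```

Because the inner selection never touches the outer test fold, `mean_acc` is an honest estimate of generalization, in contrast to the overly optimistic accuracies that unvalidated selection produces.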

Relevance:

30.00%

Publisher:

Abstract:

The human striatum is a heterogeneous structure representing a major part of the dopamine (DA) system's basal ganglia input and output. Positron emission tomography (PET) is a powerful tool for imaging DA neurotransmission. However, PET measurements suffer from bias caused by the low spatial resolution, especially when imaging small, D2/3-rich structures such as the ventral striatum (VST). The brain-dedicated high-resolution PET scanner, the ECAT HRRT (Siemens Medical Solutions, Knoxville, TN, USA), has superior resolution capabilities compared with its predecessors. In the quantification of striatal D2/3 binding, the highly selective D2/3 antagonist [11C]raclopride is recognized as a well-validated in vivo tracer. The aim of this thesis was to use a traditional test-retest setting to evaluate the feasibility of utilizing the HRRT scanner for exploring not only small brain regions such as the VST but also low-density D2/3 areas such as the cortex. It was demonstrated that the measurement of striatal D2/3 binding was very reliable, even when studying small brain structures or prolonging the scanning interval. Furthermore, the cortical test-retest parameters displayed good to moderate reproducibility. For the first time in vivo, it was revealed that there are significant divergent rostrocaudal gradients of [11C]raclopride binding in striatal subregions. These results indicate that high-resolution [11C]raclopride PET is very reliable, and its improved sensitivity means that it should be possible to detect the often very subtle changes occurring in DA transmission. Another major advantage is the possibility of measuring striatal and cortical areas simultaneously. The divergent gradients of D2/3 binding may have functional significance, and the average binding distribution could serve as the basis for a future database. Key words: dopamine, PET, HRRT, [11C]raclopride, striatum, VST, gradients, test-retest.
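Test-retest reliability of a binding measure is commonly summarized by the absolute variability between the two scans and by an intraclass correlation coefficient (ICC). The sketch below computes both, using a one-way ICC for two repeated measurements; the binding potential values are invented numbers, not the thesis data.

```python
import numpy as np

# Hypothetical test-retest binding potential values for ten subjects
# (illustrative numbers only).
t1 = np.array([2.9, 3.1, 2.7, 3.4, 3.0, 2.8, 3.3, 2.6, 3.2, 3.0])
t2 = np.array([3.0, 3.0, 2.8, 3.3, 3.1, 2.7, 3.4, 2.7, 3.1, 2.9])

# Absolute variability: mean |test - retest| relative to the pair mean, in %.
abs_var = 100 * np.mean(np.abs(t1 - t2) / ((t1 + t2) / 2))

# One-way intraclass correlation for k = 2 repeated measurements:
# ICC = (MSB - MSW) / (MSB + MSW).
pairs = np.stack([t1, t2])                       # shape (2, n)
msb = 2 * np.var(pairs.mean(axis=0), ddof=1)     # between-subject mean square
msw = np.mean(np.var(pairs, axis=0, ddof=1))     # within-subject mean square
icc = (msb - msw) / (msb + msw)
```

Small absolute variability together with an ICC close to one is what "very reliable" means in quantitative terms here.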

Relevance:

30.00%

Publisher:

Abstract:

Wind power is a rapidly developing, low-emission form of energy production. In Finland, the official objective is to increase wind power capacity from the current 1 005 MW up to 3 500–4 000 MW by 2025. By the end of April 2015, the total capacity of all wind power projects being planned in Finland had surpassed 11 000 MW. As the number of projects in Finland is record high, an increasing amount of infrastructure is also being planned and constructed. Traditionally, these planning operations are conducted using manual and labor-intensive work methods that are prone to subjectivity. This study introduces a GIS-based methodology for determining optimal paths to support the planning of onshore wind park infrastructure alignment in the Nordanå-Lövböle wind park located on the island of Kemiönsaari in Southwest Finland. The presented methodology utilizes a least-cost path (LCP) algorithm to search for optimal paths within a high-resolution real-world terrain dataset derived from airborne lidar scanning. In addition, planning data are used to provide a realistic planning framework for the analysis. In order to produce realistic results, the physiographic and planning datasets are standardized and weighted according to qualitative suitability assessments utilizing methods and practices offered by multi-criteria evaluation (MCE). The results are presented as scenarios corresponding to various planning objectives. Finally, the methodology is documented using tools of Business Process Management (BPM). The results show that the presented methodology can be effectively used to search for and identify extensive, 20 to 35 kilometer long networks of paths that correspond to certain optimization objectives in the study area. The utilization of high-resolution terrain data produces a more objective and more detailed path alignment plan.
This study demonstrates that the presented methodology can be practically applied to support a wind power infrastructure alignment planning process. The six-phase structure of the methodology allows straightforward incorporation of different optimization objectives. The methodology responds well to combining quantitative and qualitative data. Additionally, the careful documentation presents an example of how the methodology can be evaluated and developed as a business process. This thesis also shows that more emphasis on the research of algorithm-based, more objective methods for the planning of infrastructure alignment is desirable, as technological development has only recently started to realize the potential of these computational methods.
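A least-cost path search over a weighted cost raster is, at its core, Dijkstra's algorithm on the grid graph. The sketch below uses a tiny 4-connected raster with an entry-cost model; the cost values are invented stand-ins for the standardized and weighted MCE suitability layers.

```python
import heapq

# Hypothetical cost raster: traversal cost per cell, combining weighted
# terrain and planning criteria (higher = less suitable). Invented values.
raster = [
    [1, 1, 5, 5, 1],
    [1, 9, 9, 5, 1],
    [1, 1, 1, 9, 1],
    [5, 5, 1, 1, 1],
    [5, 5, 5, 1, 1],
]

def least_cost_path(raster, start, goal):
    """Dijkstra search over a 4-connected cost raster (entry-cost model)."""
    rows, cols = len(raster), len(raster[0])
    dist = {start: raster[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + raster[nr][nc]   # cost of entering the neighbour
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

path, cost = least_cost_path(raster, (0, 0), (4, 4))
```

The returned path threads through the low-cost cells while avoiding the high-cost (unsuitable) ones, which is exactly the behaviour GIS LCP tools scale up to lidar-derived rasters.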

Relevance:

30.00%

Publisher:

Abstract:

The future of paying in the age of digitalization is a topic that includes varied visions. This master's thesis explores images of the future of paying in the Single Euro Payments Area (SEPA) up to 2020 and 2025 through the views of experts specialized in payments. The study was commissioned by a credit management company in order to obtain more detailed information about the future of paying. Specifically, the thesis investigates what could be the most used payment methods in the future, what items could work as a medium of exchange in 2020, and how they will evolve towards the year 2025. Changing consumer behavior, trends connected to payment methods, and the security and privacy issues of new cashless payment methods were also part of the study. In the empirical part of the study, the experts' ideas about probable and preferable future images of paying were investigated through a two-round Disaggregative Delphi method. The questionnaire included numeric statements and open questions. Three alternative future images were created with the help of cluster analysis: "Unsurprising Future", "Technology Driven Future", and "The Age of the Customer". The plausible images had similarities and differences, which were compared with previous studies in the literature review. The study's findings were formed based on the similarities of the future images and on the answers to the open questions received in the questionnaire. The main conclusion of the study was that the development of technology will both unify and diversify SEPA; the trend in 2020 seems to be towards more cashless payment methods, but their usage depends on countries' financial possibilities and customer preferences. Mobile payments, cards, and cash will be the main payment methods, but the banks will face competitors from outside the financial sector. Wearable payment methods and NFC technology are seen as widely growing trends, but subcutaneous payment devices will likely keep their niche position until 2025.
In the meantime, security and privacy issues are expected to increase because of identity thefts and various frauds. Simultaneously, privacy will lose its meaning to younger consumers, who are used to sharing their transaction and personal data with third parties in order to get access to attractive services. Easier access to consumers' transaction data will probably open the door for hackers and cause new risks in payment processes. There exist many roads to the future, and this study was not an attempt to give any complete answers, even if some plausible assumptions about the future's course were provided.
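The cluster-analysis step that produced the three future images can be sketched with plain k-means on numeric questionnaire responses. The synthetic panel below (30 respondents rating 4 statements on a 1-7 scale, drawn around three invented profiles) stands in for the actual Delphi data, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expert panel generated around three distinct "future image"
# profiles; the profiles and ratings are invented for illustration.
profiles = np.array([[2, 2, 6, 3], [6, 6, 2, 5], [4, 6, 6, 6]], dtype=float)
ratings = np.vstack([p + rng.normal(0, 0.4, (10, 4)) for p in profiles])

def kmeans(X, init, iters=50):
    """Plain Lloyd's algorithm: assign to nearest centroid, recompute means."""
    centers = X[init].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(len(init))])
    return labels, centers

# Seeding one respondent per expected group keeps the sketch deterministic.
labels, centers = kmeans(ratings, init=[0, 10, 20])
```

Each recovered cluster centroid is then interpreted qualitatively, which is how named images such as "Unsurprising Future" arise from the numeric statements.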