964 results for Ephemeral Computation


Relevance:

10.00%

Publisher:

Abstract:

A comparative study of the Fourier transform (FT) and the wavelet transform (WT) for instrumental signal denoising is presented. The basic principles of wavelet theory are described in a succinct and simplified manner. For illustration, FT and WT are used to filter UV-VIS and plasma emission spectra, using MATLAB software for computation. Results show that FT and WT filters are comparable when the signal does not display sharp peaks (UV-VIS spectra), but the WT yields better filtering when the filling factor of the signal is small (plasma spectra), since it introduces little peak distortion.
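The wavelet denoising described above can be sketched with a multilevel Haar transform plus soft thresholding of the detail coefficients. This is an illustrative pure-Python sketch, not the MATLAB code used in the study; the Haar wavelet, the threshold value, and the power-of-two signal length are assumptions for the example.

```python
import math

def haar_forward(x):
    # Full multilevel Haar decomposition (signal length must be a power of two).
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        s = [(approx[i] + approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        d = [(approx[i] - approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx), 2)]
        coeffs.append(d)
        approx = s
    return approx, coeffs

def haar_inverse(approx, coeffs):
    # Undo the decomposition level by level, coarsest detail first.
    a = list(approx)
    for d in reversed(coeffs):
        nxt = []
        for s_i, d_i in zip(a, d):
            nxt.append((s_i + d_i) / math.sqrt(2))
            nxt.append((s_i - d_i) / math.sqrt(2))
        a = nxt
    return a

def denoise(x, threshold):
    # Soft-threshold the detail coefficients, keep the approximation untouched.
    approx, coeffs = haar_forward(x)
    shrunk = [[math.copysign(max(abs(c) - threshold, 0.0), c) for c in d] for d in coeffs]
    return haar_inverse(approx, shrunk)
```

Because sharp peaks concentrate into a few large wavelet coefficients, thresholding suppresses noise while distorting peaks little, which is the behaviour the abstract reports for plasma spectra.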

Relevance:

10.00%

Publisher:

Abstract:

This paper focuses on cooperative games with transferable utility. We propose the computation of two solutions: the Shapley value for n agents and the nucleolus for a maximum of four agents. The current approach also covers conflicting claims problems, a particular case of coalitional games. We provide the computation of the most well-known and widely used claims solutions: the proportional, the constrained equal awards, the constrained equal losses, the Talmud, and the random arrival rules.

Keywords: cooperative game, Shapley value, nucleolus, claims problem, claims rule, bankruptcy.
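As an illustration of the first of these computations, the Shapley value of a small game can be obtained exactly by averaging each player's marginal contribution over all orderings of the players. This is a generic textbook implementation, not the paper's; the glove game used below is a standard example, not taken from the paper.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Exact Shapley value: average marginal contributions over all player orderings."""
    n = len(players)
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            # Marginal contribution of p when joining the current coalition.
            value[p] += v(with_p) - v(coalition)
            coalition = with_p
    for p in players:
        value[p] /= factorial(n)
    return value
```

For the three-player glove game v(S) = min(|S ∩ {1}|, |S ∩ {2, 3}|), this yields 2/3 for player 1 and 1/6 for each of players 2 and 3, and the values sum to v of the grand coalition (efficiency). The factorial-time enumeration is only viable for small n, which is consistent with the paper restricting the nucleolus to at most four agents.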

Relevance:

10.00%

Publisher:

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
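The reduction of preference learning to regularized least squares that the thesis builds on can be caricatured in a few lines: each observed preference a > b becomes a regression example on the feature difference φ(a) − φ(b) with target +1. This is a minimal linear-kernel sketch with two hand-picked features, not the thesis's kernel-based ranking algorithms.

```python
def fit_pairwise_ridge(pairs, lam=1.0):
    """Regularized least squares on pairwise feature differences (2 features).

    Each pair (fa, fb) encodes the preference fa > fb; the model learns a
    scoring weight vector w so that w . fa > w . fb.
    """
    # Normal equations (D^T D + lam * I) w = D^T y with target y = +1 per pair.
    A = [[lam, 0.0], [0.0, lam]]
    b = [0.0, 0.0]
    for fa, fb in pairs:
        d = (fa[0] - fb[0], fa[1] - fb[1])
        A[0][0] += d[0] * d[0]; A[0][1] += d[0] * d[1]
        A[1][0] += d[1] * d[0]; A[1][1] += d[1] * d[1]
        b[0] += d[0]; b[1] += d[1]
    # Solve the 2x2 system by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

def score(w, f):
    # Linear scoring function; items are ranked by decreasing score.
    return w[0] * f[0] + w[1] * f[1]
```

Swapping the explicit dot product for a kernel evaluation gives the non-linear variants, and it is the cost of those kernel computations that motivates the linearly scaling and sparse approximations described above.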

Relevance:

10.00%

Publisher:

Abstract:

The proposed transdisciplinary field of 'complexics' would bring together all contemporary efforts in any specific disciplines or by any researchers specifically devoted to constructing tools, procedures, models and concepts intended for transversal application that are aimed at understanding and explaining the most interwoven and dynamic phenomena of reality. Our aim needs to be, as Morin says, not "to reduce complexity to simplicity, [but] to translate complexity into theory".

New tools for the conception, apprehension and treatment of the data of experience will need to be devised to complement existing ones and to enable us to make headway toward practices that better fit complexic theories. New mathematical and computational contributions have already continued to grow in number, thanks primarily to scholars in statistical physics and computer science, who are now taking an interest in social and economic phenomena.

Certainly, these methodological innovations put into question and again make us take note of the excessive separation between the training received by researchers in the 'sciences' and in the 'arts'. Closer collaboration between these two subsets would, in all likelihood, be much more energising and creative than their current mutual distance. Human complexics must be seen as multi-methodological, insofar as necessary combining quantitative-computational methodologies and more qualitative methodologies aimed at understanding the mental and emotional world of people.

In the final analysis, however, models always have a narrative running behind them that reflects the attempts of a human being to understand the world, and models are always interpreted on that basis.

Relevance:

10.00%

Publisher:

Abstract:

In this paper a methodology for the computation of Raman scattering cross-sections and depolarization ratios within the Placzek polarizability theory is described. The polarizability gradients are derived from the values of the dynamic polarizabilities computed at the excitation frequencies using ab initio linear response theory. A sample application of the computational program, at the HF, MP2, and CCSD levels of theory, is presented for H2O and NH3. The results show that highly correlated levels of theory are needed to achieve good agreement with experimental data.
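Once the polarizability derivatives are in hand, the depolarization ratio follows from the two tensor invariants: the mean derivative a and the anisotropy γ, with ρ = 3γ² / (45a² + 4γ²) for linearly polarized excitation. This is a generic sketch of that final step, not the authors' program; the tensor values below are made up for illustration.

```python
def depolarization_ratio(alpha):
    """Depolarization ratio from a 3x3 symmetric polarizability-derivative tensor."""
    # Mean polarizability derivative: a = (a_xx + a_yy + a_zz) / 3.
    a_mean = (alpha[0][0] + alpha[1][1] + alpha[2][2]) / 3.0
    # Anisotropy invariant gamma^2 from diagonal differences and off-diagonals.
    gamma2 = 0.5 * ((alpha[0][0] - alpha[1][1]) ** 2
                    + (alpha[1][1] - alpha[2][2]) ** 2
                    + (alpha[2][2] - alpha[0][0]) ** 2) \
             + 3.0 * (alpha[0][1] ** 2 + alpha[1][2] ** 2 + alpha[0][2] ** 2)
    return 3.0 * gamma2 / (45.0 * a_mean ** 2 + 4.0 * gamma2)
```

An isotropic derivative tensor gives ρ = 0 (fully polarized band), while a traceless tensor gives the limiting value ρ = 0.75 (fully depolarized band).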

Relevance:

10.00%

Publisher:

Abstract:

This paper re-examines the null of stationarity of the real exchange rate for a panel of seventeen developed OECD countries during the post-Bretton Woods era. Our analysis simultaneously considers both the presence of cross-section dependence and multiple structural breaks, which have not received much attention in previous panel methods for long-run PPP. Empirical results indicate that there is little evidence in favor of the PPP hypothesis when the analysis does not account for structural breaks. This conclusion is reversed when structural breaks are considered in the computation of the panel statistics. We also compute point estimates of the half-life separately for the idiosyncratic and common-factor components and find that it is always below one year.
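The half-life reported in such studies is conventionally derived from the estimated AR(1) persistence coefficient ρ of the real exchange rate deviation, as ln(0.5) / ln(ρ). A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
import math

def half_life(rho):
    """Periods for a deviation to decay by half under an AR(1) with coefficient rho."""
    if not 0.0 < rho < 1.0:
        raise ValueError("half-life is defined for 0 < rho < 1")
    # After h periods a unit shock has decayed to rho**h; solve rho**h = 0.5.
    return math.log(0.5) / math.log(rho)
```

For example, with monthly data, ρ = 0.9 implies a half-life of about 6.6 months, comfortably below the one-year bound the abstract reports.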

Relevance:

10.00%

Publisher:

Abstract:

Current technology trends in the medical device industry call for the fabrication of massive arrays of microfeatures, such as microchannels, onto non-silicon material substrates with high accuracy, superior precision, and high throughput. Microchannels are typical features used in medical devices for medication dosing into the human body and for analyzing DNA arrays or cell cultures. In this study, the capabilities of machining systems for micro-end milling have been evaluated by conducting experiments, regression modeling, and response surface methodology. In the machining experiments, arrays of microchannels are fabricated on aluminium and titanium plates using micromilling, and the feature size and accuracy (width and depth) and surface roughness are measured. Multicriteria decision making for material and process parameter selection for the desired accuracy is investigated using the particle swarm optimization (PSO) method, an evolutionary computation method inspired by genetic algorithms (GA). Appropriate regression models are utilized within the PSO, and optimum selection of micromilling parameters for microchannel feature accuracy and surface roughness is performed. An analysis of optimal micromachining parameters in the decision variable space is also conducted. This study demonstrates the advantages of evolutionary computing algorithms in micromilling decision making and process optimization investigations, and it can be expanded to other applications.
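A minimal PSO loop of the kind used for such parameter selection can be sketched as follows. This is a generic textbook PSO, not the study's implementation; the objective function, bounds, and coefficient values are assumptions, and in the study the objective would be a fitted regression model of feature accuracy or roughness.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a scalar objective over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position so far
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Minimizing a regression surrogate this way avoids any gradient computation, which is why evolutionary methods suit response-surface models fitted from experiments.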

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we report a preliminary analysis of the impact of Global Navigation Satellite System Reflections (GNSS-R) data on ionospheric monitoring over the oceans. The focus is on a single polar Low Earth Orbiter (LEO) mission exploiting GNSS-R as well as navigation (GNSS-N) and occultation (GNSS-O) total electron content (TEC) measurements. In order to assess the impact of the data, we have simulated GNSS-R/O/N TEC data as would be measured from the LEO and from International Geodesic Service (IGS) ground stations, with an electron density (ED) field generated using a climatic ionospheric model. We have also developed a new tomographic approach inspired by the physics of the hydrogen atom and used it to effectively retrieve the ED field from the simulated TEC data near the orbital plane. The tomographic inversion results demonstrate the significant impact of GNSS-R: three-dimensional ionospheric ED fields are retrieved over the oceans quite accurately, even though, in the spirit of this initial study, the simulation and inversion approaches avoided intensive computation and sophisticated algorithmic elements (such as spatio-temporal smoothing). We conclude that GNSS-R data over the oceans can contribute significantly to a Global/GNSS Ionospheric Observation System (GIOS).

Index terms: Global Navigation Satellite System (GNSS), Global Navigation Satellite System Reflections (GNSS-R), ionosphere, Low Earth Orbiter (LEO), tomography.

Relevance:

10.00%

Publisher:

Abstract:

Compositional data (concentrations) are common in the geosciences, and neglecting their character may lead to erroneous conclusions. Spurious correlation (K. Pearson, 1897) has disastrous consequences. On the basis of the pioneering work by J. Aitchison in the 1980s, a methodology free of these drawbacks is now available. The geometry of the simplex allows the representation of compositions using orthogonal coordinates, to which the usual statistical methods can be applied, thus facilitating computation and analysis. The use of (log-)ratios precludes the interpretation of single concentrations in disregard of their relative character. A hydro-chemical data set is used to illustrate the point.
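The log-ratio methodology can be illustrated with the centered log-ratio (clr) transform, which maps a composition to real coordinates that sum to zero and that are invariant under closure (rescaling the parts to a constant sum). This is a minimal generic sketch, not tied to the paper's hydro-chemical data set.

```python
import math

def closure(parts, total=1.0):
    """Rescale positive parts so they sum to `total` (closure onto the simplex)."""
    s = sum(parts)
    return [total * p / s for p in parts]

def clr(composition):
    """Centered log-ratio transform; all parts must be strictly positive."""
    logs = [math.log(x) for x in composition]
    mean = sum(logs) / len(logs)
    # Subtracting the log geometric mean makes the result scale-invariant.
    return [l - mean for l in logs]
```

Because clr depends only on ratios between parts, reporting concentrations in percent, ppm, or mg/L leaves the transformed coordinates unchanged, which is exactly the relative character the abstract emphasizes.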

Relevance:

10.00%

Publisher:

Abstract:

The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested with a couple of case studies and a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of parameters was also studied. The accuracy used did not seem to have a notable effect on the identifiability of the parameters. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied. It would be computationally most efficient to use the same posterior distribution among different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same among the different geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing-surface heat exchangers the numerical stability of the simulation model was studied, and as a result a stable algorithm was developed.
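The MCMC machinery referred to above can be sketched with a generic one-dimensional random-walk Metropolis sampler; this is the basic building block only, not the thesis's two-step method, and the standard-normal target used in the example is an assumption for illustration.

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)      # symmetric Gaussian proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio), in log space.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

Drawn against an expensive heat exchanger simulator, each log-posterior evaluation is costly, which is why reducing the number of full-model evaluations (as the two-step method does) pays off directly in computation time.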

Relevance:

10.00%

Publisher:

Abstract:

Standard indirect inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n, and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
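The moment-matching step of standard II, matching the observed statistic Z_n against its simulated counterpart, can be caricatured with a grid search using common random numbers across parameter values. This toy example (a normal location model with the sample mean as Z_n) is an assumption for illustration, not the paper's auction or DSGE model, and it shows the baseline II estimator rather than the proposed IL estimators.

```python
import random

def simulate(theta, n, rng):
    """Toy structural model: n i.i.d. draws from Normal(theta, 1)."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def indirect_inference(z_obs, n, grid, n_sim=50, seed=0):
    """Pick theta whose simulated statistic best matches the observed one."""
    best_theta, best_loss = None, float("inf")
    for theta in grid:
        rng = random.Random(seed)   # common random numbers across theta values
        sims = [sum(simulate(theta, n, rng)) / n for _ in range(n_sim)]
        z_sim = sum(sims) / n_sim   # model-implied counterpart of Z_n
        loss = (z_obs - z_sim) ** 2
        if loss < best_loss:
            best_theta, best_loss = theta, loss
    return best_theta
```

The IL estimators replace the squared distance above with the (simulated) likelihood of Z_n itself, thereby using the full distribution of the statistic rather than only its mean.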

Relevance:

10.00%

Publisher:

Abstract:

The design methods and languages targeted at modern System-on-Chip designs are facing tremendous pressure from ever-increasing complexity, power, and speed requirements. In estimating any of these three metrics, there is a trade-off between accuracy and the level of abstraction at which a system under design is analyzed. The more detailed the description, the more accurate the simulation will be, but, on the other hand, the more time-consuming it will be. Moreover, a designer wants to make decisions as early as possible in the design flow to avoid costly design backtracking. To answer the challenges posed by System-on-Chip designs, this thesis introduces a formal, power-aware framework, its development methods, and methods to constrain and analyze the power consumption of the system under design. The thesis discusses the power analysis of synchronous and asynchronous systems, including the communication aspects of these systems. The presented framework is built upon the Timed Action System formalism, which offers an environment in which to analyze and constrain the functional and temporal behavior of a system at a high abstraction level. Furthermore, due to the complexity of System-on-Chip designs, the ability to abstract away unnecessary implementation details at higher abstraction levels is an essential part of the introduced design framework. The encapsulation and abstraction techniques, incorporated with the procedure-based communication, allow a designer to use the presented power-aware framework in modeling these large-scale systems. The introduced techniques also enable one to subdivide the development of communication and computation into separate tasks; this property is taken into account in the power analysis part as well. Furthermore, the presented framework is developed in a way that it can be used throughout the design project.
In other words, a designer is able to model and analyze systems from an abstract specification down to an implementable specification.

Relevance:

10.00%

Publisher:

Abstract:

Metaheuristic methods have become increasingly popular approaches to solving global optimization problems. From a practical viewpoint, it is often desirable to perform multimodal optimization, which enables the search for more than one optimal solution to the task at hand. Population-based metaheuristic methods offer a natural basis for multimodal optimization, and the topic has received increasing interest, especially in the evolutionary computation community. Several niching approaches have been suggested to allow multimodal optimization using evolutionary algorithms. Most global optimization approaches, including metaheuristics, contain global and local search phases. The requirement to locate several optima sets additional requirements for the design of algorithms that are to be effective in both respects in the context of multimodal optimization. In this thesis, several different multimodal optimization algorithms are studied with regard to how their implementation of the global and local search phases affects their performance on different problems. The study concentrates especially on variations of the Differential Evolution algorithm and their capabilities in multimodal optimization. To separate the global and local search phases, three multimodal optimization algorithms are proposed, two of which hybridize Differential Evolution with a local search method. As the theoretical background behind the operation of metaheuristics is not generally thoroughly understood, the research relies heavily on experimental studies to determine the properties of the different approaches. To achieve reliable experimental information, the experimental environment must be carefully chosen to contain appropriate and adequately varying problems. The available selection of multimodal test problems is, however, rather limited, and no general framework exists.
As a part of this thesis, such a framework for generating tunable test functions for evaluating different methods of multimodal optimization experimentally is provided and used for testing the algorithms. The results demonstrate that an efficient local phase is essential for creating efficient multimodal optimization algorithms. Adding a suitable global phase has the potential to boost performance significantly, but a weak local phase may invalidate the advantages gained from the global phase.
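The baseline that the thesis's variants build on is the classic DE/rand/1/bin scheme, which can be sketched as follows. This is generic textbook Differential Evolution with assumed parameter values, not the thesis's algorithms; the niching and hybrid local-search extensions are not shown.

```python
import random

def differential_evolution(f, bounds, pop_size=20, iters=150, F=0.8, CR=0.9, seed=0):
    """DE/rand/1/bin: a minimal Differential Evolution minimizer."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct population members other than i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)      # ensure at least one mutated gene
            trial = pop[i][:]
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    # Differential mutation: base vector plus scaled difference.
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial[j] = min(max(v, bounds[j][0]), bounds[j][1])
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

In this form the whole population converges toward a single optimum; niching modifies the selection or the choice of a, b, c so that subpopulations can settle on several optima at once.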

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modeling the subjective human experience of print quality by measuring the image with certain metrics. After presenting the user interfaces, methods for reducing the computation time of several of the metrics and of the image registration process required to compute them are described, together with details of their performance. The weighted-sample method for the image registration process was able to significantly decrease the calculation times, at the cost of some error. The random sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
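The random sampling idea, estimating a full-image metric from a random subset of pixels instead of all of them, can be sketched generically as follows; the toy metric here (the mean pixel value) is an assumption standing in for the DigiQ metrics, which are not specified in the abstract.

```python
import random

def sampled_mean(pixels, fraction=0.1, seed=0):
    """Estimate the mean pixel value from a random sample of the image.

    Visiting only `fraction` of the pixels cuts the metric's cost roughly
    by that factor, at the price of a small sampling error.
    """
    rng = random.Random(seed)
    k = max(1, int(len(pixels) * fraction))
    sample = rng.sample(pixels, k)
    return sum(sample) / k
```

Whether such a speed-up preserves accuracy depends on the metric: a pixel-wise average tolerates sampling well, whereas metrics sensitive to spatial structure may not, which is consistent with the method working for only two of the DigiQ metrics.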