908 results for Multidimensional matching


Relevance: 70.00%

Abstract:

We construct a frictionless matching model of the marriage market in which women have bidimensional attributes, one continuous (income) and the other dichotomous (home ability). Equilibrium in the marriage market determines the intrahousehold allocation of resources and female labor participation. Our model predicts partial non-assortative matching, with rich men marrying women with low income but high home ability. We then perform numerical exercises to evaluate the impact of income taxes on individual welfare, and find considerable divergence between the short-run and long-run female labor participation responses to taxes.
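As a loose illustration of how partial non-assortative matching can arise, the sketch below solves a small transferable-utility assignment problem in which home ability enters the match surplus. The surplus function, the weight GAMMA, and the attribute draws are all hypothetical, not those of the paper.

```python
# Illustrative sketch (not the paper's model): when home ability can
# substitute for a wife's income in the match surplus, the
# surplus-maximizing matching is only partially assortative on income.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 8
men_income = np.sort(rng.uniform(1.0, 5.0, n))      # continuous attribute
women_income = np.sort(rng.uniform(1.0, 5.0, n))    # continuous attribute
women_home = rng.integers(0, 2, n)                  # dichotomous home ability

GAMMA = 3.0  # hypothetical weight of home ability in the surplus

# Surplus of a match between man i and woman j.
surplus = men_income[:, None] * (women_income[None, :]
                                 + GAMMA * women_home[None, :])

# linear_sum_assignment minimizes cost, so negate to maximize total surplus.
men, women = linear_sum_assignment(-surplus)
for i, j in zip(men, women):
    print(f"man income {men_income[i]:.2f} -> "
          f"woman income {women_income[j]:.2f}, home ability {women_home[j]}")
```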

Relevance: 30.00%

Abstract:

Based on an algorithm for pattern matching in character strings, we implement a pattern-matching machine that searches for occurrences of patterns in multidimensional time series. Before the search takes place, the time series are encoded into user-designed alphabets. The patterns, in turn, are formulated as regular expressions composed of letters from these alphabets and operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange-rate volatility in training samples retain statistically significant predictive power in validation samples.
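A minimal sketch of the encode-then-match idea, reduced to one dimension: observations are binned into a hypothetical three-letter alphabet and the pattern is an ordinary regular expression over that alphabet. In the paper each dimension of the time series gets its own user-designed alphabet, and the patterns themselves are bred by a genetic algorithm, which is omitted here.

```python
import re
import numpy as np

def encode(series, edges=(-0.5, 0.5), letters="abc"):
    """Map each observation to a letter by binning: a = low, b = mid, c = high."""
    return "".join(letters[int(np.digitize(x, edges))] for x in series)

returns = np.array([0.1, 0.9, 1.2, -0.8, -1.1, 0.2, 0.7, 1.5])
encoded = encode(returns)                  # -> "bccaabcc"

# A regular expression over the alphabet: a run of two or more "high"
# observations immediately followed by a "low" one.
pattern = re.compile(r"c{2,}a")
for m in pattern.finditer(encoded):
    print(f"match at positions {m.start()}-{m.end() - 1}: {m.group()}")
```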

Relevance: 30.00%

Abstract:

Technology scaling increasingly emphasizes the complexity and non-ideality of the electrical behavior of semiconductor devices and boosts interest in alternatives to the conventional planar MOSFET architecture. TCAD simulation tools are fundamental to the analysis and development of new technology generations. However, the increasing device complexity is reflected in an augmented dimensionality of the problems to be solved. The trade-off between accuracy and computational cost of the simulation is especially influenced by domain discretization: mesh generation is therefore one of the most critical steps, and automatic approaches are sought. Moreover, the problem size is further increased by process variations, calling for a statistical representation of the single device through an ensemble of microscopically different instances. The aim of this thesis is to present multi-disciplinary approaches to handling this increasing problem dimensionality from a numerical simulation perspective.

The topic of mesh generation is tackled by presenting a new Wavelet-based Adaptive Method (WAM) for the automatic refinement of 2D and 3D domain discretizations. Multiresolution techniques and efficient signal-processing algorithms are exploited to increase grid resolution in the domain regions where relevant physical phenomena take place. Moreover, the grid is dynamically adapted to follow solution changes produced by bias variations, and quality criteria are imposed on the produced meshes.

The further dimensionality increase due to variability in extremely scaled devices is considered with reference to two increasingly critical phenomena, namely line-edge roughness (LER) and random dopant fluctuations (RD). The impact of such phenomena on FinFET devices, which represent a promising alternative to planar CMOS technology, is estimated through 2D and 3D TCAD simulations and statistical tools, taking into account the matching performance of single devices as well as of basic circuit blocks such as SRAMs. Several process options are compared, including resist- and spacer-defined fin patterning as well as different doping-profile definitions. Combining statistical simulations with experimental data, the potentialities and shortcomings of the FinFET architecture are analyzed and useful design guidelines are provided, boosting the feasibility of this technology for mainstream applications in sub-45 nm generation integrated circuits.
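As a rough one-dimensional illustration of the wavelet-based refinement criterion (WAM itself targets 2D/3D meshes with full multiresolution machinery; the Haar transform and the threshold below are simplifying assumptions): detail coefficients are large where the sampled solution varies rapidly, and those are the cells flagged for refinement.

```python
import numpy as np

def haar_details(u):
    """One-level Haar detail coefficients of a sampled solution."""
    u = u[: len(u) // 2 * 2]               # truncate to even length
    return (u[0::2] - u[1::2]) / np.sqrt(2.0)

def cells_to_refine(u, threshold=0.05):
    """Flag cell pairs whose detail coefficient exceeds the threshold,
    i.e. regions where the solution changes rapidly."""
    return np.flatnonzero(np.abs(haar_details(u)) > threshold)

x = np.linspace(0.0, 1.0, 64)
potential = np.tanh((x - 0.5) / 0.02)      # sharp transition, e.g. a junction
print(cells_to_refine(potential))          # indices cluster around x = 0.5
```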

Relevance: 30.00%

Abstract:

Researchers frequently have to analyze scales in which some participants have failed to respond to some items. In this paper we focus on the exploratory factor analysis of multidimensional scales (i.e., scales that consist of a number of subscales), where each subscale is made up of a number of Likert-type items and the aim of the analysis is to estimate participants' scores on the corresponding latent traits. We propose a new approach to dealing with missing responses in this situation, based on (1) multiple imputation of non-responses and (2) simultaneous rotation of the imputed datasets. We applied the approach to a real dataset, in which missing responses were artificially introduced following a real pattern of non-response, and to a simulation study based on artificial datasets. The results show that our approach (specifically, Hot-Deck multiple imputation followed by Consensus Promin rotation) was able to successfully compute factor score estimates even for participants with missing data.
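A minimal sketch of nearest-neighbour Hot-Deck multiple imputation for Likert items; the distance metric, donor-pool size, and donor rule are simplifying assumptions, and the Consensus Promin rotation step applied afterwards is not shown.

```python
import numpy as np

def hot_deck_impute(X, n_imputations=5, n_donors=5, rng=None):
    """X: respondents x items matrix with np.nan marking non-responses.
    For each incomplete respondent, copy the missing values from a donor
    drawn among the closest complete respondents (on observed items)."""
    rng = rng or np.random.default_rng()
    complete = X[~np.isnan(X).any(axis=1)]
    datasets = []
    for _ in range(n_imputations):
        Xi = X.copy()
        for row in Xi:                     # rows are views: edits stick
            miss = np.isnan(row)
            if not miss.any():
                continue
            dist = ((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1)
            donors = np.argsort(dist)[:n_donors]
            row[miss] = complete[rng.choice(donors)][miss]
        datasets.append(Xi)
    return datasets                        # one completed dataset per imputation
```

Each completed dataset would then be factor-analyzed, with the loadings aligned across imputations by a simultaneous (consensus) rotation so that factor scores can be estimated for every participant.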

Relevance: 30.00%

Abstract:

This study investigated approaches for the profiling of coffee using two multidimensional approaches: (1) a multi-detection process and (2) a multi-separation process employing HPLC. The first approach compared the multi-detection technique of conventional High Performance Liquid Chromatography (HPLC) hyphenated with a single detector (DPPH•, UV-Vis or MS) against a multiplexed mode via HPLC with an Active Flow Technology (AFT) column in Parallel Segmented Flow (PSF) format, with DPPH•, UV-Vis and MS detection running simultaneously. Multiplexed HPLC-PSF enabled the determination of key chemical entities by reducing the data complexity of the sample while obtaining a greater degree of molecule-specific information in a fraction of the time taken by conventional multi-detection processes. DPPH•, UV-Vis and MS (TIC) detection were multiplexed for the analysis of espresso coffee and decaffeinated espresso coffee. Up to 20 DPPH• peaks were detected for each sample and, with direct retention-time peak matching, 70% of the DPPH• peaks gave a UV-Vis response for the espresso coffee and 95% for the decaffeinated espresso coffee. The second approach used a two-dimensional (2D) HPLC system to expand the separation space and separation power for the analysis of coffee, focusing on the resolution and detection of coeluting and overlapping peaks, which is beyond the limits of conventional HPLC for complex samples. The 2D-HPLC analysis resulted in the detection of 176 peaks, and closer observation revealed an additional 17 peaks in a cut section where only one peak was observed in 1D mode.
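The direct retention-time peak matching step can be illustrated with a short sketch; the peak lists and the tolerance window below are hypothetical, not the study's data.

```python
import numpy as np

def match_peaks(rt_a, rt_b, tol=0.05):
    """Return pairs (i, j) where channel-A peak i and channel-B peak j
    elute within `tol` minutes of each other."""
    return [(i, j)
            for i, ta in enumerate(rt_a)
            for j, tb in enumerate(rt_b)
            if abs(ta - tb) <= tol]

dpph_rt = [2.10, 3.45, 5.02, 7.80]      # DPPH* peak retention times (min)
uv_rt = [2.08, 3.47, 6.10, 7.83, 9.20]  # UV-Vis peak retention times (min)

matched = match_peaks(dpph_rt, uv_rt)
print(f"{len(matched)}/{len(dpph_rt)} DPPH* peaks have a UV-Vis response")
```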

Relevance: 30.00%

Abstract:

Image and video compression play a major role in the world today, allowing the storage and transmission of large volumes of multimedia content. However, processing this information requires substantial computational resources, so improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia content, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, it takes considerable time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU respectively. In this dissertation, to complement that work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU implementation into OpenCL-CPU. The proposed solutions improve the computational performance of MMP by factors of 3 and 2.7, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic image/video (light field) processing. Some of the proposed modifications for encoding this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These HEVC-based compression algorithms for holoscopic images implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but their implementation is considerably slower than HEVC. To enable better execution times, we chose the OpenCL API as the GPU enabling language to increase the module's performance. With its most costly configuration, we are able to reduce the GT module's execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45×.
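A minimal sketch of the block-level data parallelism that such CPU implementations exploit, shown here with Python's multiprocessing rather than OpenMP or OpenCL, and with a placeholder standing in for the actual MMP per-block coder.

```python
from multiprocessing import Pool
import numpy as np

def encode_block(block):
    """Placeholder for the per-block MMP pattern-matching encoder."""
    return block.mean()  # stand-in for the real rate-distortion search

def encode_image(image, block_size=16, workers=4):
    """Split the image into blocks and encode them in parallel."""
    h, w = image.shape
    blocks = [image[i:i + block_size, j:j + block_size]
              for i in range(0, h, block_size)
              for j in range(0, w, block_size)]
    with Pool(workers) as pool:            # independent blocks run in parallel
        return pool.map(encode_block, blocks)

if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float32)
    print(len(encode_image(img)))          # 16 blocks of 16x16
```

Note that MMP's adaptive dictionary introduces dependencies between blocks, so the real parallelizations described above are more involved than this independent-blocks sketch.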

Relevance: 20.00%