18 results for feature based cost
Abstract:
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its feature-free formulation and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches, and fast 3D model rectification and updating can take advantage of it. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the proposed approach.
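The direct estimation described above can be sketched in a few lines. This is a minimal illustration only: the two objective terms below are synthetic stand-ins for the paper's image-based dissimilarity and gradient score, and the weight `alpha` is an assumed parameter.

```python
# Sketch of direct model-parameter estimation via Differential Evolution.
# Both objective terms are stand-ins, not the paper's actual measures.
import numpy as np
from scipy.optimize import differential_evolution

def dissimilarity(params):
    # stand-in for the image-based dissimilarity of the projected model
    return np.sum((params - np.array([1.0, 2.0, 0.5])) ** 2)

def gradient_score(params):
    # stand-in for the image-gradient score along projected model edges
    return -np.exp(-np.sum(np.abs(params)))

def objective(params, alpha=0.1):
    # combined objective, as in the abstract: dissimilarity + gradient score
    return dissimilarity(params) + alpha * gradient_score(params)

bounds = [(-5.0, 5.0)] * 3  # search box for the 3 toy model parameters
result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print(result.x)  # close to [1.0, 2.0, 0.5]
```

Because Differential Evolution only needs objective evaluations, no image gradients or feature correspondences are required, which is what makes the feature-free formulation possible.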
Abstract:
Feature-based vocoders, e.g., STRAIGHT, offer a way to manipulate the perceived characteristics of the speech signal in speech transformation and synthesis. For the harmonic model, which provides excellent perceived quality, features for the amplitude parameters already exist (e.g., Line Spectral Frequencies (LSF), Mel-Frequency Cepstral Coefficients (MFCC)). However, because of the wrapping of the phase parameters, phase features are more difficult to design. To randomize the phase of the harmonic model during synthesis, a voicing feature is commonly used, which distinguishes voiced and unvoiced segments. However, voice production allows smooth transitions between voiced and unvoiced states, which makes the voicing segmentation sometimes tricky to estimate. In this article, two phase features are suggested to represent the phase of the harmonic model in a uniform way, without a voicing decision. The synthesis quality of the resulting vocoder has been evaluated, using subjective listening tests, in the context of resynthesis, pitch scaling, and Hidden Markov Model (HMM)-based synthesis. The experiments show that the suggested signal model is comparable to STRAIGHT, or even better in some scenarios. They also reveal some limitations of the harmonic framework itself in the case of high fundamental frequencies.
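The difficulty with wrapped phase mentioned above can be shown in two lines: ordinary statistics (and hence ordinary feature modelling) misbehave on angles, while circular operations do not. This is a generic illustration of the wrapping problem, not the article's proposed features.

```python
# Why wrapped phase is awkward as a feature: the naive average of two
# phases near +/- pi lands on the wrong side of the circle, while
# averaging on the unit circle gives the expected result.
import numpy as np

phases = np.array([3.0, -3.0])  # both close to pi (pi ~ 3.1416)
naive_mean = phases.mean()      # 0.0 -- opposite side of the circle
circular_mean = np.angle(np.exp(1j * phases).sum())  # ~pi, as expected
print(naive_mean, circular_mean)
```

Any phase feature has to build in this kind of circular treatment, which is why amplitude-style features (LSF, MFCC) do not carry over directly.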
Abstract:
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients, in order to contribute to improving the diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two human issues have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low cost and without any side effects. The experimental results obtained were very satisfactory and promising for the early diagnosis and classification of AD patients.
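The classification setup described above can be sketched with a small neural network. This is an illustrative sketch only: the feature vectors and labels below are synthetic stand-ins, not the study's speech data, and the network size is an assumption.

```python
# Toy sketch: binary classification (AD vs. control) from speech feature
# vectors with an artificial neural network. Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# toy 4-dimensional feature vectors, e.g. pause ratio, energy, fractal
# dimension, pitch variability (names purely illustrative)
X_control = rng.normal(0.0, 1.0, size=(40, 4))
X_ad = rng.normal(1.5, 1.0, size=(40, 4))       # shifted distribution
X = np.vstack([X_control, X_ad])
y = np.array([0] * 40 + [1] * 40)                # 0 = control, 1 = AD

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

In a real pilot study the features would come from the Spontaneous Speech and Emotional Response recordings, and accuracy would be estimated on held-out subjects rather than on the training set.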
Abstract:
This work analyzes a managerial delegation model in which firms that produce a differentiated good can choose between two production technologies: a low marginal cost technology and a high marginal cost technology. For the former to be adopted, more investment is needed than for the latter. By giving the managers of firms an incentive scheme based on a linear combination of profit and sales revenue, we find that Bertrand competition provides a stronger incentive to adopt the cost-saving technology than the strict profit maximization case. However, the results may be reversed under Cournot competition. We show that if the degree of product substitutability is sufficiently low (high), the incentive to adopt the cost-saving technology is larger under strict profit maximization (strategic delegation).
Abstract:
In this paper, I examine the treatment of competitive profit by Professor Varian in his textbook on Microeconomics, as a representative of the “modern” post-Marxian view on competitive profit. I show how, on the one hand, Varian defines profit as the surplus of revenues over cost and, thus, as a part of the value of commodities that is not itself a cost. On the other hand, however, Varian defines profit as a cost, namely, as the opportunity cost of capital, so that, in competitive conditions, the profit or income of capital is determined by the opportunity cost of capital. I argue that this second definition contradicts the first and that it is based on an incoherent conception of opportunity cost.
Abstract:
Contributed to: "Measuring the Changes": 13th FIG International Symposium on Deformation Measurements and Analysis; 4th IAG Symposium on Geodesy for Geotechnical and Structural Engineering (Lisbon, Portugal, May 12-15, 2008).
Abstract:
A dynamic optimisation framework is adopted to show how tax-based management systems theoretically correct the inefficient allocation of fishing resources derived from the stock externality. Optimal Pigouvian taxes on output (τ) and on inputs (γ) are calculated, compared and considered as potential alternatives to the current regulation of the Division VIII Cantabrian anchovy fishery. The sensitivity analysis of the optimal taxes illustrates an asymmetry between (τ) and (γ) when the cost-price ratio varies. The distributional effects also differ. Special attention is paid to the real implementation of tax-based systems in fisheries.
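The stock-externality correction described above can be illustrated with a toy Gordon-Schaefer fishery rather than the paper's own model. In this standard textbook setup (all parameter values below are made up), open access drives the stock to its bionomic level, and an output tax τ shifts that level to any desired target.

```python
# Toy Gordon-Schaefer sketch (not the paper's model): open access drives
# the stock to x = c/(p q); an output tax tau raises the bionomic stock to
# x = c/((p - tau) q). We solve for the tau that holds a target stock x*.
p = 10.0   # output price
c = 2.0    # unit cost of effort
q = 0.05   # catchability coefficient
K = 100.0  # carrying capacity

x_open_access = c / (p * q)      # bionomic stock under open access
x_target = K / 2                 # e.g. the maximum-sustainable-yield stock
tau = p - c / (q * x_target)     # output tax sustaining x_target

print(x_open_access)  # 4.0 -- heavily depleted
print(tau)            # 9.2 -- tax that holds the stock at K/2 = 50
```

The asymmetry the abstract mentions arises because an input tax γ enters through the cost term instead of the price term, so the two instruments respond differently when the cost-price ratio c/p changes.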
Abstract:
6 p. Paper from the 17th Conference on Sensors and Their Applications, held in Dubrovnik, Croatia, September 16-18, 2013.
Abstract:
Enhancing the handover process in broadband wireless communication deployments has traditionally motivated many research initiatives. In the high-speed railway domain, the challenge is even greater. Owing to the long distances covered, the mobile node gets involved in a compulsory sequence of handover processes. Consequently, poor performance during the execution of these handover processes significantly degrades the global end-to-end performance. This article proposes a new handover strategy for the railway domain: the RMPA handover, a Reliable Mobility Pattern Aware IEEE 802.16 handover strategy "customized" for a high-speed mobility scenario. The stringent high-mobility constraint is balanced by three other positive features of the high-speed context: mobility pattern awareness, different sources for location discovery techniques, and a previously known traffic data profile. To the best of the authors' knowledge, there is no IEEE 802.16 handover scheme that simultaneously covers the optimization of the handover process itself and the efficient timing of the handover process. Our strategy covers both areas of research while providing a cost-effective and standards-based solution. To schedule the handover process efficiently, the RMPA strategy makes use of a context-aware handover policy; that is, a handover policy based on the mobile node's mobility pattern, the time required to perform the handover, the neighboring network conditions, the data traffic profile, the received signal power, and the current location and speed of the train. Our proposal merges all these variables in a cross-layer interaction in the handover policy engine. It also enhances the handover process itself by establishing the values for the set of handover configuration parameters and mechanisms of the handover process. RMPA is a cost-effective strategy because compatibility with standards-based equipment is guaranteed.
The major contributions of the RMPA handover are in areas that have been left open to the handover designer's discretion. Our simulation analysis validates the RMPA handover decision rules and design choices. Our results supporting a high-demand video application in the uplink stream show a significant improvement in the end-to-end quality of service parameters, including end-to-end delay (22%) and jitter (80%), when compared with a policy based on signal-to-noise-ratio information.
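The context-aware policy described above can be caricatured as a single trigger rule. This is a hypothetical sketch, not the RMPA policy engine itself: the variable names, thresholds and margins below are all invented for illustration.

```python
# Hypothetical sketch of a context-aware handover trigger in the spirit of
# RMPA: the decision combines mobility-pattern knowledge (distance to the
# next base station along the track), train speed, received signal power,
# and the time the handover itself needs. All thresholds are illustrative.
def should_start_handover(distance_to_next_bs_m, speed_mps,
                          rx_power_dbm, handover_time_s,
                          power_threshold_dbm=-85.0, margin_s=1.0):
    # time until the train reaches the next base station's coverage
    time_to_next_bs_s = distance_to_next_bs_m / max(speed_mps, 0.1)
    # trigger early enough that the handover completes before it is forced
    deadline_ok = time_to_next_bs_s <= handover_time_s + margin_s
    # or trigger if the current link is already degrading
    power_low = rx_power_dbm < power_threshold_dbm
    return deadline_ok or power_low

# 200 m ahead at 83 m/s (~300 km/h): ~2.4 s left, handover needs 2 s + margin
print(should_start_handover(200.0, 83.0, -80.0, 2.0))   # True
# 1000 m ahead with good signal: no need to hand over yet
print(should_start_handover(1000.0, 83.0, -80.0, 2.0))  # False
```

The point of the known mobility pattern is exactly this: because the track and base-station positions are fixed, the deadline term can be computed in advance instead of reacting to signal degradation alone.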
Abstract:
Fundación Zain is developing new built heritage assessment protocols. The goal is to objectivize and standardize the analysis and decision process that leads to determining the degree of protection of built heritage in the Basque Country. The ultimate step in this objectivization and standardization effort will be the development of an information and communication technology (ICT) tool for the assessment of built heritage. This paper presents the groundwork carried out to make this tool possible: the automatic, image-based delineation of stone masonry. This is a necessary first step in the development of the tool, as the built heritage that will be assessed consists of stone masonry construction, and many of the features analyzed can be characterized according to the geometry and arrangement of the stones. Much of the assessment is carried out through visual inspection. Thus, this process will be automated by applying image processing to digital images of the elements under inspection. The principal contribution of this paper is the proposed automatic delineation framework. The other contribution is the performance evaluation of this delineation as the input to a classifier for a geometrically characterized feature of a built heritage object. The element chosen to perform this evaluation is the stone arrangement of masonry walls. The validity of the proposed framework is assessed on real images of masonry walls.
Abstract:
Although many optical fibre applications are based on their capacity to transmit optical signals with low losses, it can also be desirable for the optical fibre to be strongly affected by a certain physical parameter in the environment. In this way, it can be used as a sensor for that parameter. There are many strong arguments for the use of POFs as sensors. In addition to being easy to handle and low cost, they demonstrate advantages common to all multimode optical fibres. These specifically include flexibility, small size, good electromagnetic compatibility behaviour, and in general, the possibility of measuring a phenomenon without physically interacting with it. In this paper, a sensor based on POF is designed and analysed with the aim of measuring the volume and turbidity of a low-viscosity fluid, in this case water, as it passes through a pipe. A comparative study with a commercial sensor is provided to validate the proposed flow measurement. Likewise, turbidity is measured using different colour dyes. Finally, this paper presents the most significant results and conclusions from all the tests carried out.
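A turbidity reading from such a fibre sensor typically amounts to inverting an attenuation law. The sketch below is illustrative only: it assumes a simple Beer-Lambert-style model, and the attenuation coefficient and path length are made-up calibration values, not taken from the paper.

```python
# Illustrative sketch: inferring turbidity from the optical power received
# through the fluid, assuming Beer-Lambert attenuation. Calibration values
# (alpha, path length) are invented for the example.
import math

def turbidity_from_power(p_received_mw, p_emitted_mw,
                         alpha_per_ntu_cm=0.02, path_cm=2.5):
    # Beer-Lambert: P = P0 * exp(-alpha * T * L)  =>  T = -ln(P/P0) / (alpha * L)
    return -math.log(p_received_mw / p_emitted_mw) / (alpha_per_ntu_cm * path_cm)

print(turbidity_from_power(0.98, 1.0))  # nearly clear water: ~0.4
print(turbidity_from_power(0.60, 1.0))  # dyed/turbid water: ~10.2
```

In practice the coefficient `alpha_per_ntu_cm` would be fitted against reference samples, which is what the comparison with different colour dyes provides.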
Abstract:
215 p.
Abstract:
4 p.
Abstract:
4 p.
Abstract:
The study of emotions in human-computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to search for the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. The achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm for automatic emotion recognition, across all feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. To verify the soundness of the proposed process, a greedy search approach (FSS-Forward) has been applied and a comparison between the two is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested.
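The combination of an instance-based learner with greedy forward feature subset selection (the FSS-Forward baseline mentioned above) can be sketched as follows. This is a generic illustration on synthetic data, not the RekEmozio experiments: only two of the six toy features carry class information.

```python
# Sketch: greedy forward feature subset selection (akin to FSS-Forward)
# wrapped around an instance-based learner (k-NN). Data are synthetic;
# only features 0 and 3 are informative by construction.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, size=n)          # two emotion classes (toy labels)
X = rng.normal(size=(n, 6))             # six candidate features
X[:, 0] += 2.0 * y                      # informative feature
X[:, 3] += 1.5 * y                      # informative feature

knn = KNeighborsClassifier(n_neighbors=5)
selector = SequentialFeatureSelector(knn, n_features_to_select=2,
                                     direction="forward", cv=5)
selector.fit(X, y)
print(selector.get_support(indices=True))  # indices of the selected features
```

An evolutionary wrapper, as used in the paper's best configuration, would explore subsets by mutation and crossover instead of this one-feature-at-a-time greedy growth, at a higher search cost.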