976 results for Challenging problems


Relevance:

20.00%

Publisher:

Abstract:

A robot's drive has to exert driving forces that keep its arm and end effector at the proper position, velocity and acceleration, and simultaneously has to compensate for the contact forces arising between the tool and the workpiece, as required by the actual technological operation. It also has to balance the effects of a priori unknown external disturbance forces and the inaccuracies of the available dynamic model of the robot. Technological tasks that prescribe both the end-effector trajectory and the contact forces are challenging control problems that can be tackled in various manners.
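To make the structure of such a controller concrete, the following is a minimal single-joint sketch of a model-based tracking law with a contact-force compensation term; the inertia, damping, gains and the force-to-torque mapping are hypothetical placeholders, not the control scheme discussed in this work.

```python
# Minimal sketch of a model-based tracking law with contact-force
# compensation for a single joint. All numbers (inertia, damping, gains,
# force-to-torque mapping) are hypothetical placeholders.

def control_torque(q, qd, q_ref, qd_ref, qdd_ref, f_contact,
                   inertia=1.2, damping=0.3, force_to_torque=0.05,
                   kp=100.0, kd=20.0):
    """Return the drive torque for one control step.

    q, qd                  -- measured joint position and velocity
    q_ref, qd_ref, qdd_ref -- desired position, velocity, acceleration
    f_contact              -- measured tool-workpiece contact force
    """
    e, ed = q_ref - q, qd_ref - qd
    # Model-based feedforward plus PD feedback on the tracking error ...
    tau_motion = inertia * (qdd_ref + kp * e + kd * ed) + damping * qd
    # ... and a term that compensates the external contact force.
    tau_force = force_to_torque * f_contact
    return tau_motion + tau_force

# One step of the control loop.
print(control_torque(q=0.48, qd=0.1, q_ref=0.5, qd_ref=0.0, qdd_ref=0.0,
                     f_contact=12.0))
```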

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a computational tool (PHEx), developed in Excel VBA, for solving sizing and rating design problems involving chevron-type plate heat exchangers (PHE) with a 1-pass-1-pass configuration. The rating methodology used in the program is outlined, and a case study is presented to show how the program can be used to perform sensitivity analyses on several dimensional parameters of the PHE and to observe their effect on the transferred heat and the pressure drop.
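To illustrate what a rating calculation of this kind involves, the sketch below estimates the heat duty and channel pressure drop of a 1-pass-1-pass PHE with the effectiveness-NTU method; the geometry, fluid properties and correlation constants are invented placeholders, not the chevron correlations implemented in PHEx.

```python
import math

# Hedged sketch of a 1-pass-1-pass PHE rating step (effectiveness-NTU).
# Geometry, fluid properties and correlation constants are illustrative
# placeholders, not the chevron correlations used in PHEx.

def rate_phe(m_hot, m_cold, cp=4180.0, t_hot_in=80.0, t_cold_in=20.0,
             n_channels=20, area_per_plate=0.3, d_h=0.004, a_channel=0.002,
             mu=5e-4, k=0.64, rho=985.0, plate_length=1.0):
    g = (m_hot / (n_channels / 2)) / a_channel      # mass flux per hot channel
    re = g * d_h / mu
    pr = cp * mu / k
    nu = 0.3 * re**0.65 * pr**0.4                   # placeholder Nu correlation
    h = nu * k / d_h                                # film coefficient (both sides)
    u = h / 2.0                                     # overall coefficient, clean plates
    area = n_channels * area_per_plate

    c_min, c_max = sorted((m_hot * cp, m_cold * cp))
    ntu, c_r = u * area / c_min, c_min / c_max
    if abs(1.0 - c_r) < 1e-9:                       # balanced counter-flow limit
        eff = ntu / (1.0 + ntu)
    else:
        eff = (1 - math.exp(-ntu * (1 - c_r))) / (1 - c_r * math.exp(-ntu * (1 - c_r)))
    q = eff * c_min * (t_hot_in - t_cold_in)        # transferred heat [W]

    f = re**-0.25                                   # placeholder friction factor
    dp = 2.0 * f * (plate_length / d_h) * g**2 / rho  # channel pressure drop [Pa]
    return q, dp

q, dp = rate_phe(m_hot=2.0, m_cold=1.5)
print(f"duty = {q/1e3:.0f} kW, pressure drop = {dp/1e3:.2f} kPa")
```

Re-running such a function while varying parameters like n_channels or plate_length is, in essence, the kind of sensitivity analysis the paper describes.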

Relevance:

20.00%

Publisher:

Abstract:

Mechanical Systems and Signal Processing, Vol. 22, No. 6

Relevance:

20.00%

Publisher:

Abstract:

This article presents a new approach (MM-GAV-FBI) to the resource-constrained project scheduling problem with multiple execution modes per activity, known in the English-language literature as the MRCPSP. Each project has a set of activities with defined technological precedences and a set of limited resources, and each activity may have more than one execution mode. Projects are scheduled using a schedule generation scheme (SGS) integrated with a metaheuristic based on the genetic algorithm paradigm: activity priorities are obtained from a genetic algorithm, and the chromosome representation is based on random keys. The SGS generates non-delay schedules, and a local improvement is applied after a solution is obtained. The goal of the approach is to find the best schedule, that is, the one with the shortest possible duration that satisfies the activity precedences and the resource constraints. The proposed approach is tested on a set of problems taken from the specialized literature, and the computational results are compared with other approaches. The results confirm the good performance of the approach, not only in terms of solution quality but also in terms of computation time.
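As a rough illustration of the random-key decoding idea, the sketch below assigns each activity a priority key and decodes the chromosome with a serial schedule generation scheme on a tiny single-mode instance with one renewable resource; the instance data are invented, and the full MM-GAV-FBI approach additionally handles multiple execution modes, multiple resources and a local improvement step.

```python
import random

# Minimal sketch of the random-key idea: a chromosome of priorities in [0, 1]
# is decoded into a schedule by a serial schedule-generation scheme (SGS).
# Single mode and a single renewable resource are assumed here; the instance
# below is a hypothetical toy example.

# Activity: (duration, resource demand, predecessors)
ACTS = {
    1: (0, 0, []),        # dummy start
    2: (3, 2, [1]),
    3: (2, 3, [1]),
    4: (4, 2, [2]),
    5: (2, 1, [2, 3]),
    6: (0, 0, [4, 5]),    # dummy end
}
CAPACITY = 4              # renewable resource capacity per period

def decode(keys):
    """Serial SGS: schedule activities by priority, respecting precedence
    and the resource capacity; return (makespan, start times)."""
    start, usage = {}, {}                      # usage[t] = resource used at time t
    unscheduled = set(ACTS)
    while unscheduled:
        eligible = [a for a in unscheduled
                    if all(p in start for p in ACTS[a][2])]
        a = max(eligible, key=lambda x: keys[x])   # highest priority first
        dur, dem, preds = ACTS[a]
        t = max([start[p] + ACTS[p][0] for p in preds], default=0)
        # Delay until the resource profile admits the activity.
        while any(usage.get(tt, 0) + dem > CAPACITY for tt in range(t, t + dur)):
            t += 1
        for tt in range(t, t + dur):
            usage[tt] = usage.get(tt, 0) + dem
        start[a] = t
        unscheduled.remove(a)
    return max(start[a] + ACTS[a][0] for a in ACTS), start

# One random-key chromosome and its decoded schedule.
keys = {a: random.random() for a in ACTS}
print(decode(keys))
```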

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Mathematics, speciality of Differential Equations, at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

Relevance:

20.00%

Publisher:

Abstract:

Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students to quickly review basic material in probability and statistics. Descriptive statistics are presented first, and probability is reviewed second. Discrete and continuous distributions are then presented, and sampling and estimation with hypothesis testing are covered in the last two chapters. Solutions to the proposed exercises are provided for the reader's reference.

Relevance:

20.00%

Publisher:

Abstract:

Linear Algebra—Selected Problems is a unique book for senior undergraduate and graduate students to quickly review basic material in linear algebra. Vector spaces are presented first, and linear transformations are reviewed second. Matrices and linear systems are then presented, and determinants and basic geometry are covered in the last two chapters. Solutions to the proposed exercises are provided for the reader's reference.

Relevance:

20.00%

Publisher:

Abstract:

The present generation of eLearning platforms values standards for the interchange of learning objects. Nevertheless, for specialized domains these standards are insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. To address this issue we extended an existing learning objects standard to the particular requirements of a specialized domain, namely the automatic evaluation of programming problems. The focus of this paper is the definition of programming problems as learning objects. We introduce a new schema to represent metadata related to automatic evaluation that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; or the roles of different assets, such as test cases and program solutions. This new schema is being used in an interoperable repository of learning objects called crimsonHex.
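As a purely illustrative sketch of the kind of metadata involved, the snippet below builds a small manifest fragment carrying an evaluation type, an engine requirement and asset roles; the element and attribute names are invented for illustration and are not the actual crimsonHex schema.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch only: the element and attribute names below are invented
# to show the kind of evaluation metadata discussed above; they are not the
# actual schema used by crimsonHex.

problem = ET.Element("programmingProblem", id="p001")
evaluation = ET.SubElement(problem, "evaluation", type="dynamic")
ET.SubElement(evaluation, "engine", name="generic-judge", version=">=1.0")

assets = ET.SubElement(problem, "assets")
for path, role in [("statement.html", "statement"),
                   ("solution.c", "program-solution"),
                   ("tests/t01.in", "test-case-input"),
                   ("tests/t01.out", "test-case-output")]:
    ET.SubElement(assets, "asset", href=path, role=role)

print(ET.tostring(problem, encoding="unicode"))
```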

Relevance:

20.00%

Publisher:

Abstract:

Standards for learning objects focus primarily on content presentation. They have already been extended to support automatic evaluation, but this support is limited to exercises with a predefined set of answers. The existing standards lack the metadata required by specialized evaluators to handle types of exercises with an indefinite set of solutions. To address this issue we extended existing learning object standards to the particular requirements of a specialized domain. We present a definition of programming problems as learning objects that is compatible both with Learning Management Systems and with systems performing automatic evaluation of programs. The proposed definition includes metadata that cannot be conveniently represented using existing standards, such as: the type of automatic evaluation; the requirements of the evaluation engine; and the roles of different assets, such as test cases and program solutions. We also present the EduJudge project and its main services as a case study on the use of the proposed definition of programming problems as learning objects.
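To show how such role metadata might be consumed, the sketch below pairs test-case inputs with their expected outputs from a problem manifest; the manifest layout and role names are hypothetical, not the EduJudge format.

```python
# Hedged sketch: how an evaluation service might use asset roles from a
# problem manifest to pair test inputs with expected outputs. The manifest
# layout and role names are hypothetical.

manifest = {
    "evaluation": {"type": "dynamic", "engine": "generic-judge"},
    "assets": [
        {"href": "tests/t01.in", "role": "test-case-input"},
        {"href": "tests/t01.out", "role": "test-case-output"},
        {"href": "solution.py", "role": "program-solution"},
    ],
}

def test_pairs(assets):
    """Pair each test input with the expected output sharing its base name."""
    by_role = {}
    for a in assets:
        by_role.setdefault(a["role"], []).append(a["href"])
    outputs = {o.rsplit(".", 1)[0]: o for o in by_role.get("test-case-output", [])}
    return [(i, outputs[i.rsplit(".", 1)[0]])
            for i in by_role.get("test-case-input", [])
            if i.rsplit(".", 1)[0] in outputs]

print(test_pairs(manifest["assets"]))
```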

Relevance:

20.00%

Publisher:

Abstract:

Dissertation to obtain the degree of Master in Electrical Engineering, Energy branch.

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
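The iterative projection step described above can be sketched as follows: at each iteration the data are projected onto a direction orthogonal to the span of the endmembers found so far, and the pixel with the extreme projection is taken as the next endmember. This is a simplified illustration of that idea, not the full VCA algorithm with its dimensionality reduction and SNR-dependent projections; the toy data at the end are synthetic.

```python
import numpy as np

# Simplified sketch of the iterative projection idea described above: project
# the data onto a direction orthogonal to the subspace spanned by the
# endmembers already found and take the pixel with the extreme projection.
# Pure pixels are assumed present; the SNR-dependent details of VCA are omitted.

def extract_endmembers(Y, p, seed=0):
    """Y: (bands, pixels) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)
        if i > 0:
            # Remove the component lying in the span of the found endmembers.
            A = E[:, :i]
            w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
        f = w / np.linalg.norm(w)
        k = int(np.argmax(np.abs(f @ Y)))   # pixel with the extreme projection
        E[:, i] = Y[:, k]
        indices.append(k)
    return E, indices

# Toy example: 3 endmembers mixed linearly with abundances summing to one.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                       # synthetic endmember signatures
A = rng.dirichlet(np.ones(3), size=1000).T    # abundance fractions
Y = M @ A
Y[:, :3] = M                                  # insert pure pixels explicitly
E, idx = extract_endmembers(Y, 3)
print("selected pixel indices:", idx)
```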

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor in Informatics from Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.