14 results for SOLUTION-PHASE APPROACH

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

80.00%

Publisher:

Abstract:

Project work carried out to obtain the degree of Master in Informatics and Computer Engineering.

Relevance:

30.00%

Publisher:

Abstract:

We have calculated the equilibrium shape of the axially symmetric Plateau border along which a spherical bubble contacts a flat wall, by analytically integrating Laplace's equation in the presence of gravity, in the limit of small Plateau border sizes. This method has the advantage that it provides closed-form expressions for the positions and orientations of the Plateau border surfaces. Results are in very good overall agreement with those obtained from a numerical solution procedure, and are consistent with experimental data. In particular we find that the effect of gravity on Plateau border shape is relatively small for typical bubble sizes, leading to a widening of the Plateau border for sessile bubbles and to a narrowing for pendant bubbles. The contact angle of the bubble is found to depend even more weakly on gravity. (C) 2009 Elsevier Inc. All rights reserved.
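
For orientation (notation assumed here, not given in the abstract): the relation integrated in such calculations is the Young-Laplace equation with a hydrostatic term,

\gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right) = \Delta p_0 + \Delta\rho \, g \, z

where γ is the surface tension, R_1 and R_2 are the principal radii of curvature of the Plateau border surface, Δp_0 is the pressure jump at a reference height, Δρ the liquid-gas density difference, g the acceleration of gravity, and z the height. In the small-Plateau-border limit the gravitational term acts as a small correction, which is consistent with the weak gravity dependence of the shape and contact angle reported above.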

Relevance:

30.00%

Publisher:

Abstract:

The development of high-performance monolithic RF front-ends requires innovative RF circuit design to make the most of a given technology. A fully differential approach is usually preferred, due to its well-known properties. Although the differential approach must be preserved inside the chip, there are cases where the input signal is single-ended, such as RF image filters and IF filters in an RF receiver. In these situations, a stage able to convert single-ended into differential signals (a balun) is needed. The most cited topology, which is capable of providing high gain, consists of a differential stage with one of the two inputs grounded. Unfortunately, this solution has some drawbacks when implemented monolithically. This work presents the design and simulated results of an innovative high-performance monolithic single-ended to differential converter, which overcomes the limitations of those circuits. The integration of the monolithic active balun circuit with an LNA on a 0.18 μm CMOS process is also reported. The circuits presented here are aimed at 802.11a. Section 2 describes the balun circuit and Section 3 presents its performance when it is connected to a conventional single-ended LNA. Section 4 shows the simulated performance results, focused on phase/amplitude balance and noise figure. Finally, the last section draws conclusions and outlines future work.

Relevance:

30.00%

Publisher:

Abstract:

In the literature, the concepts of "polyneuropathy", "peripheral neuropathy" and "neuropathy" are often mistakenly used as synonyms. Polyneuropathy is a specific term that refers to a relatively homogeneous process affecting multiple peripheral nerves. Most of these tend to present as symmetric polyneuropathies that first manifest in the distal portions of the affected nerves. Many of these distal symmetric polyneuropathies are due to toxic-metabolic causes such as alcohol abuse and diabetes mellitus. Other distal symmetric polyneuropathies may result from an overproduction of substances that cause nerve pathology, as observed in anti-MAG neuropathy and monoclonal gammopathy of undetermined significance. Other "overproduction" disorders are hereditary, as in the Portuguese type of familial amyloid polyneuropathy (FAP). FAP belongs to a group of hereditary amyloidoses; it is an autosomal dominant, multisystemic disorder in which the mutant amyloid precursor, transthyretin, is produced in excess, primarily by the liver. The liver accounts for approximately 98% of all transthyretin production. FAP is confirmed by detecting a transthyretin variant with a methionine-for-valine substitution at position 30 [TTR (Met30)]. Familial amyloidotic polyneuropathy of the Portuguese type was first described by the Portuguese neurologist Corino de Andrade in 1939 and published in 1951. Most persons with this disorder are descended from Portuguese sailors who sired offspring in various locations, primarily in Sweden, Japan and Mallorca; their descendants emigrated worldwide, so that the disorder has been reported in other countries as well. More than 2000 symptomatic cases have been reported in Portugal. FAP progresses rapidly, with an average time course from symptom onset to multi-organ involvement and death of between ten and twenty years. Treatments directed at removing this aberrant protein, such as plasmapheresis and immunoadsorption, proved unsuccessful. Liver transplantation has been the only effective solution, as evidenced by almost 2000 liver transplants performed worldwide. A therapy for FAP with a novel agent, tafamidis, has shown some promise in ongoing phase III clinical trials. It is well recognized that regular physical activity of moderate intensity has a positive effect on physical fitness as gauged by body composition, aerobic capacity, muscular strength and endurance, and flexibility. Physical fitness has been reported to reduce symptoms and lessen impairment in the performance of activities of daily living. Exercise has been advocated as part of a comprehensive approach to the treatment of chronic diseases. Therefore, this chapter concludes with a discussion of the role of exercise training in FAP.

Relevance:

30.00%

Publisher:

Abstract:

Solution enthalpies of 1,4-dioxane have been obtained in 15 protic and aprotic solvents at 298.15 K. By breaking down the overall process using Solomonov's methodology, the cavity term was calculated and the interaction enthalpies (ΔH_int) were determined. The main factors involved in the interaction enthalpy have been identified and quantified using a QSPR approach based on the TAKA model equation. The relevant descriptors were found to be π* and β, which showed, respectively, exothermic and endothermic contributions. The magnitude of the π* coefficient points toward non-specific solute-solvent interactions playing a major role in the solution process. The positive value of the β coefficient reflects the endothermic character of the solvents' hydrogen-bond acceptor (HBA) basicity contribution, indicating that solvent molecules engaged in hydrogen bonding preferentially interact with each other rather than with 1,4-dioxane. (C) 2013 Elsevier B.V. All rights reserved.
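
For context, a Kamlet-Taft-type correlation of the kind implemented by the TAKA equation has the general linear solvation-energy form (a sketch; the exact regression terms reported in the paper are assumed, not quoted):

\Delta H_{\mathrm{int}} = \Delta H_{\mathrm{int}}^{0} + s\,\pi^{*} + a\,\alpha + b\,\beta

where π*, α and β are the solvent dipolarity/polarizability, hydrogen-bond donor acidity and hydrogen-bond acceptor basicity descriptors. According to the abstract, only the π* and β terms were significant for 1,4-dioxane, the former contributing exothermically and the latter endothermically.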

Relevance:

30.00%

Publisher:

Abstract:

When a mixture is confined, one of the phases can condense out. This condensate, which is otherwise metastable in the bulk, is stabilized by the presence of surfaces. In the sphere-plane geometry routinely used in atomic force microscopy and surface force apparatus experiments, it can form a bridge connecting the surfaces. The pressure drop in the bridge gives rise to additional long-range attractive forces between them. By minimizing the free energy of a binary mixture we obtain the force-distance curves as well as the structural phase diagram of the configuration with the bridge. Numerical results predict a discontinuous transition between the states with and without the bridge, and linear force-distance curves with hysteresis. We also show that a similar phenomenon can be observed in a number of different systems, e.g., liquid crystals and polymer mixtures. (C) 2004 American Institute of Physics.
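
For intuition only (a textbook estimate, not the functional minimized in the paper): a bridge with neck radius r, surface tension γ and Laplace pressure drop Δp pulls the two surfaces together with a force of roughly

F \approx \pi r^{2}\,\Delta p + 2\pi r\,\gamma

the first term coming from the pressure acting over the neck cross-section and the second from the surface tension acting along its perimeter. The full free-energy minimization additionally yields the bridge shape, the hysteresis, and the discontinuous bridging transition described above.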

Relevance:

30.00%

Publisher:

Abstract:

Finding the structure of a confined liquid crystal is a difficult task since both the density and order parameter profiles are nonuniform. Starting from a microscopic model and density-functional theory, one has to either (i) solve a nonlinear, integral Euler-Lagrange equation, or (ii) perform a direct multidimensional free energy minimization. The traditional implementations of both approaches are computationally expensive and plagued with convergence problems. Here, as an alternative, we introduce an unsupervised variant of the multilayer perceptron (MLP) artificial neural network for minimizing the free energy of a fluid of hard nonspherical particles confined between planar substrates of variable penetrability. We then test our algorithm by comparing its results for the structure (density-orientation profiles) and equilibrium free energy with those obtained by standard iterative solution of the Euler-Lagrange equations and with Monte Carlo simulation results. Very good agreement is found and the MLP method proves competitively fast, flexible, and refinable. Furthermore, it can be readily generalized to the richer experimental patterned-substrate geometries that are now experimentally realizable but very problematic to conventional theoretical treatments.
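
A minimal Python/PyTorch sketch of the idea (an illustration under assumed, simplified physics: an ideal-gas functional with soft walls, not the authors' hard-nonspherical-particle functional): an MLP maps position to log-density, and its weights are trained, unsupervised, with the free energy itself as the loss.

import torch

# Toy free-energy minimization with an unsupervised MLP (illustrative only; the
# paper's functional is for hard nonspherical particles with orientation, not an
# ideal gas between soft walls).
torch.manual_seed(0)

L, n = 5.0, 200                                   # slit width and number of grid points
z = torch.linspace(0.0, L, n).unsqueeze(1)        # positions between the two walls
dz = L / (n - 1)
mu = 0.0                                          # chemical potential (assumed value)
V_ext = 5.0 * (torch.exp(-4.0 * z) + torch.exp(-4.0 * (L - z)))   # soft repulsive walls (assumed form)

mlp = torch.nn.Sequential(                        # z -> log rho(z), so rho stays positive
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)

def grand_potential():
    rho = torch.exp(mlp(z))                       # density profile rho(z)
    # Omega[rho] = integral of rho*(ln rho - 1) + (V_ext - mu)*rho   (units with kT = 1)
    integrand = rho * (torch.log(rho) - 1.0) + (V_ext - mu) * rho
    return torch.trapz(integrand.squeeze(), dx=dz)

for step in range(2000):                          # unsupervised: the free energy itself is the loss
    opt.zero_grad()
    omega = grand_potential()
    omega.backward()
    opt.step()

# For this ideal-gas functional the exact minimizer is rho(z) = exp(mu - V_ext(z)),
# so the trained profile can be compared against a closed-form answer.
rho_exact = torch.exp(mu - V_ext)
print(float(grand_potential()), float((torch.exp(mlp(z)) - rho_exact).abs().max()))

In the density-functional setting of the paper, the integrand would be replaced by the hard-particle excess free energy and the profile would also carry orientation variables; the training loop itself is unchanged.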

Relevance:

30.00%

Publisher:

Abstract:

Final Master's project carried out to obtain the degree of Master in Electronics and Telecommunications Engineering.

Relevance:

30.00%

Publisher:

Abstract:

In video communication systems, the video signals are typically compressed and sent to the decoder through an error-prone transmission channel that may corrupt the compressed signal, causing the degradation of the final decoded video quality. In this context, it is possible to enhance the error resilience of typical predictive video coding schemes using as inspiration principles and tools from an alternative video coding approach, the so-called Distributed Video Coding (DVC), based on the Distributed Source Coding (DSC) theory. Further improvements in the decoded video quality after error-prone transmission may also be obtained by considering the perceptual relevance of the video content, as distortions occurring in different regions of a picture have a different impact on the user's final experience. In this context, this paper proposes a Perceptually Driven Error Protection (PDEP) video coding solution that enhances the error resilience of a state-of-the-art H.264/AVC predictive video codec using DSC principles and perceptual considerations. To increase the H.264/AVC error resilience performance, the main technical novelties brought by the proposed video coding solution are: (i) design of an improved compressed domain perceptual classification mechanism; (ii) design of an improved transcoding tool for the DSC-based protection mechanism; and (iii) integration of a perceptual classification mechanism in an H.264/AVC compliant codec with a DSC-based error protection mechanism. The performance results obtained show that the proposed PDEP video codec provides a better performing alternative to traditional error protection video coding schemes, notably Forward Error Correction (FEC)-based schemes. (C) 2013 Elsevier B.V. All rights reserved.
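
A rough Python sketch of the "perceptually driven" part (my toy illustration with made-up names; the actual solution classifies macroblocks in the compressed domain and protects them with DSC-generated parity rather than the simple proportional FEC split below):

from typing import Dict

def allocate_parity(region_relevance: Dict[str, float], parity_budget: int) -> Dict[str, int]:
    """Split a fixed parity-symbol budget across picture regions in proportion
    to their perceptual relevance, so visually important regions get stronger
    error protection (toy stand-in for the paper's DSC-based mechanism)."""
    total = sum(region_relevance.values())
    raw = {r: parity_budget * w / total for r, w in region_relevance.items()}
    alloc = {r: int(v) for r, v in raw.items()}
    # Hand out the symbols lost to rounding, largest fractional remainders first.
    leftover = parity_budget - sum(alloc.values())
    for r in sorted(raw, key=lambda r: raw[r] - alloc[r], reverse=True)[:leftover]:
        alloc[r] += 1
    return alloc

# Example: foreground judged four times as perceptually relevant as background.
print(allocate_parity({"foreground": 0.8, "background": 0.2}, parity_budget=100))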

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to discuss the linear solution of equality-constrained problems using the frontal solution method without explicit assembly.
Design/methodology/approach - A rewritten frontal solution method with a priori pivot and front sequences. OpenMP parallelization, with nearly linear scaling (in elimination and substitution) up to 40 threads. Constraints are enforced at the local assembly stage.
Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced.
Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem.
Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core.
Social implications - More realistic non-linear problems can be solved with this frontal code at the core of the Newton method.
Originality/value - Use of topological ordering of constraints; a priori pivot and front sequences; no need for symbolic assembly; constraints treated at the core of the frontal solver; use of OpenMP in the main frontal loop, now quantified; availability of the software.
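
For readers less familiar with the target problem, the system being solved is an equality-constrained linear problem; a dense NumPy sketch of its saddle-point (Lagrange-multiplier) formulation follows (an illustration only; the paper's contribution is solving it frontally, element by element, without explicit assembly and with constraints enforced during local assembly).

import numpy as np

def solve_equality_constrained(K: np.ndarray, f: np.ndarray,
                               C: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Minimize 1/2 x^T K x - f^T x subject to C x = d by solving the
    KKT (saddle-point) system  [[K, C^T], [C, 0]] [x; lambda] = [f; d]."""
    n, m = K.shape[0], C.shape[0]
    kkt = np.block([[K, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([f, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                       # the multipliers sol[n:] are discarded here

# Small example: 3 unknowns, one constraint x0 + x1 + x2 = 1.
K = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
C = np.array([[1.0, 1.0, 1.0]])
d = np.array([1.0])
x = solve_equality_constrained(K, f, C, d)
print(x, C @ x)                          # constraint residual C @ x should equal [1.0]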

Relevance:

30.00%

Publisher:

Abstract:

Finding the structure of a confined liquid crystal is a difficult task since both the density and order parameter profiles are nonuniform. Starting from a microscopic model and density-functional theory, one has to either (i) solve a nonlinear, integral Euler-Lagrange equation, or (ii) perform a direct multidimensional free energy minimization. The traditional implementations of both approaches are computationally expensive and plagued with convergence problems. Here, as an alternative, we introduce an unsupervised variant of the multilayer perceptron (MLP) artificial neural network for minimizing the free energy of a fluid of hard nonspherical particles confined between planar substrates of variable penetrability. We then test our algorithm by comparing its results for the structure (density-orientation profiles) and equilibrium free energy with those obtained by standard iterative solution of the Euler-Lagrange equations and with Monte Carlo simulation results. Very good agreement is found and the MLP method proves competitively fast, flexible, and refinable. Furthermore, it can be readily generalized to the richer experimental patterned-substrate geometries that are now experimentally realizable but very problematic to conventional theoretical treatments.

Relevance:

30.00%

Publisher:

Abstract:

The study of transient dynamical phenomena near bifurcation thresholds has attracted the interest of many researchers due to the relevance of bifurcations in different physical or biological systems. In the context of saddle-node bifurcations, where two or more fixed points collide and annihilate each other, it is known that the dynamics can suffer the so-called delayed transition. This phenomenon emerges when the system spends a long time before reaching the remaining stable equilibrium, found after the bifurcation, because of the presence of a saddle remnant in phase space. Some works have tackled this phenomenon analytically, especially in time-continuous dynamical systems, showing that the time delay τ scales according to an inverse square-root power law, τ ∼ (μ − μ_c)^(-1/2), as the bifurcation parameter μ is driven further away from its critical value μ_c. In this work, we first characterize this scaling law analytically, using complex-variable techniques, for a family of one-dimensional maps called the normal form of the saddle-node bifurcation. We then apply our general analytic results to a single-species ecological model with harvesting given by a unimodal map, characterizing the delayed transition and the scaling law arising from the harvesting constant. For both analyzed systems, we show that the numerical results are in perfect agreement with the analytical solutions we provide. The procedure presented in this work can be used to characterize the scaling laws of one-dimensional discrete dynamical systems with saddle-node bifurcations.
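
That scaling is easy to verify numerically; the short Python sketch below uses the standard saddle-node normal-form map x_{n+1} = x_n + x_n^2 + μ (notation assumed here, with μ_c = 0), counts the iterations needed to escape the bottleneck, and fits the exponent, which should come out close to -1/2.

import numpy as np

def escape_time(mu: float, x0: float = -0.5, x_exit: float = 1.0, max_iter: int = 10**7) -> int:
    """Iterations the saddle-node normal-form map x <- x + x^2 + mu needs to
    traverse the bottleneck left by the ghost of the annihilated fixed points."""
    x, n = x0, 0
    while x < x_exit and n < max_iter:
        x = x + x * x + mu
        n += 1
    return n

mus = np.array([1e-2, 1e-3, 1e-4, 1e-5])
taus = np.array([escape_time(m) for m in mus])
# Fitted slope of log(tau) versus log(mu); expected to be close to -1/2.
slope = np.polyfit(np.log(mus), np.log(taus), 1)[0]
print(taus, slope)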

Relevance:

30.00%

Publisher:

Abstract:

The Evidence Accumulation Clustering (EAC) paradigm is a clustering ensemble method which derives a consensus partition from a collection of base clusterings obtained using different algorithms. It collects from the partitions in the ensemble a set of pairwise observations about the co-occurrence of objects in the same cluster, and it uses these co-occurrence statistics to derive a similarity matrix referred to as the co-association matrix. The Probabilistic Evidence Accumulation for Clustering Ensembles (PEACE) algorithm is a principled approach for extracting a consensus clustering from the observations encoded in the co-association matrix, based on a probabilistic model for the co-association matrix parameterized by the unknown assignments of objects to clusters. In this paper we extend the PEACE algorithm by deriving a consensus solution according to a MAP approach with Dirichlet priors defined for the unknown probabilistic cluster assignments. In particular, we study the positive regularization effect of Dirichlet priors on the final consensus solution with both synthetic and real benchmark data.
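
The first stage described above is easy to make concrete; the short NumPy sketch below (an illustration with made-up base partitions) builds the co-association matrix whose entries record the fraction of base clusterings in which each pair of objects falls in the same cluster, i.e., the statistic that PEACE then models probabilistically.

import numpy as np

def co_association(partitions: np.ndarray) -> np.ndarray:
    """partitions: (n_partitions, n_objects) array of cluster labels.
    Returns the (n_objects, n_objects) co-association matrix with entries in [0, 1]:
    the fraction of base clusterings in which each pair of objects co-occurs."""
    n_part, n_obj = partitions.shape
    co = np.zeros((n_obj, n_obj))
    for labels in partitions:
        co += (labels[:, None] == labels[None, :]).astype(float)
    return co / n_part

# Toy ensemble of three base clusterings of five objects.
ensemble = np.array([[0, 0, 1, 1, 2],
                     [0, 0, 0, 1, 1],
                     [1, 1, 2, 2, 2]])
print(co_association(ensemble))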

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized into six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented toward real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA (vertex component analysis) works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
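
A compressed NumPy sketch of the projection step just described (my simplification of the idea; the full VCA algorithm also includes the subspace estimation and the affine/projective treatment of the data): each new endmember candidate is the pixel with the largest projection onto a direction orthogonal to the span of the endmembers already found.

import numpy as np

def extract_endmembers(X: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """X: (bands, pixels) hyperspectral data assumed to contain pure pixels.
    Returns a (bands, p) array of candidate endmember signatures, obtained by
    repeatedly projecting the data onto a direction orthogonal to the endmembers
    already selected (a stripped-down VCA-style iteration)."""
    rng = np.random.default_rng(seed)
    bands, n_pix = X.shape
    E = np.zeros((bands, 0))                    # endmember signatures found so far
    idx = []
    for _ in range(p):
        w = rng.standard_normal(bands)          # random direction ...
        if E.shape[1] > 0:                      # ... made orthogonal to span(E)
            Q, _ = np.linalg.qr(E)
            w = w - Q @ (Q.T @ w)
        proj = w @ X                            # component of every pixel along w
        k = int(np.argmax(np.abs(proj)))        # extreme of the projection
        idx.append(k)
        E = np.column_stack([E, X[:, k]])
    return X[:, idx]

# Toy test: 3 endmembers mixed with random abundance fractions that sum to one.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                         # true endmember signatures (50 bands)
A = rng.dirichlet(np.ones(3), size=1000).T      # abundance fractions (3, 1000)
X = np.column_stack([M @ A, M])                 # mixed pixels plus the pure ones
print(extract_endmembers(X, p=3).shape)         # -> (50, 3)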