941 results for Nonlinear functional analysis
Abstract:
Fractal geometry is used to model a naturally fractured reservoir, and the concept of the fractional derivative is applied to the diffusion equation to incorporate the history of fluid flow in naturally fractured reservoirs. The resulting fractally fractional diffusion (FFD) equation is solved analytically in the Laplace space for three outer boundary conditions. The analytical solutions are used to analyze the response of a naturally fractured reservoir considering the anomalous behavior of oil production. Several synthetic examples are provided to illustrate the methodology proposed in this work and to explain the diffusion process in fractally fractured systems.
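For orientation, one widely cited form of such a model combines a fractal (Chang–Yortsos-type) radial diffusion equation with a Caputo time-fractional derivative; the abstract does not give the paper's exact formulation, so the equation below is only a representative sketch:

$$\frac{\partial^{\gamma} p}{\partial t^{\gamma}} \;=\; \frac{1}{r^{d_f-1}}\,\frac{\partial}{\partial r}\!\left(r^{\,d_f-\theta-1}\,\frac{\partial p}{\partial r}\right), \qquad 0<\gamma\le 1,$$

where $p$ is the pressure, $d_f$ is the fractal dimension of the fracture network, $\theta$ is the conductivity index, and $\gamma$ is the fractional order that carries the flow history. Setting $\gamma = 1$, $d_f = 2$, and $\theta = 0$ recovers classical radial diffusion (up to the diffusivity constant).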
Abstract:
Fractional dynamics is a growing topic in theoretical and experimental scientific research. A classical problem is the initialization required by fractional operators. While the problem is clear from the mathematical point of view, it constitutes a challenge in applied sciences. This paper addresses the problem of initialization and its effect upon dynamical system simulation when adopting numerical approximations. The results are compatible with system dynamics and clarify the formulation of adequate values for the initial conditions in numerical simulations.
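To make the initialization issue concrete, here is a minimal numerical sketch using the Grünwald–Letnikov (GL) approximation, a standard discretization of fractional operators (the abstract does not name the paper's specific operator or scheme, so this is illustrative only). Truncating the memory, i.e., starting from an incomplete history, visibly changes the computed derivative:

```python
import numpy as np

def gl_weights(alpha, n):
    """First n Grünwald–Letnikov weights w_k = (-1)^k C(alpha, k), via recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, alpha, h):
    """GL fractional derivative of a sampled signal f (step h), full history."""
    w = gl_weights(alpha, len(f))
    return np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(len(f))]) / h**alpha

h, alpha, L = 0.01, 0.5, 50                 # L: truncated memory length (illustrative)
t = np.arange(0.0, 2.0, h)
f = t**2                                    # test signal

d_full = gl_derivative(f, alpha, h)         # properly initialized: complete history
w = gl_weights(alpha, L)
d_short = np.array([np.dot(w[:min(k + 1, L)], f[k::-1][:min(k + 1, L)])
                    for k in range(len(f))]) / h**alpha
# d_full and d_short diverge once t exceeds L*h: the discarded history acts as an
# incorrect initialization and biases the simulated dynamics.
```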
Abstract:
In today’s healthcare paradigm, optimal sedation during anesthesia plays an important role both in patient welfare and in the socio-economic context. For the closed-loop control of general anesthesia, two drugs have proven to have stable, rapid onset times: propofol and remifentanil. The effect of these drugs is quantified by the bispectral index, a measure derived from the EEG signal. In this paper, wavelet time–frequency analysis is used to extract useful information from the clinical signals, since they are time-varying and mark important changes in the patient’s response to drug dose. Model-based predictive control algorithms are employed to regulate the depth of sedation by manipulating these two drugs. The results of identification from real data and the simulation of the closed-loop control performance suggest that the proposed approach can bring an improvement of 9% in overall robustness and may be suitable for clinical practice.
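As a rough illustration of the signal-processing step, the sketch below applies a multilevel discrete wavelet decomposition to a synthetic BIS-like signal using PyWavelets; the wavelet family, decomposition level, and features are assumptions for illustration, not the paper's actual choices:

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical BIS-like signal: a baseline with a drug-induced transition plus
# noise (illustrative only; not real clinical data).
fs = 1.0                                   # one sample per second (assumption)
t = np.arange(0.0, 600.0, 1.0 / fs)
rng = np.random.default_rng(0)
bis = 95.0 - 45.0 / (1.0 + np.exp(-(t - 120.0) / 15.0)) + rng.normal(0.0, 1.5, t.size)

# Multilevel discrete wavelet decomposition; detail-band energies localize the
# time-varying response and can serve as features for model identification.
coeffs = pywt.wavedec(bis, 'db4', level=4)          # [cA4, cD4, cD3, cD2, cD1]
energies = [float(np.sum(c**2)) for c in coeffs[1:]]
```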
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
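The orthogonal subspace projection idea summarized above reduces to a few lines of linear algebra; in this minimal sketch the signatures are random stand-ins for real endmember spectra:

```python
import numpy as np

def osp_projector(U):
    """P = I - U U^#, the projector onto the orthogonal complement of span(U).
    Columns of U are the undesired endmember signatures (bands x signatures)."""
    return np.eye(U.shape[0]) - U @ np.linalg.pinv(U)

def osp_detector(r, d, U):
    """Project the pixel r away from the undesired signatures, then correlate
    with the target signature d."""
    return float(d @ osp_projector(U) @ r)

# Illustrative use with random stand-ins for real spectral signatures (200 bands).
rng = np.random.default_rng(0)
d = rng.random(200)                           # target endmember signature
U = rng.random((200, 3))                      # undesired endmember signatures
r = 0.4 * d + U @ np.array([0.2, 0.3, 0.1])   # synthetic mixed pixel
score = osp_detector(r, d, U)
```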
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms, such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
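The dependence induced by the sum-to-one constraint is easy to demonstrate: abundance vectors drawn from a Dirichlet distribution (a natural model for simplex-valued data) are necessarily correlated, which conflicts with the independence assumed by ICA and IFA. A small simulation makes this visible:

```python
import numpy as np

# Abundance vectors live on the simplex; a Dirichlet distribution is a natural
# way to sample them (sum to one, all nonnegative).
rng = np.random.default_rng(1)
S = rng.dirichlet(np.ones(3), size=100_000)   # rows: abundance vectors, 3 endmembers

# The sum-to-one constraint forces negative correlation between fractions, so the
# mutual independence assumed by ICA/IFA cannot hold; for this symmetric case the
# off-diagonal correlations are exactly -1/(K-1) = -0.5.
print(np.corrcoef(S, rowvar=False))
```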
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], the spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
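As a concrete instance of the constrained least-squares formulation mentioned above, the following sketch solves the fully constrained (nonnegative, sum-to-one) unmixing of a single pixel; the data are synthetic and SLSQP is just one convenient solver choice, not necessarily the method used in the cited works:

```python
import numpy as np
from scipy.optimize import minimize

def fcls(r, M):
    """Fully constrained least-squares unmixing of a single pixel:
    minimize ||r - M a||^2 subject to a >= 0 and sum(a) = 1.
    M is bands x endmembers."""
    p = M.shape[1]
    res = minimize(lambda a: np.sum((r - M @ a)**2),
                   x0=np.full(p, 1.0 / p),
                   bounds=[(0.0, 1.0)] * p,
                   constraints=({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},),
                   method='SLSQP')
    return res.x

# Synthetic check: recover known abundances from a noiseless mixture.
rng = np.random.default_rng(2)
M = rng.random((50, 4))
a_true = np.array([0.1, 0.2, 0.3, 0.4])
a_hat = fcls(M @ a_true, M)
```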
The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
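The skewer-based PPI procedure described above reduces to a short loop; this is a minimal sketch, with the MNF (or PCA) preprocessing assumed to have been applied already:

```python
import numpy as np

def ppi(X, n_skewers=1000, seed=0):
    """Pixel purity index: project all pixels onto random 'skewers' and count
    how often each pixel is an extreme. X is pixels x components (the MNF or
    PCA reduction is assumed to have been applied already)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    score = np.zeros(n, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(d)
        proj = X @ skewer
        score[np.argmin(proj)] += 1          # both extremes of the projection count
        score[np.argmax(proj)] += 1
    return score

# The candidate endmembers are the pixels with the highest scores, e.g.:
# purest = np.argsort(ppi(X_reduced))[-10:]
```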
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that the proposed vertex component analysis (VCA) algorithm works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
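The projection loop described above can be sketched as follows; this is a simplified illustration of the stated mechanism, not the published VCA algorithm, which includes SNR-dependent preprocessing and other refinements:

```python
import numpy as np

def vca_like(X, p, seed=0):
    """Simplified sketch of the projection loop: repeatedly project the data
    onto a direction orthogonal to the endmembers found so far and take the
    extreme of the projection as the next endmember."""
    rng = np.random.default_rng(seed)
    n, d = X.shape                           # pixels x bands
    E = np.zeros((d, p))                     # endmember signatures (columns)
    indices = []
    for i in range(p):
        w = rng.standard_normal(d)
        if i > 0:
            U = E[:, :i]
            w = (np.eye(d) - U @ np.linalg.pinv(U)) @ w   # orthogonal direction
        k = int(np.argmax(np.abs(X @ w)))                 # extreme of projection
        E[:, i] = X[k]
        indices.append(k)
    return E, indices
```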
Abstract:
This paper studies the statistical distributions of worldwide earthquakes from 1963 to 2012. A Cartesian grid, dividing the Earth into geographic regions, is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships between the seismic data. Classical and generalized (fractional) entropy and the Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
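For reference, the two information measures used here are straightforward to compute from normalized event histograms; the sketch below shows their classical (non-fractional) versions on made-up counts, not the paper's generalized variants:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (natural logarithm)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def jensen_shannon(p, q):
    """Jensen–Shannon divergence: H(m) - (H(p) + H(q)) / 2, with m = (p + q) / 2."""
    m = 0.5 * (p + q)
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

# Hypothetical event counts for two grid cells, normalized to histograms.
p = np.array([120.0, 40.0, 10.0, 5.0]); p /= p.sum()
q = np.array([80.0, 60.0, 30.0, 2.0]);  q /= q.sum()
print(shannon_entropy(p), jensen_shannon(p, q))
```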
Abstract:
Complex industrial plants exhibit multiple interactions among smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries, causing a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years, the data reveal double PL behavior, while, for more recent time periods, a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, together with multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships between the data. The classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
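A minimal sketch of the comparison pipeline, on hypothetical histograms: symmetrized Kullback–Leibler divergences between period statistics feed an MDS embedding for visualization (the fractional generalizations used in the paper are not reproduced here):

```python
import numpy as np
from sklearn.manifold import MDS

def kl_divergence(p, q, eps=1e-12):
    """Kullback–Leibler divergence D(p || q) between discrete histograms."""
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical fatality histograms for four time periods (illustrative data only).
hists = np.array([[50, 30, 15, 5],
                  [40, 35, 20, 5],
                  [25, 30, 30, 15],
                  [10, 20, 35, 35]], dtype=float)

# Symmetrized KL distances feed an MDS embedding for 2-D visualization.
n = len(hists)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = 0.5 * (kl_divergence(hists[i], hists[j]) +
                         kl_divergence(hists[j], hists[i]))
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)
```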
Abstract:
A new method for the study and optimization of manipulator trajectories is developed. The novel feature resides in the modeling formulation. Standard system descriptions are based on a set of differential equations which, in general, require laborious computations and may be difficult to analyze. Moreover, the derived algorithms are suited to "deterministic" tasks, such as those appearing in repetitive work, and are not well adapted to the "random" operation that occurs in intelligent systems interacting with a non-structured and changing environment. These facts motivate the development of alternative models based on distinct concepts. The proposed embedding of statistics and the Fourier transform gives a new perspective towards the calculation and optimization of robot trajectories in manipulation tasks.
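A toy sketch of the embedding of statistics and the Fourier transform, on made-up trajectory data: average repeated noisy runs of the same task, then describe the mean trajectory by its spectrum (the paper's actual formulation is richer than this):

```python
import numpy as np

# Hypothetical joint-trajectory samples over repeated runs of the same task
# (illustrative stand-ins for measured manipulator data).
fs = 100.0                                  # sampling rate in Hz (assumption)
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(3)
runs = [np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(t.size)
        for _ in range(20)]

# Statistics over the ensemble of runs, then a Fourier description of the mean:
mean_traj = np.mean(runs, axis=0)
std_traj = np.std(runs, axis=0)
spectrum = np.fft.rfft(mean_traj)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
# |spectrum| gives a compact frequency-domain signature of the trajectory that
# can be compared and optimized across candidate motions.
```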
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M based repulsion algorithm.
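One plausible reading of an erf-based repulsion merit function is sketched below: the squared residual norm is augmented with terms that are close to 1 near already-found roots and vanish far from them, so N-M restarts are pushed toward new roots. The exact penalty and strategies in the paper may differ; this is only an illustrative form:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import erf

def repulsion_merit(F, roots, beta=10.0):
    """Penalty-type merit: squared residual norm plus erf-based repulsion terms
    that are ~1 near already-found roots and vanish away from them (one
    plausible form; the paper's exact penalty may differ)."""
    def merit(x):
        residual = np.sum(np.asarray(F(x))**2)
        repulsion = sum(1.0 - erf(beta * np.linalg.norm(x - r)) for r in roots)
        return residual + repulsion
    return merit

# Example system with two roots: x^2 + y^2 = 1 and y = x, i.e. +/-(1,1)/sqrt(2).
F = lambda x: [x[0]**2 + x[1]**2 - 1.0, x[1] - x[0]]
roots, rng = [], np.random.default_rng(4)
for _ in range(20):                          # multistart N-M with repulsion
    res = minimize(repulsion_merit(F, roots), rng.uniform(-2, 2, 2),
                   method='Nelder-Mead')
    if res.fun < 1e-6 and all(np.linalg.norm(res.x - r) > 1e-3 for r in roots):
        roots.append(res.x)
print(roots)
```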
Abstract:
Previously we have presented a model for generating human-like arm and hand movements on a unimanual anthropomorphic robot involved in human-robot collaboration tasks. The present paper extends that model to address the generation of human-like bimanual movement sequences in scenarios cluttered with obstacles. Movement planning involves large-scale nonlinear constrained optimization problems, which are solved using the IPOPT solver. Simulation studies show that the model generates feasible and realistic hand trajectories for action sequences involving the two hands. The computational costs involved in the planning allow for real-time human-robot interaction. A qualitative analysis reveals that the movements of the robot exhibit basic characteristics of human movements.
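As a schematic of the kind of nonlinear constrained problem such a planner solves, here is a toy obstacle-clearance formulation; the paper uses IPOPT on much larger problems, while this sketch uses SciPy's SLSQP only to stay self-contained, and every value in it is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the planner's nonlinear constrained optimization.
# Decision variable: a 2-D waypoint; objective: stay close to a reference point;
# constraint: keep a minimum clearance from a circular obstacle.
ref = np.array([0.6, 0.4])                  # hypothetical reference position
obst = np.array([0.5, 0.5])                 # hypothetical obstacle center
r_min = 0.15                                # required clearance

res = minimize(lambda x: np.sum((x - ref)**2),
               x0=np.array([0.0, 0.0]),
               constraints=({'type': 'ineq',
                             'fun': lambda x: np.linalg.norm(x - obst) - r_min},),
               method='SLSQP')
# Here the reference lies inside the clearance zone, so the constraint is active
# and the optimal waypoint sits on the clearance circle.
```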
Abstract:
ABSTRACT: Background: Ankylosing spondylitis (AS) is a chronic inflammatory disorder characterized by inflammation of the spine and sacroiliac joints, leading to progressive joint ankylosis and to a progressive deterioration of physical function and quality of life. Early diagnosis and early therapy may contribute to a better prognosis. The identification of biomarkers would be very useful for clinical practice and represents a great challenge for the scientific community. Objectives: The present study had the following aims: 1- to characterize the pattern of AS in Portuguese patients; 2- to investigate MHC and non-MHC gene associations with susceptibility and phenotypic features of AS; and 3- to identify candidate genes associated with AS by means of whole-genome microarray. Material and Methods: AS was defined in accordance with the modified New York criteria, and AS cases were recruited from the rheumatology outpatient clinics of the participating hospitals. Demographic, clinical, and radiological data were recorded and peripheral blood samples collected. A random group of HLA-B27-positive patients and controls was selected and typed for HLA class I and II by PCR-rSSOP. Extended HLA haplotypes were estimated with the Expectation-Maximization algorithm using Arlequin v3.11 software. Genotyping of IL23R, ERAP1, and ANKH allelic variants was carried out with TaqMan allelic discrimination assays. Association analysis was performed using the Cochrane-Armitage and linear regression tests, as implemented in PLINK, for dichotomous and quantitative variables, respectively. Gene expression profiling was carried out using Illumina HT-12 Whole-Genome Expression BeadChips, and candidate genes were validated using qPCR-based TaqMan Low Density Arrays (TLDAs). Results: A total of 369 patients (62.3% male; mean age 45.4±13.2 years; mean disease duration 11.4±10.5 years) were included. Regarding clinical disease pattern at the time of assessment, 49.9% had axial disease, 2.4% peripheral disease, 40.9% mixed disease, and 7.1% isolated enthesopathic disease. Acute anterior uveitis (33.6%) was the most common extra-articular manifestation. 80.3% of the patients were HLA-B27 positive. The haplotype A*02/B*27/Cw*02/DRB1*01/DQB1*05 seems to confer susceptibility to AS, whereas A*02/B*27/Cw*01/DRB1*08/DQB1*04 seems to provide protection in terms of disease activity and functional and radiological repercussion. Three markers (two for IL23R and one for ERAP1) showed significant single-locus disease associations, confirming the association of these genes with AS in the Portuguese population; the ANKH markers studied did not show an association with AS. No association was seen between non-MHC gene variants and the clinical manifestations of AS. A gene expression signature for AS was established; among the fourteen validated genes, several have a well-documented role in inflammation or in the modulation of cartilage and bone metabolism. Conclusions: A demographic and clinical profile of patients with AS in Portugal was established. The identification of genetic variants of target genes, as well as gene expression signatures, could provide a better understanding of AS pathophysiology and could be useful to establish models with relevance for diagnosis, prognosis, and therapeutic guidance.
Abstract:
The hypoxia inducible factor 1 alpha (HIF1a) is a key regulator of the tumour cell response to hypoxia, orchestrating mechanisms known to be involved in cancer aggressiveness and metastatic behaviour. In this study we sought to evaluate the association of a functional genetic polymorphism in HIF1A with overall and metastatic prostate cancer (PCa) risk and with response to androgen deprivation therapy (ADT). The HIF1A +1772 C>T (rs11549465) polymorphism was genotyped by real-time PCR, using DNA isolated from peripheral blood, in 1490 male subjects (754 with prostate cancer and 736 cancer-free controls). A nested group of cancer patients who were eligible for androgen deprivation therapy was followed up. Univariate and multivariate models were used to analyse the response to hormonal treatment and the risk of developing distant metastasis. Age-adjusted odds ratios were calculated to evaluate prostate cancer risk. Our results showed that patients under ADT carrying the HIF1A +1772 T-allele have an increased risk of developing distant metastasis (OR, 2.0; 95%CI, 1.1-3.9) and an independent 6-fold increased risk of resistance to ADT after multivariate analysis (OR, 6.0; 95%CI, 2.2-16.8). This polymorphism was not associated with an increased risk of being diagnosed with prostate cancer (OR, 0.9; 95%CI, 0.7-1.2). The HIF1A +1772 genetic polymorphism predicts more aggressive prostate cancer behaviour, supporting the involvement of HIF1a in prostate cancer biological progression and ADT resistance. Molecular profiles using hypoxia markers may help predict clinically relevant prostate cancer and response to ADT.
Abstract:
Manipulator systems are rather complex and highly nonlinear, which makes their analysis and control difficult. Classic system theory is well known; however, it is inadequate in the presence of strong nonlinear dynamics. Nonlinear controllers produce good results [1], and work has been done, e.g., relating the manipulator nonlinear dynamics with the frequency response [2–5]. Nevertheless, given the complexity of the problem, systematic methods which permit drawing conclusions about stability, imperfect modelling effects, compensation requirements, etc. are still lacking. In section 2 we start by analysing the variation of the poles and zeros of the descriptive transfer functions of a robot manipulator in order to motivate the development of more robust (and computationally efficient) control algorithms. Based on this analysis, a new multirate controller, which is an improvement of the well-known "computed torque controller" [6], is introduced in section 3. Some research in this area was done by Neuman [7, 8], showing that better robustness is possible if the basic controller structure is modified. The present study stems from those ideas and attempts to give a systematic treatment, resulting in easy-to-use standard engineering tools. Finally, in section 4, conclusions are presented.
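For context, the computed torque law referred to in section 3 has the standard form

$$\tau = \mathbf{H}(q)\left(\ddot{q}_d + K_v\,\dot{e} + K_p\,e\right) + \mathbf{c}(q,\dot{q}) + \mathbf{g}(q), \qquad e = q_d - q,$$

where $\mathbf{H}$, $\mathbf{c}$, and $\mathbf{g}$ are the manipulator's inertia matrix, Coriolis/centrifugal terms, and gravity vector, and $K_p$, $K_v$ are gain matrices. With perfect models the tracking error obeys $\ddot{e} + K_v\,\dot{e} + K_p\,e = 0$; the multirate variant updates these terms at different rates. The notation here is generic, not the paper's own.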