48 results for Process capability index
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Research on resilience suggests that a child developing in an adverse context may benefit from relevant personal and environmental attributes. In this sense, we aimed to study to what extent the child's sensory modulation competences and the quality of mother-child interactions influenced risk trajectories and could promote the child's opportunities for resilience. The study included 136 children, 67 girls and 69 boys, aged between 7 and 36 months. We analysed maternal sensitivity in a free-play situation using the CARE-Index scale, and sensory processing through an interview based on Dunn's (1997) protocol, built on the four sensory processing patterns: low registration, sensory sensitivity, sensory seeking and sensory avoidance, a construct previously validated. Based on the premises of the authentic assessment model, we constructed an index of capabilities, which served as a reference for assessing risk and resilience. The results indicated that child resilience in a context of poverty was associated with indicators of high maternal sensitivity and with adequate sensory processing indices. The discussion of the results was framed within current and emerging models of neurobiological and environmental influences on risk and resilience processes.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
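As a concrete illustration of the linear mixing model just described, the following minimal sketch (band count, endmember signatures and noise level are invented for illustration, not taken from the chapter) simulates a mixed pixel and recovers its abundances by unconstrained least squares; practical unmixing would additionally impose the nonnegativity and sum-to-one constraints discussed below.

```python
import numpy as np

# Minimal sketch of the linear mixing model y = M a + n, with synthetic,
# illustrative values (band count, endmember signatures and SNR are assumptions).
rng = np.random.default_rng(0)
L, p = 50, 3                         # number of spectral bands, number of endmembers
M = rng.uniform(0.0, 1.0, (L, p))    # columns are made-up endmember signatures

a_true = rng.dirichlet(np.ones(p), size=1).T             # abundances: nonnegative, sum to one
y = M @ a_true + 0.01 * rng.standard_normal((L, 1))      # observed pixel with additive noise

# With known signatures the problem is linear: unconstrained least-squares estimate.
a_hat, *_ = np.linalg.lstsq(M, y, rcond=None)
print("true abundances:", a_true.ravel())
print("LS estimate    :", a_hat.ravel())
```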
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A more recently introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
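The dependence argument above can be checked numerically. In the hedged sketch below (all values are illustrative assumptions), abundance vectors drawn uniformly on the simplex are necessarily negatively correlated because they sum to one, which is precisely what conflicts with the independence assumption underlying ICA and IFA.

```python
import numpy as np

# Sketch: abundance fractions that sum to one cannot be mutually independent.
# Sample many pixels' abundances uniformly on the simplex (flat Dirichlet) and
# inspect their pairwise correlations; the sum-to-one constraint forces them
# to be negative, illustrating why plain ICA/IFA assumptions are violated.
rng = np.random.default_rng(1)
p, n_pixels = 4, 100_000
A = rng.dirichlet(np.ones(p), size=n_pixels)   # shape (n_pixels, p), rows sum to 1

print("row sums (first 3):", A[:3].sum(axis=1))        # all exactly 1
print("pairwise correlations:\n", np.round(np.corrcoef(A.T), 3))
# Off-diagonal entries are around -1/(p-1), i.e. clearly non-zero dependence.
```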
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by a MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
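As a minimal sketch of the generative idea behind the proposed blind unmixing scheme (mixture weights, Dirichlet parameters and signatures below are invented, and the EM-type inference of the mixing matrix is not reproduced), abundances drawn from a mixture of Dirichlet densities satisfy positivity and full additivity by construction and are then observed through a linear mixing matrix:

```python
import numpy as np

# Illustrative sketch (assumed parameters): abundance vectors drawn from a
# mixture of Dirichlet sources, then observed through a linear mixing matrix.
rng = np.random.default_rng(2)
p, L, n_pixels = 3, 30, 1000

weights = np.array([0.6, 0.4])                    # mixture weights (assumption)
alphas  = np.array([[8.0, 1.0, 1.0],              # Dirichlet parameters of each
                    [1.0, 1.0, 8.0]])             # mixture component (assumption)

comp = rng.choice(len(weights), size=n_pixels, p=weights)   # component label per pixel
A = np.vstack([rng.dirichlet(alphas[c]) for c in comp])     # abundances, rows sum to 1

M = rng.uniform(0.0, 1.0, (L, p))                 # made-up endmember signatures
Y = A @ M.T + 0.005 * rng.standard_normal((n_pixels, L))    # noisy observations

print(Y.shape, "observations; abundance rows sum to 1:", np.allclose(A.sum(1), 1))
```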
Abstract:
Nowadays, the Portuguese insurance industry operates in a market with a much more aggressive structure than a few decades ago. Markets and the economy have become globalised since the last decade of the 20th century. Market forces have gradually shifted – power is now mainly on the demand side. In order to meet the new requirements, the insurance industry must develop a strong strategic ability to respond to the constant changes of the new international economic order. One of the basic aspects of this strategic development will focus on the ability to predict the future. We introduce the subject by briefly describing the sector, its organisational structure in the Portuguese market, and the challenges arising from the development of the European Union. We then analyse the economic and financial structure of the sector. From this standpoint, we consider the possibility of designing models that could explain the evolution of insurance demand, claims and technical reserves. Such models, even if based on the past, would resolve, at least partly, one of the greatest difficulties experienced by insurance companies when estimating the budget. Thus, we examine the existence of variables that explain these quantities and are capable of forming the basis for models that are simple but efficient and that can be used for strategic planning.
Abstract:
We provide an agent with the capability to infer the relations (assertions) entailed by the rules that describe the formal semantics of an RDFS knowledge base. The proposed inferencing process formulates each semantic restriction as a rule implemented within a SPARQL query statement. The process expands the original RDF graph into a fuller graph that explicitly captures the rules' described semantics. The approach is currently being explored in order to support descriptions that follow the generic Semantic Web Rule Language. An experiment using the Fire-Brigade domain, a small-scale knowledge base, is adopted to illustrate the agent modeling method and the inferencing process.
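The abstract does not reproduce the actual rules or queries, so the following is only a hedged sketch, assuming the rdflib library and a toy class hierarchy rather than the Fire-Brigade knowledge base, of how one RDFS entailment rule (rdfs9, propagation of rdf:type along rdfs:subClassOf) can be written as a SPARQL INSERT and iterated until the expanded graph stops growing.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Hedged sketch: one RDFS entailment rule (rdfs9) written as a SPARQL INSERT
# and applied until a fixpoint, expanding the graph with the entailed triples.
# The vocabulary below is a toy example, not the Fire-Brigade knowledge base.
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.FireTruck, RDFS.subClassOf, EX.Vehicle))
g.add((EX.Vehicle, RDFS.subClassOf, EX.Asset))
g.add((EX.truck1, RDF.type, EX.FireTruck))

RDFS9 = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT { ?x a ?super }
WHERE  { ?x a ?sub . ?sub rdfs:subClassOf ?super . }
"""

while True:
    before = len(g)
    g.update(RDFS9)          # apply the rule; duplicate triples are ignored by the graph
    if len(g) == before:     # stop when no new assertions were entailed
        break

for s, p, o in g.triples((EX.truck1, RDF.type, None)):
    print(s, "a", o)         # truck1 is now typed as FireTruck, Vehicle and Asset
```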
Abstract:
The exposure index (lgM) obtained from a radiographic image may be a useful feedback indicator to the radiographer about the appropriate exposure level in routine clinical practice. This study aims to evaluate lgM in orthopaedic radiography performed in the standard clinical environment. We analysed the lgM of 267 exposures performed with an AGFA CR system. The mean value of lgM in our sample is 2.14, a significant difference (P = 0.000 < 0.05) from the 1.96 lgM reference. Data show that 72% of exposures are above 1.96 lgM and 42% are above the limit of 2.26. Median values of lgM are above 1.96 and below 2.26 for speed class (SC) 200 (2.16) and SC400 (2.13). The interquartile range is lower in SC400 than in SC200. The data indicate that lgM values are above the manufacturer's reference of 1.96. Departmental exposure charts should be optimised to reduce the dose given to patients.
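As a hedged illustration of the comparison reported above (the 267 individual lgM values are not available, so the data below are synthetic placeholders, not the study's measurements), a one-sample t-test against the 1.96 reference and the median/interquartile summaries could be computed along these lines:

```python
import numpy as np
from scipy import stats

# Placeholder lgM values for illustration only; the study's 267 exposures are
# not reproduced here. The reference value 1.96 is taken from the abstract.
rng = np.random.default_rng(3)
lgM = rng.normal(loc=2.14, scale=0.30, size=267)   # assumed spread, illustrative

t_stat, p_value = stats.ttest_1samp(lgM, popmean=1.96)
print(f"mean lgM = {lgM.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3g}")

q1, median, q3 = np.percentile(lgM, [25, 50, 75])
print(f"median = {median:.2f}, IQR = {q3 - q1:.2f}")
print("share above 1.96:", np.mean(lgM > 1.96).round(2),
      "| share above 2.26:", np.mean(lgM > 2.26).round(2))
```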
Abstract:
This thesis describes the preparation of titanate nanotubes (TNT) via alkaline hydrothermal synthesis, using a new methodology that avoids the use of crystalline TiO2 as precursor. The influence of sodium/proton substitution on the structure, morphology and optical properties of the prepared materials was studied. The results showed that the Na+ → H+ substitution leads to a reduction in the interlayer distance of the TNTs, with values between 1.13±0.03 nm and 0.70±0.02 nm measured for this parameter. The optical behaviour of the TNTs was studied in the UV-vis region, yielding an estimated optical band gap of 3.27±0.03 eV for the sample with the highest sodium content, whereas a value of 2.81±0.02 eV was determined for the protonated sample. These values show that the Na+ → H+ ion exchange shifted the TNT absorption band towards the near-visible region. The photocatalytic activity of the TNTs in the degradation of the dye rhodamine 6G (R6G) was then studied. Although the sample with the highest sodium content exhibited the greatest capacity to adsorb R6G, it was the protonated sample that showed the highest catalytic activity in the photodegradation of this dye. In a second stage, and with the aim of preparing new photosensitive nanostructured materials, the protonated TNTs were decorated with nanocrystalline semiconductors (SC) using a new method. For this purpose, the TNTs were decorated with ZnS, CdS and Bi2S3 nanocrystallites. The influence of the type of semiconductor on the structure, morphology and optical properties of the resulting SC/TNTs was studied. It was found that, for all the semiconductors used in the decoration process, the TNT structure is preserved and no segregation of the SC occurs. It was also found that the morphology of the prepared nanocomposites depends strongly on the nature of the semiconductor. Regarding the optical behaviour of these materials, optical band gaps of 3.67±0.03 eV, 2.47±0.03 eV and 1.35±0.01 eV were determined for the ZnS/TNT, CdS/TNT and Bi2S3/TNT samples, respectively. These results show that, through the process of decorating TNTs with semiconductors, innovative nanocomposite materials can be prepared, with new and/or predefined optical properties over a wide range of the electromagnetic spectrum.
Abstract:
With a view to revolutionising the mobile communications sector, largely on account of the high data rates promised, LTE technology resorts to a technique expected to be widely used in future mobile communication networks: relaying. Together with this technique, LTE uses MIMO to improve transmission quality in hostile environments and to offer high transmission rates. In the planning of upcoming LTE networks, relaying is frequently employed. This technique aims to increase network coverage and/or capacity, and to improve performance at the cell edge. The performance of an RS depends on its location, on the radio channel propagation conditions to which both the RS and the UE are subject, and on the ability of the RS to receive, process and forward information. The objective of this thesis is to study the relationship between the positioning of an RS and its performance, in order to determine the ideal position of an RS (of both the AF and SDF types). In addition to this study, a comparison of the performance of the MIMO TD and OL-SM modes is presented, concluding under which conditions each should be used in an LTE network equipped with FRSs.
Abstract:
This article presents the design and test of a receiver front end aimed at LMDS applications at 28.5 GHz. It presents the system-level design after which the receiver was designed. The receiver comprises an LNA, a quadrature mixer, and a quadrature local oscillator. Experimental results at a 24 GHz center frequency show a conversion voltage gain of 15 dB and a conversion noise figure of 14.5 dB. The receiver operates from a 2.5 V power supply with a total current consumption of 31 mA.
Abstract:
In this paper we analyze the relationship between volatility in index futures markets and the number of open and closed positions. We observe that, although in general both types of position are positively correlated with contemporaneous volatility, in the case of the S&P 500 only the number of open positions has an influence on volatility. Additionally, we observe a stronger positive relationship on days characterized by extreme movements, when this contracting activity dominates the market. Finally, our findings suggest that day traders are not associated with an increase in volatility, whereas uninformed traders, both those opening and those closing positions, are.
Abstract:
This article presents a Markov chain framework to characterize the behavior of the CBOE Volatility Index (VIX index). Two possible regimes are considered: high volatility and low volatility. The specification accounts for deviations from normality and the existence of persistence in the evolution of the VIX index. Since the time evolution of the VIX index seems to indicate that its conditional variance is not constant over time, I consider two different versions of the model. In the first one, the variance of the index is a function of the volatility regime, whereas the second version includes an autoregressive conditional heteroskedasticity (ARCH) specification for the conditional variance of the index.
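A minimal simulation sketch of the two specifications described above, with invented transition probabilities and variance parameters rather than the paper's estimates: a two-state Markov chain drives either a regime-dependent constant variance (version 1) or a regime-scaled ARCH(1) conditional variance (version 2).

```python
import numpy as np

# Illustrative two-regime Markov-switching simulation for a VIX-like series.
# Transition probabilities and variance parameters are assumptions, not the
# paper's estimates.
rng = np.random.default_rng(4)
T = 500
P = np.array([[0.97, 0.03],    # row s: P(next state | current state s)
              [0.10, 0.90]])   # state 0 = low volatility, 1 = high volatility

sigma_regime = np.array([0.5, 2.0])   # version 1: variance depends only on the regime
omega, alpha = 0.1, 0.5               # version 2: ARCH(1) parameters

s = np.zeros(T, dtype=int)
x_v1 = np.zeros(T)                    # regime-dependent constant variance
x_v2 = np.zeros(T)                    # regime-scaled ARCH(1) conditional variance
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])               # Markov regime switching
    x_v1[t] = sigma_regime[s[t]] * rng.standard_normal()
    h = omega + alpha * x_v2[t - 1] ** 2              # ARCH(1) recursion
    x_v2[t] = sigma_regime[s[t]] * np.sqrt(h) * rng.standard_normal()

print("fraction of time in the high-volatility regime:", s.mean().round(2))
```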
Abstract:
The deposition of highly oriented a-axis CrO2 films onto Al2O3(0001) by atmospheric pressure (AP)CVD at temperatures as low as 330 °C is reported. Deposition rates strongly depend on the substrate temperature, whereas the film surface microstructure depends mainly on film thickness. For the experimental conditions used in this work, CrO2 growth kinetics are dominated by a surface reaction mechanism with an apparent activation energy of (121.0 ± 4.3) kJ mol⁻¹. The magnitude and temperature dependence of the saturation magnetization, up to room temperature, are consistent with bulk measurements.
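As a worked illustration of the surface-reaction-limited (Arrhenius) behaviour reported above, the sketch below uses the quoted apparent activation energy of 121.0 kJ/mol to compare deposition rate constants at two substrate temperatures; the higher temperature is an arbitrary example, not a condition from the paper.

```python
import numpy as np

# Arrhenius comparison, k(T) proportional to exp(-Ea / (R T)), using the apparent
# activation energy reported in the abstract; the 360 °C point is arbitrary.
R = 8.314          # J mol^-1 K^-1
Ea = 121.0e3       # J mol^-1 (from the abstract, 121.0 ± 4.3 kJ/mol)

T1 = 330.0 + 273.15   # K, the lowest deposition temperature reported
T2 = 360.0 + 273.15   # K, illustrative higher substrate temperature

ratio = np.exp(-Ea / (R * T2)) / np.exp(-Ea / (R * T1))
print(f"k(360 °C) / k(330 °C) ≈ {ratio:.1f}")   # rate increase expected from Ea alone
```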
Abstract:
The subject matter of this book is piano methodology, including technical, musical, artistic, ethical and philosophical issues and reflections. The purpose of this work is to share personal professional insight in the field of piano performance. This text assumes a certain continuity with the major contributions of artists such as Ludwig Deppe, Tobias Matthay, Grigory Kogan, Heinrich Neuhaus and George Kochevitsky. At the same time, it tries to integrate and complement this selected literature, bringing new ideas and suggestions on specific professional issues.
Abstract:
We describe the Lorenz links generated by renormalizable Lorenz maps with reducible kneading invariant (K_f^-, K_f^+) = (X, Y) * (S, W) in terms of the links corresponding to each factor. This gives a new kind of operation that permits us to generate new knots and links from the ones corresponding to the factors of the *-product. Using this result we obtain explicit formulas for the genus and the braid index of these renormalizable Lorenz knots and links. We then obtain explicit formulas for sequences of these invariants, associated with sequences of renormalizable Lorenz maps with kneading invariant (X, Y) * (S, W)^{*n}, concluding that both grow exponentially. This is especially relevant, since it is known that topological entropy is constant on the archipelagoes of renormalization.
Abstract:
ABSTRACT: Objectives – To determine the sensitivity and specificity of diffusion-weighted imaging (DWI) and T2 Fluid-Attenuated Inversion Recovery (FLAIR) sequences in the assessment of white matter (WM) lesions, and to verify to what extent they complement each other, in order to establish a set of good practices for routine brain MRI. Methodology – Using a quantitative methodology, a retrospective analysis was performed from which 30 patients were selected, 10 without pathology and 20 with pathology (2 with MS, 7 with leukoencephalopathy, 6 with microangiopathic disease and 5 with undefined white matter pathology). A sample of 60 images was obtained, namely 30 DWI-weighted and 30 T2 FLAIR images. Using the Viewdex® software, three observers evaluated the set of images according to seven criteria: visibility, detection, homogeneity, location, lesion margins and dimensions, and diagnostic capability. From the results, sensitivity and specificity were calculated using ROC curves, together with statistical analysis, namely the t-test, the Kappa agreement index and Pearson's correlation coefficient between the variables under study. Results – The sensitivity and specificity obtained for the T2 FLAIR sequence were higher (0.915 and 0.038, respectively) than those of the DWI sequence (0.08 and 0.100, respectively). No significant population variances were found. A high linear correlation was obtained between the variables, with an r value between 0.8 and 0.99. Considerable inter-observer variability was also found. Conclusions – Given the low sensitivity and specificity values obtained for DWI, it is suggested that it should be included in the routine brain protocol as an aid in the differential diagnosis with other pathologies.
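As a hedged sketch of the kind of reader-rating analysis described above (the binary detection calls below are invented placeholders, not the study's data), sensitivity and specificity for each sequence can be derived from a confusion matrix against the ground truth along these lines:

```python
import numpy as np

# Placeholder example: 30 cases (10 without pathology, 20 with), with invented
# per-sequence detection calls; sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
truth = np.array([0] * 10 + [1] * 20)                 # 0 = no pathology, 1 = pathology
rng = np.random.default_rng(5)
flair_call = np.where(truth == 1, rng.random(30) < 0.9, rng.random(30) < 0.2).astype(int)
dwi_call   = np.where(truth == 1, rng.random(30) < 0.3, rng.random(30) < 0.1).astype(int)

def sens_spec(call, truth):
    tp = np.sum((call == 1) & (truth == 1))
    fn = np.sum((call == 0) & (truth == 1))
    tn = np.sum((call == 0) & (truth == 0))
    fp = np.sum((call == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

for name, call in [("T2 FLAIR", flair_call), ("DWI", dwi_call)]:
    se, sp = sens_spec(call, truth)
    print(f"{name}: sensitivity = {se:.2f}, specificity = {sp:.2f}")
```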
Abstract:
Master's degree in Accounting (Mestrado em Contabilidade)