14 results for 177-1091D
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
One of the premises of this project work is to map the practical staging work carried out on Gil Vicente's Auto da Barca do Inferno, staged for the Teatro Universitário de Coimbra (T.E.U.C.) in 2010, which constitutes the subject of analysis of the essay proposed here. In this essay I raise some questions that fed the creative process and that arose from the confrontation between the staging and a classical text, as can be seen in the first chapter. Then, in the second chapter, I document the preparation period prior to rehearsals. Finally, in the third chapter, I define the logics behind the process that led to the creation of the show, through an analysis of the daily documentation kept in the staging notebook, which records the rehearsal process up to the completion of the show.
Abstract:
This article results from research carried out for an undergraduate thesis in Sociology, and it aims to report the empirical results on the process of institutionalisation of the Instituto Português de Oncologia (IPO). Through an exercise of historical-sociological framing of the emergence of cancer as a disease socially represented as an absolute evil, and taking phenomenological constructivism as one of its main theoretical platforms, it seeks to understand how cancer crystallised as one of the most serious public health problems of contemporary societies, analysing for that purpose the concrete case of Portugal through the process of institutionalisation of the IPO, which occurred, as in other contexts, at the dawn of the twentieth century.
Abstract:
How does the construction of proof relate to the social practice developed in the mathematics classroom? This report addresses the role of diagrams in order to focus on the complementarity of participation and reification in the process of constructing a proof and negotiating its meaning. The discussion is based on the analysis of the mathematical practice developed by a group of four 9th-grade students and is inspired by the social theory of learning.
Abstract:
The rapid growth of genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Such imaging is applied most frequently to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own controls, reducing inter-animal variability. This makes longitudinal studies on the same animal possible and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amount of radiotracer needed, limited scanner sensitivity, image resolution, and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, and the detection of these quanta and particles in different materials, the Monte Carlo method is an important simulation tool in both nuclear medicine research and clinical practice. To optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation since they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that cannot be addressed by experimental or analytical approaches.
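The role of Monte Carlo methods described in the abstract above can be illustrated with a minimal sketch: sampling decay times from the exponential law and applying a detection probability. The half-life, acquisition window, and detector efficiency below are invented for illustration and are not taken from the thesis.

```python
import random

def simulate_decays(n_nuclei, half_life, window, detector_efficiency, seed=0):
    """Monte Carlo sketch: sample decay times from the exponential law and
    count decays that both occur inside the acquisition window and are
    registered by a detector of the given efficiency."""
    rng = random.Random(seed)
    tau = half_life / 0.6931471805599453  # mean lifetime = T_1/2 / ln 2
    detected = 0
    for _ in range(n_nuclei):
        t = rng.expovariate(1.0 / tau)    # decay time of one nucleus
        if t <= window and rng.random() < detector_efficiency:
            detected += 1
    return detected

# Hypothetical example: 18F-like half-life (~110 min), 10-minute
# acquisition, 30% detection efficiency
counts = simulate_decays(100_000, half_life=110.0, window=10.0,
                         detector_efficiency=0.3)
```

The expected count is roughly n · (1 − e^(−window/τ)) · efficiency; the simulated value fluctuates around it, which is exactly the statistical character that makes Monte Carlo useful for modeling PET acquisition.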
Abstract:
This work addresses the present-day (<100 ka) mantle heterogeneity in the Azores region through the study of two active volcanic systems on Terceira Island. Our study shows that mantle heterogeneities are detectable even when "coeval" volcanic systems (Santa Barbara and Fissural) erupted less than 10 km apart. These volcanic systems, respectively, reflect the influence of the Terceira and D. Joao de Castro Bank end-members defined by Beier et al. (2008) for the Terceira Rift. Santa Barbara magmas are interpreted as the result of mixing between a HIMU-type component, carried to the upper mantle by the Azores plume, and the regional depleted MORB magmas/source. Fissural lavas are characterized by higher Ba/Nb and Nb/U ratios and less radiogenic 206Pb/204Pb, 143Nd/144Nd and 176Hf/177Hf, requiring a small contribution of delaminated sub-continental lithospheric mantle residing in the upper mantle. Published noble gas data on lavas from both volcanic systems also indicate the presence of a relatively undegassed component, which is interpreted as inherited from a lower mantle reservoir sampled by the ascending Azores plume. As inferred from trace and major elements, melting began in the garnet stability field, while magma extraction occurred within the spinel zone. The chemical heterogeneity within each volcanic system is mainly explained by variable proportions of the above-mentioned local end-members and by crystal fractionation processes. © 2011 Elsevier B.V. All rights reserved.
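The end-member mixing invoked in the abstract above follows standard two-component isotope mixing arithmetic. The sketch below is generic: the concentrations and Pb isotope ratios are hypothetical placeholders, not Terceira or HIMU/MORB values from this study.

```python
def binary_mix_ratio(f, c1, r1, c2, r2):
    """Isotope ratio of a mixture of fraction f of component 1 with (1 - f)
    of component 2, where c1/c2 are the concentrations of the element
    carrying the ratio and r1/r2 the end-member isotope ratios (standard
    two-component mixing; the mix is weighted by element abundance)."""
    return (f * c1 * r1 + (1 - f) * c2 * r2) / (f * c1 + (1 - f) * c2)

# Hypothetical end-members: an enriched component with 1.0 ppm Pb and
# 206Pb/204Pb = 20.5 mixed with a depleted one (0.05 ppm, 18.0)
mid_mix = binary_mix_ratio(0.5, 1.0, 20.5, 0.05, 18.0)
```

Because the high-concentration end-member dominates the weighting, a 50:50 mass mixture still yields a ratio close to the enriched value; this asymmetry is why mixing curves in ratio-ratio space are hyperbolic rather than linear.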
Abstract:
This article analyses the current state of the representation of the specific professional knowledge that sustains and legitimises the exercise of teaching. It first discusses its nature and its social and historical construction, together with some of the ambiguities arising from the dispersion of the frames of reference for its social recognition. Against this background, it problematises some factors that influence the dominant representation of this knowledge, namely by presenting data from ongoing doctoral research carried out by the authors at universities in Portugal and Spain. This research focuses on (1) the study of the place assigned to knowledge production through research in initial teacher education programmes, and the characteristics of the knowledge produced in that context; (2) the analysis of dimensions of the social representation that reflect and translate it, among other aspects, into the particular formats of certification for teaching, here referring to the Portuguese case; and (3) the study of how teachers and future teachers involved in training processes centred on reflection on practice and on supervisory practices perceive the construction of professional knowledge. Teachers' professional knowledge is today a relevant research area, particularly in redefining teacher professionalism in the face of the complexity and heterogeneity of the societies that schools and teachers are expected to serve.
Abstract:
Critical review of the book "PALACIOS CEREZALES, Diego - O poder caiu na rua: crise de estado e acções colectivas na revolução portuguesa. Lisboa: Imprensa de Ciências Sociais, 2003".
Abstract:
Objective - The adjusted effect of long-chain polyunsaturated fatty acid (LCPUFA) intake during pregnancy on adiposity at birth of healthy full-term appropriate-for-gestational-age neonates was evaluated. Study Design - In a cross-sectional convenience sample of 100 mother-infant dyads, LCPUFA intake during pregnancy was assessed by food frequency questionnaire, with nutrient intake calculated using Food Processor Plus. Linear regression models for neonatal body composition measurements, assessed by air displacement plethysmography and anthropometry, were adjusted for maternal LCPUFA intakes, energy and macronutrient intakes, prepregnancy body mass index and gestational weight gain. Result - Positive associations between maternal docosahexaenoic acid intake and ponderal index in male offspring (β=0.165; 95% confidence interval (CI): 0.031–0.299; P=0.017), and between n-6:n-3 LCPUFA ratio intake and fat mass (β=0.021; 95% CI: 0.002–0.041; P=0.034) and percentage of fat mass (β=0.636; 95% CI: 0.125–1.147; P=0.016) in female offspring were found. Conclusion - Using a reliable validated method to assess body composition, adjusted positive associations between maternal docosahexaenoic acid intake and birth size in male offspring and between n-6:n-3 LCPUFA ratio intake and adiposity in female offspring were found, suggesting that maternal LCPUFA intake strongly influences fetal body composition.
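The adjusted associations reported in the abstract above come from covariate-adjusted linear regression. A minimal ordinary-least-squares sketch of such an adjusted coefficient with a normal-approximation 95% CI is given below; the function name and the synthetic data in the test are illustrative, not the study's actual model or data.

```python
import numpy as np

def adjusted_coefficient(y, exposure, covariates):
    """Fit y = b0 + b1*exposure + covariate terms by ordinary least squares
    and return the adjusted coefficient for the exposure with a
    normal-approximation 95% confidence interval."""
    n = len(y)
    X = np.column_stack([np.ones(n), exposure, covariates])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - X.shape[1]
    sigma2 = resid @ resid / dof                 # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    b1 = beta[1]
    return b1, (b1 - 1.96 * se[1], b1 + 1.96 * se[1])
```

Adjustment here simply means the covariates (e.g., energy intake, prepregnancy BMI) enter the design matrix alongside the exposure, so b1 is the exposure effect holding them fixed.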
Abstract:
Democratic transitions confront elites and society with the challenge of dealing with the legacies of dictatorial regimes, and the Portuguese transition was particularly significant in this respect, both because of the long duration of the authoritarian regime and because of the rupture involved in the type of regime change. Moreover, the Portuguese case, as the first of the so-called "third wave", had few models of inspiration and none of contagion; it was, as someone recently defined it, an experience of "democracy after war", in which the military played a decisive role in overthrowing the dictatorship (Bermeo, 2004). This article therefore analyses the purges carried out in the Ministry of Justice, paying particular attention to the winding up of the political courts of the Salazar regime (the Plenary Criminal Courts, Tribunais Criminais Plenários - TCP).
Abstract:
Antineoplastic drugs are widely used in the treatment of cancer, and several studies suggest acute and long-term effects associated with antineoplastic drug exposure, namely linking workplace exposure with health effects. The cytokinesis-block micronucleus (CBMN) assay is a promising short-term genotoxicity assay for human risk assessment, and such assays are recommended for monitoring populations chronically exposed to genotoxic agents. The aim of this investigation is the genotoxicity assessment of different professionals who handle cytostatic drugs. This research is a blinded case-control study comprising 46 non-exposed subjects and 44 workers who handle antineoplastic drugs, namely pharmacists, pharmacy technicians, and nurses. Statistically significant increases in genotoxicity biomarkers were found in the exposed group compared with controls (p<0.05). The findings underline the need for regular biomonitoring of personnel occupationally exposed to these drugs and support an enhanced health risk assessment.
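The exposed-versus-control comparison reported in the abstract above is a standard two-sample problem. As a hedged illustration, here is a Welch's t-statistic in plain Python; the biomarker counts used in the test are invented, not the study's data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    dof = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, dof
```

A large positive t for exposed minus control values corresponds to the kind of significant increase (p<0.05) the abstract reports; the actual study may have used a different test.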
Abstract:
Master's degree in Socio-Organisational Intervention in Health - Specialisation: Health Administration and Management Policies
Abstract:
Project promoted by the Sociedade Portuguesa de Matemática and the Fundação Calouste Gulbenkian.
Abstract:
In cluster analysis, it can be useful to interpret the partition built from the data in the light of external categorical variables which were not directly involved in clustering the data. An approach is proposed in the model-based clustering context to select a number of clusters which both fits the data well and takes advantage of the potential illustrative ability of the external variables. This approach makes use of the integrated joint likelihood of the data and the partitions at hand, namely the model-based partition and the partitions associated with the external variables. It is noteworthy that each mixture model is fitted to the data by the maximum likelihood methodology, excluding the external variables, which are used only to select a relevant mixture model. Numerical experiments illustrate the promising behaviour of the derived criterion.
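The abstract above proposes an integrated joint likelihood criterion, which is not reproduced here. As a generic illustration of model-based selection of the number of clusters, the sketch below fits one-dimensional Gaussian mixtures by EM and picks k by BIC, a simpler stand-in criterion.

```python
import math

def fit_gmm_1d(xs, k, iters=100):
    """Plain EM for a one-dimensional Gaussian mixture; returns the final
    log-likelihood. Means are initialised at evenly spaced quantiles."""
    xs_sorted = sorted(xs)
    n = len(xs)
    mus = [xs_sorted[int((j + 0.5) * n / k)] for j in range(k)]
    sigmas = [1.0] * k
    weights = [1.0 / k] * k

    def dens(x, m, s):
        return math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

    for _ in range(iters):
        resp = []
        for x in xs:                      # E-step: responsibilities
            d = [w * dens(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(d) or 1e-300
            resp.append([v / tot for v in d])
        for j in range(k):                # M-step: update parameters
            nj = sum(r[j] for r in resp) or 1e-300
            weights[j] = nj / n
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            sigmas[j] = max(1e-3, math.sqrt(
                sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj))
    return sum(math.log(max(sum(w * dens(x, m, s)
                                for w, m, s in zip(weights, mus, sigmas)),
                            1e-300)) for x in xs)

def select_k_by_bic(xs, k_max=4):
    """Choose the number of components by BIC, a stand-in here for the
    integrated joint likelihood criterion the abstract actually proposes."""
    best_k, best_bic = 1, float("inf")
    for k in range(1, k_max + 1):
        n_params = 3 * k - 1              # weights (k-1), means, variances
        bic = n_params * math.log(len(xs)) - 2 * fit_gmm_1d(xs, k)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

The abstract's criterion differs in that it also scores how well each candidate partition aligns with the external categorical variables, rather than penalising model complexity alone.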
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
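The orthogonal subspace projection step described above can be sketched in a few lines: the projector annihilates the undesired signatures, after which the residual is correlated with the target signature. This is a minimal numpy illustration with toy signatures, not the cited implementation.

```python
import numpy as np

def osp_detector(pixel, target, undesired):
    """Orthogonal subspace projection sketch: P = I - U U^+ annihilates the
    undesired signatures (columns of `undesired`); the projected pixel is
    then correlated with the target signature."""
    U = undesired
    # P projects onto the subspace orthogonal to the columns of U
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return float(target @ P @ pixel)
```

Any pixel composed purely of undesired signatures maps to (near) zero, while a pixel containing the target yields a response proportional to its abundance, which is the detection behaviour the chapter describes.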
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed by the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance.
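The sum-to-one dependence noted above is easy to verify numerically: abundance vectors drawn from a flat Dirichlet always sum to one, which forces a negative correlation between fractions and thus rules out statistical independence. This is a generic numpy check, not the chapter's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
# 10,000 abundance vectors for 3 endmembers, each summing to one
abund = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10_000)

# Rows sum to one by construction (full additivity) ...
assert np.allclose(abund.sum(axis=1), 1.0)

# ... which forces negative correlation between any two fractions,
# violating the independence assumption behind ICA
corr = np.corrcoef(abund[:, 0], abund[:, 1])[0, 1]
```

For a symmetric Dirichlet over K components the pairwise correlation is exactly −1/(K−1), so with K = 3 the sample correlation sits near −0.5, far from the zero that independence would require.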
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.