967 results for SPANNING PROBABILITY
Abstract:
This paper studies several topics related to the concept of "fractional" that are not directly related to Fractional Calculus, but that can help the reader pursue new research directions. We introduce the concepts of non-integer positional number systems, fractional sums, fractional powers of a square matrix, tolerant computing and FracSets, negative probabilities, fractional delay discrete-time linear systems, and the fractional Fourier transform.
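As a concrete illustration of one of these topics (not taken from the paper itself), a fractional power of a square matrix can be computed numerically, for example with SciPy:

```python
# Illustrative computation of a fractional (here, one-half) power of a matrix.
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

B = fractional_matrix_power(A, 0.5)   # matrix "square root" of A
print(np.allclose(B @ B, A))          # True: B @ B recovers A
```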
Abstract:
This paper shows several ways to analyse the performance of a safety barrier, depending on the objective to be achieved, and presents a method to analyse the binary components usually present in the sensor systems of safety barriers. An application example of a water-based fire system is presented, and the Probability of Failure on Demand (PFD) of the sensor system is determined based on the analysis of the pressure switches installed in this safety barrier. This information allows the determination of the safety barrier's availability.
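The abstract does not give the PFD formula used; a hedged sketch of the common low-demand approximation for a single periodically proof-tested component, PFD_avg ≈ λ_DU · T / 2, with illustrative values:

```python
# Illustrative average-PFD calculation for a proof-tested binary component.
def pfd_avg(lambda_du_per_hour: float, proof_test_interval_hours: float) -> float:
    """Average probability of failure on demand for a component with dangerous
    undetected failure rate lambda_DU, assuming lambda_DU * T << 1."""
    return lambda_du_per_hour * proof_test_interval_hours / 2.0

# Example: a pressure switch with lambda_DU = 2e-6 per hour, proof-tested yearly.
print(pfd_avg(2e-6, 8760.0))  # ~8.8e-3
```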
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. (A minimal sketch of such a least-squares estimate is given after this abstract.) Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
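As a minimal illustration of the linear problem referred to at the beginning of this abstract (an illustrative sketch, not the chapter's code), the abundances of a pixel can be estimated by nonnegative least squares when the endmember signatures are known:

```python
# Illustrative sketch of linear unmixing with known endmember signatures:
# solve r = M a + n for the abundance vector a with a nonnegativity constraint.
# The sum-to-one constraint is not imposed in this simple example.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 50, 3
M = rng.random((n_bands, n_endmembers))                 # synthetic endmember signatures
a_true = np.array([0.6, 0.3, 0.1])                      # true abundance fractions
r = M @ a_true + 0.001 * rng.standard_normal(n_bands)   # observed pixel spectrum

a_hat, _ = nnls(M, r)                                   # nonnegative least-squares estimate
print(a_hat)                                            # close to [0.6, 0.3, 0.1]
```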
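In the same illustrative spirit, the generative assumption behind the blind unmixing methodology sketched above can be made concrete: abundance vectors drawn from a mixture of Dirichlet densities satisfy the positivity and constant-sum constraints by construction (the parameters below are illustrative, not taken from the chapter):

```python
# Illustrative draw of abundance fractions from a two-component mixture of
# Dirichlet densities; every sampled vector is nonnegative and sums to one.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.7, 0.3])                   # mixture weights (illustrative)
alphas = [np.array([9.0, 3.0, 1.0]),             # Dirichlet parameters of
          np.array([1.0, 1.0, 6.0])]             # each mixture component

n_pixels = 5
components = rng.choice(len(weights), size=n_pixels, p=weights)
abundances = np.vstack([rng.dirichlet(alphas[c]) for c in components])
print(abundances.sum(axis=1))                    # each row sums to 1
```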
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix, which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. (A compact sketch of this skewer-projection idea is given after this abstract.) The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
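A compact sketch of the skewer-projection idea behind PPI, referred to above (illustrative, not the original implementation): project all spectral vectors onto many random skewers and count how often each pixel is an extreme of a projection.

```python
# Illustrative PPI-style purity scoring on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((2000, 30))                 # (pixels, bands), e.g. after MNF/PCA
n_skewers = 500

scores = np.zeros(X.shape[0], dtype=int)
for _ in range(n_skewers):
    skewer = rng.standard_normal(X.shape[1])
    proj = X @ skewer
    scores[np.argmax(proj)] += 1           # record the extremes of the projection
    scores[np.argmin(proj)] += 1

candidate_pure_pixels = np.argsort(scores)[::-1][:10]
print(candidate_pure_pixels)
```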
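Similarly, a simplified sketch of the VCA projection step just described (not the authors' implementation): at each iteration the data are projected onto a direction orthogonal to the subspace spanned by the endmembers already found, and the extreme of that projection gives the next endmember.

```python
# Simplified VCA-style endmember extraction on synthetic data.
import numpy as np

def vca_like(X: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """X: (n_pixels, n_bands); returns indices of p candidate endmember pixels."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    E = np.zeros((n_bands, 0))                 # endmember signatures found so far
    indices = []
    for _ in range(p):
        w = rng.standard_normal(n_bands)
        if E.shape[1] > 0:
            # remove the component of w lying in the span of the found endmembers
            w = w - E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = X @ w
        i = int(np.argmax(np.abs(proj)))       # extreme of the projection
        indices.append(i)
        E = np.column_stack([E, X[i]])
    return np.array(indices)

X = np.random.default_rng(3).random((1000, 30))
print(vca_like(X, 3))
```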
Abstract:
Alongside oncological diseases, cardiac diseases, in particular coronary artery disease, are one of the main causes of death in industrialized countries, mainly due to the high incidence of myocardial infarction. One form of diagnosis and evaluation of this condition is radionuclide myocardial perfusion imaging performed by Positron Emission Tomography (PET). Injectable solutions of [15O]-H2O, [82Rb] and [13N]-NH3 are the most widely used in this type of clinical examination. At the Instituto de Ciências Nucleares Aplicadas à Saúde (ICNAS), the existence of a cyclotron has allowed the production of a variety of radiopharmaceuticals, with applications in neurology, oncology and cardiology. Recently, the opportunity arose to start clinical examinations with [13N]-NH3 for the evaluation of myocardial perfusion. This is the context of the present work, since before its clinical use it is necessary to optimize the production and validate the whole process according to Good Radiopharmaceutical Practice standards. After a process optimization phase, the physico-chemical and biological parameters of the [13N]-NH3 preparation were evaluated according to the indications of the European Pharmacopoeia (Ph. Eur.) 8.2. In accordance with the pharmaceutical standards, three consecutive production batches were carried out to validate the production of [13N]-NH3. The results showed a clear and colourless final product, with pH values within the specified limit, that is, between 4.5 and 8.5. The chemical purity of the samples was verified, since, in the colorimetric test, the colour of the [13N]-NH3 solution was not more intense than that of the reference solution. The preparations were identified as [13N]-NH3 through the results obtained by ion chromatography, gamma-ray spectrometry and half-life measurement. Examination of the chromatogram obtained with the solution under test showed that the principal peak had a retention time approximately equal to that of the peak in the chromatogram obtained for the reference solution. In addition, the gamma-ray spectrum showed a peak at 0.511 MeV and an additional one at 1.022 MeV for the gamma photons, characteristic of positron-emitting radionuclides. The half-life remained within the indicated interval, between 9 and 11 minutes. The radiochemical purity of the samples was also verified, with a minimum of 99% of the total radioactivity corresponding to [13N], as well as the radionuclidic purity, with a percentage of impurities below 1%, 2 h after the end of synthesis. The tests carried out to verify sterility and to determine the presence of bacterial endotoxins in the [13N]-NH3 preparations were negative. The results obtained thus contribute to the validation of the method for the production of [13N]-NH3, since they meet the requirements specified in the European standards, indicating a safe product with the quality necessary to be administered to patients for the evaluation of cardiac perfusion by PET.
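As an illustration of the half-life identity test mentioned above (a hedged sketch, not the ICNAS procedure), the half-life can be estimated from two activity readings taken a known time apart and compared with the 9 to 11 minute acceptance window:

```python
# Illustrative half-life estimate from two activity measurements.
import math

def half_life_minutes(a0: float, a1: float, elapsed_min: float) -> float:
    """Half-life implied by activities a0 and a1 measured elapsed_min apart."""
    return elapsed_min * math.log(2.0) / math.log(a0 / a1)

# Example with illustrative readings: the activity halves in about 10 minutes,
# consistent with the expected half-life of nitrogen-13.
print(round(half_life_minutes(100.0, 50.0, 10.0), 2))  # 10.0 min
```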
Abstract:
This paper presents the applicability of a reinforcement learning algorithm based on the application of Bayes' theorem of probability. The proposed reinforcement learning algorithm is an advantageous and indispensable tool for ALBidS (Adaptive Learning strategic Bidding System), a multi-agent system whose purpose is to provide decision support to electricity market negotiating players. ALBidS uses a set of different strategies for providing decision support to market players. These strategies are used according to their probability of success in each different context. The approach proposed in this paper uses a Bayesian network to decide the action most likely to be successful at each time, depending on past events. The performance of the proposed methodology is tested using electricity market simulations in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets). MASCEM provides the means for simulating a realistic electricity market environment, based on real data from electricity market operators.
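A hedged sketch of the underlying idea (not the ALBidS implementation): a Beta-Bernoulli update of each strategy's success probability per context, with the most probable strategy selected at each step. Strategy and context names below are illustrative.

```python
# Illustrative Bayesian (Beta-Bernoulli) tracking of strategy success per context.
from collections import defaultdict

# (successes + 1, failures + 1) per (context, strategy): a uniform Beta(1, 1) prior.
counts = defaultdict(lambda: [1, 1])

def update(context: str, strategy: str, success: bool) -> None:
    counts[(context, strategy)][0 if success else 1] += 1

def best_strategy(context: str, strategies: list) -> str:
    # Posterior mean of the success probability is a / (a + b).
    return max(strategies,
               key=lambda s: counts[(context, s)][0] / sum(counts[(context, s)]))

update("peak-hours", "trend-following", True)
update("peak-hours", "regression", False)
print(best_strategy("peak-hours", ["trend-following", "regression"]))
```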
Abstract:
Exposure to formaldehyde is recognized as one of the most important risk factors present in hospital anatomy and pathology laboratories. In this occupational setting, formaldehyde is used in solution, commonly called formalin. This is a commercial formaldehyde solution, normally diluted to 10%, which is inexpensive and is therefore the one chosen for routine work in anatomy and pathology. The solution is used as a fixative and preservative of biological material, so the anatomic specimens to be processed are previously impregnated with it. Regarding the health effects of formaldehyde, local effects appear to play a more important role than systemic effects, owing to its reactivity and rapid metabolism in the cells of the skin, gastrointestinal tract and lungs. Likewise, the location of lesions corresponds mainly to the areas exposed to the highest doses of this chemical agent; that is, the development of toxic effects depends more on the intensity of the external dose than on the duration of exposure. The most easily detectable effect of formaldehyde on the human body is its transient and reversible irritant action on the mucous membranes of the eyes and upper respiratory tract (nasopharynx and oropharynx), which generally occurs for frequent exposures above 1 ppm. High doses are cytotoxic and can lead to degeneration and necrosis of mucous membranes and epithelia. Regarding carcinogenic effects, the first assessment by the International Agency for Research on Cancer dates from 1981, updated in 1982, 1987, 1995 and 2004, and classified formaldehyde in Group 2A (probably carcinogenic). However, the most recent evaluation, in 2006, places formaldehyde in Group 1 (carcinogenic agent), based on evidence that exposure to this agent is likely to cause nasopharyngeal cancer in humans. The main objective of this study was to characterize occupational exposure to formaldehyde in Portuguese hospital anatomy and pathology laboratories. It was also intended to describe the phenomena of environmental contamination by formaldehyde and to explore possible associations between variables. A sample of 10 hospital anatomy and pathology laboratories was considered; the exposure of the three professional groups was assessed against the two exposure metrics, and the ceiling concentration values were determined for 83 activities. Two distinct environmental assessment methods were applied simultaneously: one method (Method 1) used direct-reading equipment based on Photo Ionization Detection, with an 11.7 eV lamp, while the activity was simultaneously recorded. This method provided data for the ceiling concentration exposure metric. The other method (Method 2) consisted of applying NIOSH method 2541, using low-flow electric sampling pumps with subsequent analytical processing of the samples by gas chromatography. This method, in turn, provided data for the time-weighted average concentration exposure metric.

The measurement strategies of each method and the definition of the exposure groups existing in this occupational setting, namely Pathology Technicians, Pathologists and Assistants, were made possible by the information provided by the observation techniques of the (ergonomic) work analysis. Several independent variables were studied, namely ambient temperature and relative humidity, the formaldehyde solution used, the existing ventilation conditions and the mean number of specimens processed per day in each laboratory. To collect information on these variables, an Observation and Registration Grid was completed during the stay in the laboratories studied. Three environmental contamination indicators were selected as dependent variables, namely the mean value of the concentrations above 0.3 ppm in each laboratory, the time-weighted average concentration obtained for each exposure group, and the Time Regeneration Index of each laboratory. The indicators were calculated and defined from the data obtained by the two environmental assessment methods applied. Based on the approach outlined by the University of Queensland, a methodology for assessing the risk of nasopharyngeal cancer was also applied to the 83 activities studied, in order to define semi-quantitative risk estimation levels. For the Severity level, the information available in the scientific literature defining adverse biological events related to the chemical agent's mode of action and associating them with environmental formaldehyde concentrations was considered. For the Probability level, the information provided by the (ergonomic) work analysis was used, which made it possible to know the frequency with which each of the studied activities was performed. The simultaneous application of the two environmental assessment methods produced distinct, but not contradictory, results regarding the assessment of occupational exposure to formaldehyde. For the activities studied (n=83), about 93% of the values were above the exposure limit value defined for the ceiling concentration (VLE-CM=0.3 ppm). The "macroscopic examination" was the most frequently studied activity and the one with the highest prevalence of results above the limit value (92.8%). The highest mean ceiling concentration (2.04 ppm) was found in the Pathology Technicians exposure group. However, the widest range of results was observed in the Pathologists group (0.21 ppm to 5.02 ppm). Regarding the time-weighted average concentration metric, all values obtained in the 10 laboratories studied for the three exposure groups were below the exposure limit value defined by the Occupational Safety and Health Administration (TLV-TWA=0.75 ppm). A statistically significant association was found between the mean number of specimens processed per laboratory and two of the three environmental contamination indicators used, namely the mean value of the concentrations above 0.3 ppm (p=0.009) and the Time Regeneration Index (p=0.001). No statistically significant association was observed between ambient temperature and any of the environmental contamination indicators used.

Relative humidity showed a statistically significant association only with the time-weighted average concentration indicator of two exposure groups, namely the Pathologists (p=0.02) and the Pathology Technicians (p=0.04). The application of the risk assessment methodology to the 83 activities studied showed that in about two thirds (35%) the risk was classified as (at least) high, and also that 70% of the laboratories had at least one activity classified as high risk. From the application of the two environmental assessment methods and the information obtained for the two exposure metrics, it can be concluded that the most appropriate metric is the ceiling concentration, since it is associated with the chemical agent's mode of action. Moreover, an environmental assessment method such as Method 1, which allows formaldehyde concentrations to be studied while the activity is simultaneously recorded, provides relevant information for preventive intervention, since it identifies the activities with the highest exposure as well as the variables that condition it. Anatomic specimens were the main source of environmental contamination by formaldehyde in this occupational setting. This is of particular interest because the work carried out in this setting, particularly in the specimen reception room, is centred on the processing of anatomic specimens. Since the elimination of formaldehyde is not foreseeable in the short term, given the large number of activities that still involve the use of its commercial solution (formalin), it can be concluded that exposure to this agent in this specific occupational setting is a cause for concern, requiring rapid intervention to minimize exposure and prevent potential health effects in exposed workers.
Abstract:
The use of questionnaires has been recommended for identifying, at a lower cost, individuals at risk for schistosomiasis. In this study, the validity of information obtained by questionnaire in screening for Schistosoma mansoni infection was assessed in four communities in the State of Minas Gerais, Brazil. Explanatory variables were water contact activities, sociodemographic characteristics and previous treatment for schistosomiasis. Of the 677, 1474, 766 and 3290 individuals eligible for stool examination in the communities, 89 to 97% participated in the study. The estimated probability of infection for individuals presenting all the characteristics identified as independently associated with S. mansoni infection varied from 15% in Canabrava to 42% in Belo Horizonte, 48% in Comercinho and 80% in São José do Acácio. Our results do not support the hypothesis that the same questionnaire on risk factors could be used in screening for S. mansoni infection in different communities.
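Screening probabilities of this kind are typically obtained from a multivariable logistic model; assuming that approach (the abstract does not state the model explicitly), the predicted probability for an individual with questionnaire variables x_1, ..., x_k is:

```latex
\Pr(\text{infection} \mid x_1,\dots,x_k)
  = \frac{1}{1 + \exp\!\left[-\left(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k\right)\right]}
```

where the x_i are the characteristics retained as independently associated with infection and the beta_i are the fitted coefficients.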
Abstract:
Anopheles albimanus is one of the main vectors of malaria in Central America and the Caribbean; owing to its importance, there are previous reports of the successful colonization of this species in Latin American countries. Mosquitoes were collected in Aragua State, Venezuela, and colonized in the laboratory using a simple and efficient maintenance method. Based on life table calculations under well-established laboratory conditions, the survival probability was constant and always close to 1 in the immature stages, the net reproductive rate (Ro) was 3.83, the generation time (Tc) was 24.5 days and the intrinsic growth rate (rm) was 0.0558. This is the first report of the colonization of A. albimanus in Venezuela.
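As a consistency check (a standard life-table approximation, not necessarily the authors' exact computation), the intrinsic growth rate can be recovered from the other two reported quantities:

```latex
r_m \approx \frac{\ln R_0}{T_c} = \frac{\ln 3.83}{24.5\ \text{days}} \approx 0.055\ \text{day}^{-1}
```

which agrees closely with the reported value of 0.0558.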
Abstract:
Background: Regarding children aged ≤10 years, only a few international studies have been conducted to determine the prevalence of and risk factors for back pain. Although other studies on older Portuguese children point to a prevalence between 17% and 39%, none exists for this specific age group. Thus, this study was conducted to establish the prevalence of and risk factors for back pain in schoolchildren aged 7–10 years. Methods: A cross-sectional survey among 637 children was conducted. A self-rating questionnaire was used to assess the prevalence and duration of back pain, life habits, school absence, medical treatment and limitation of activities. For posture assessment, photographic records with bio-photogrammetric analysis were used to obtain data on head, acromion and pelvic alignment, horizontal alignment of the scapulae, vertical alignment of the trunk and vertical body alignment. Results: Postural problems were found in 25.4% of the children, especially in the 8- and 9-year-old groups. Back pain occurred in 12.7%, with the highest values among the 7- and 10-year-old children. The probability of back pain increased 7 times when the children presented a history of school absences, 4.3 times when they experienced sleeping difficulties, 4.4 times when school furniture was uncomfortable, 4.7 times if the children perceived an occurrence of parental back pain and 2.5 times when children presented incorrect posture. Conclusions: The combination of school absences, parental pain, sleeping difficulties, inappropriate school furniture and postural deviations in the sagittal and frontal planes seems to support the multifactorial aetiology of back pain.
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Auditing, under the supervision of Dr. Carlos Mendes.
Abstract:
Some of the properties sought in the seismic design of buildings are also considered fundamental to guarantee structural robustness. Moreover, some key concepts are common to both seismic and robustness design. In fact, both analyses consider events with a very small probability of occurrence, and consequently a significant level of damage is admissible. As these are very rare events, in both cases the actions are extremely hard to quantify. The acceptance of limited damage requires a system-based analysis of structures, rather than the element-by-element methodology employed for other load cases. As in robustness analysis, the main objective in seismic design is to guarantee that the structure survives an earthquake without extensive damage. In the case of seismic design, this is achieved by guaranteeing the dissipation of energy through plastic hinges distributed in the structure. For this to be possible, some key properties must be assured, in particular ductility and redundancy. The same properties can be fundamental in robustness design, as a structure can only sustain significant damage if it is capable of redistributing stresses to parts of the structure unaffected by the triggering event. Timber is often used for primary load-bearing elements in single-storey long-span structures for public buildings and arenas, where severe consequences can be expected if one or more of the primary load-bearing elements fail. The structural system used for these structures consists of main frames, secondary elements and bracing elements. The main frames, composed of columns and beams, can be seen as key elements in the system and should be designed with high safety against failure and under strict quality control. The main frames may sometimes be designed with moment-resisting joints between columns and beams. Scenarios where one or more of these key elements fail should be considered, at least for high-consequence buildings. Two alternative strategies may be applied: isolation of collapsing sections and provision of alternate load paths [1]. The first is relatively straightforward to provide by deliberately designing the secondary structural system to be less strong and stiff. Alternatively, the secondary structural system and the bracing system can be designed so that loss of capacity in the main frame does not lead to collapse. A case study has been selected with the aim of assessing the consequences of these two different strategies, in particular under seismic loads.
Abstract:
A diagnosis of human cyclosporiasis is reported in São Paulo, SP, Brazil. Cyclospora cayetanensis was identified in the feces of a patient by a modified Kinyoun staining method, with later sporulation in a 2.5% potassium dichromate solution. This finding, obtained with a simple technique, raises the probability that this parasite is a possible cause of gastrointestinal disturbances in the country. It should be kept in mind that the disease expresses itself mainly among immunocompromised patients, whose number is increasing, especially those with acquired immunodeficiency syndrome (AIDS), which is caused by the human immunodeficiency virus (HIV).
Abstract:
Third-stage larvae (L3) of Angiostrongylus costaricensis were incubated in water at room temperature and at 5 °C, and their mobility was assessed daily for 17 days. Viability was associated with the mobility and position of the L3 and was confirmed by inoculation per os in albino mice. The number of actively moving L3 decreased sharply within 3 to 4 days, but some infective L3 remained at the end of the observation period. A mathematical model estimated 80 days as the time required to reduce the probability of infective larvae to zero. These data do not support the proposition of refrigerating vegetables and raw food as an isolated procedure for the prophylaxis of human abdominal angiostrongylosis.
Abstract:
Proper ventilation of rooms housing electrical technical services, namely transformer substations and generator-set rooms, is extremely important to guarantee the continuity and quality of the service provided, the durability of materials and equipment, and the safety of the installations and their users. The ventilation of such rooms may be natural or mechanical, depending on their characteristics and on the air requirements for ventilation and, where applicable, combustion. The engineers responsible for electrical installation design do not, as a rule, have a very deep knowledge of this subject, and their designs are based on general specifications and methodologies made available by the manufacturers and vendors of the materials and equipment. Designing a ventilation solution for a room housing electrical technical services requires knowledge of all the heat gains inside the space, of the available technical and technological ventilation solutions, and of the sizing methodologies applicable to each situation. Since the electrical design phase is, as a rule, an activity with tight deadlines, certain particular aspects that require investigation and time to develop may be neglected, which can result in designs and bills of quantities that deviate from the ideal solution for the client and may lead to higher investments, both in the construction phase and in the operation phase of the installations. Accordingly, the present work addresses the ventilation of rooms housing technical services, considering the normative and regulatory framework of the installations, the technical and technological solutions available on the market, and the sizing methodologies presented in normative and regulatory documents. It was also intended to develop a software tool to assist in the sizing of ventilation solutions for rooms housing electrical technical services, specifically transformer substations and generator sets, in order to reduce the time normally required by this task and thus make better use of design time. The tool also aims to standardize the solutions produced and to minimize the probability of sizing errors, thereby reducing the likelihood of spending on additional work arising from design errors, saving on materials in the bill of quantities, improving the efficiency of the construction work, reducing costs during operation, and in this way bringing the ventilation design of the electrical technical space closer to the interests of all stakeholders.
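A minimal sketch of the kind of heat-balance calculation such a sizing tool automates (an assumed, simplified method using standard air properties; the values are illustrative):

```python
# Illustrative airflow sizing from the room heat gains: Q = P / (rho * cp * dT).
RHO_AIR = 1.2     # kg/m^3, approximate air density
CP_AIR = 1005.0   # J/(kg*K), approximate specific heat of air

def required_airflow_m3_per_h(heat_gain_w: float, delta_t_k: float) -> float:
    """Ventilation airflow needed to remove heat_gain_w watts while limiting
    the indoor-outdoor temperature rise to delta_t_k kelvin."""
    flow_m3_per_s = heat_gain_w / (RHO_AIR * CP_AIR * delta_t_k)
    return flow_m3_per_s * 3600.0

# Example: 10 kW of transformer losses and a 15 K admissible temperature rise.
print(round(required_airflow_m3_per_h(10_000.0, 15.0)))  # about 1990 m3/h
```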