995 results for Treasury Single Account


Relevance: 20.00%

Abstract:

The knowledge of the anisotropic properties beneath the Iberian Peninsula and northern Morocco has improved dramatically since late 2007 with the analysis of the data provided by the dense TopoIberia broadband seismic network, the increasing number of permanent stations operating in Morocco, Portugal and Spain, and the contribution of smaller-scale/higher-resolution experiments. Results from the first two TopoIberia deployments evidenced a spectacular rotation of the fast polarization direction (FPD) along the Gibraltar Arc, interpreted as evidence of mantle flow deflected around the high-velocity slab beneath the Alboran Sea, and a rather uniform N100°E FPD beneath the central Iberian Variscan Massif, consistent with global mantle flow models taking into account contributions of surface plate motion, density variations and net lithosphere rotation. The results from the last IberArray deployment presented here, covering the northern part of the Iberian Peninsula, also show a rather uniform FPD orientation close to N100°E, confirming the previous interpretation globally relating the anisotropic parameters to the lattice-preferred orientation (LPO) of mantle minerals generated by mantle flow at asthenospheric depths. However, the degree of anisotropy varies significantly, from delay time values of around 0.5 s beneath NW Iberia to values reaching 2.0 s in its NE corner. The anisotropic parameters retrieved from single events providing high-quality data also show significant differences for stations located in the Variscan units of NW Iberia, suggesting that the region includes multiple anisotropic layers or complex anisotropy systems. These results complete the map of the anisotropic properties of the westernmost Mediterranean region, which can now be considered one of the best-constrained regions worldwide, with more than 300 sites investigated over an area extending from the Bay of Biscay to the Sahara platform.
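For context, the delay times quoted above relate to the anisotropic layer through the standard single-layer splitting relation (a textbook approximation, not a formula from the paper):

\[ \delta t = L\left(\frac{1}{v_{\mathrm{slow}}} - \frac{1}{v_{\mathrm{fast}}}\right) \approx \frac{L\,\Delta v_S}{\langle v_S\rangle^{2}}, \]

where L is the path length through the anisotropic layer and \(\Delta v_S\) the difference between the fast and slow shear-wave velocities. Assuming a typical 4% anisotropy and \(\langle v_S\rangle \approx 4.5\) km/s, a 1 s delay corresponds to roughly 110 km of anisotropic mantle, so the observed 0.5–2.0 s range implies large lateral variations in layer thickness, in intrinsic anisotropy, or both.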

Relevance: 20.00%

Abstract:

The single-lap joint is the most commonly used adhesive joint configuration, although it endures significant bending due to the non-collinear load path, which negatively affects its load-bearing capabilities. Material or geometric changes are widely documented in the literature as ways to reduce this handicap, acting by reducing peak peel and shear stresses or by altering the failure mechanism through local modifications. In this work, the effect of using adherends of different thickness on the tensile strength of single-lap joints, bonded with a ductile and a brittle adhesive, was evaluated numerically and experimentally. The joints were tested under tension for different combinations of adherend thickness. The effect of the adherend thickness mismatch on the stress distributions was also investigated by Finite Elements (FE), which explained the experimental results and the strength predictions of the joints. The numerical study was performed by FE together with Cohesive Zone Modelling (CZM), which allowed the entire fracture process to be characterized. For this purpose, an FE analysis was performed in ABAQUS® considering geometric non-linearities. In the end, a detailed comparative evaluation of unbalanced joints, commonly used in engineering applications, is presented to give an understanding of how changes in the thickness of bonded structures can influence joint performance.
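For readers unfamiliar with CZM, a commonly used form is the bilinear (triangular) traction-separation law (a generic sketch; the abstract does not state which law the authors adopted), in which the traction t grows linearly with the separation δ up to the local strength and then softens linearly to zero:

\[ t(\delta) = \begin{cases} K\,\delta, & \delta \le \delta_0 \\ t_0\,\dfrac{\delta_f - \delta}{\delta_f - \delta_0}, & \delta_0 < \delta < \delta_f \\ 0, & \delta \ge \delta_f \end{cases} \qquad \tfrac{1}{2}\,t_0\,\delta_f = G_c, \]

where K is the initial stiffness, \(t_0 = K\delta_0\) the cohesive strength, \(\delta_f\) the failure separation and \(G_c\) the fracture toughness (the area under the curve). Ductile and brittle adhesives differ mainly in \(G_c\) and in the extent of the softening branch, which is why CZM can capture the different failure behaviours observed for the two adhesives.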

Relevance: 20.00%

Abstract:

Bonded joints are gaining importance in many fields of manufacturing owing to their significant advantages over traditional joining methods. The single-lap joint (SLJ) is the most commonly used configuration. Material or geometric changes in the SLJ reduce peak peel and shear stresses at the damage initiation sites. In this work, the effect of adherend recessing at the overlap edges on the tensile strength of SLJ bonded with a brittle adhesive was studied experimentally and numerically. The recess dimensions (length and depth) were optimized for different values of overlap length (LO), allowing the joint strength to be maximized through the reduction of peak stresses at the overlap edges. The effect of recessing was also investigated by a finite element (FE) analysis and cohesive zone modelling (CZM), which allowed the entire fracture process to be characterized and provided joint strength predictions. For this purpose, a static FE analysis was performed in ABAQUS® considering geometric nonlinearities. In the end, the experimental and FE results revealed the accuracy of the FE analysis in predicting the strength and also provided some design principles for the strength improvement of SLJ using a relatively simple and straightforward technique.
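As an illustration of the optimization described above, a minimal sketch of a grid search over recess geometry; the strength function is a made-up smooth surrogate standing in for the CZM-based FE prediction, and all names and candidate ranges are hypothetical:

```python
import itertools

def predicted_strength(recess_length, recess_depth, overlap_length):
    # Illustrative surrogate with an interior optimum; in the study this value
    # came from a CZM-based FE simulation, not from a closed-form expression.
    penalty = (recess_length - 0.1 * overlap_length) ** 2 + (recess_depth - 0.5) ** 2
    return 10.0 * overlap_length - 50.0 * penalty

def optimize_recess(overlap_length, lengths, depths):
    """Exhaustive search over candidate recess geometries (dimensions in mm)."""
    return max(itertools.product(lengths, depths),
               key=lambda ld: predicted_strength(ld[0], ld[1], overlap_length))

# Repeat the search for several overlap lengths LO, as done in the paper.
lengths = [0.5 * i for i in range(1, 11)]  # candidate recess lengths: 0.5-5.0 mm
depths = [0.1 * i for i in range(1, 11)]   # candidate recess depths: 0.1-1.0 mm
for LO in (12.5, 25.0, 50.0):
    print(LO, optimize_recess(LO, lengths, depths))
```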

Relevance: 20.00%

Abstract:

Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered in them. In this study, single-side damage in a free-free beam is analysed with three different finite element models, namely solid, shell and beam models, to assess their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulations show that all three models perform well in extracting natural frequencies; in the intact state the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies captured from the solid model are more sensitive to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is a comparison of three finite element models against experimental data as well as analytical solutions; the finite element results show relatively good performance.
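The modal assurance criterion (MAC) mentioned above has a standard definition, MAC(i, j) = |φ_i^T φ_j|² / ((φ_i^T φ_i)(φ_j^T φ_j)); a minimal NumPy sketch (illustrative, not the authors' code) for comparing two sets of mode shapes:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape sets.

    phi_a, phi_b: (n_dof, n_modes) arrays, one mode shape per column.
    Returns the MAC matrix; entries near 1 indicate well-correlated shapes.
    """
    num = np.abs(phi_a.conj().T @ phi_b) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_b) ** 2, axis=0))
    return num / den

# Example: synthetic "simulated" vs noisy "experimental" shapes on 50 points.
x = np.linspace(0.0, 1.0, 50)
phi_sim = np.column_stack([np.cos(k * np.pi * x) for k in (1, 2, 3)])
phi_exp = phi_sim + 0.05 * np.random.default_rng(0).standard_normal(phi_sim.shape)
print(np.round(mac(phi_sim, phi_exp), 3))  # close to the identity matrix
```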

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7].

Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions, although some endmembers may still be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35].
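In the standard formulation (consistent with the description above; the notation is ours, not the chapter's), each observed spectral vector r over L bands is modelled as

\[ \mathbf{r} = \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n}, \qquad \alpha_i \ge 0, \quad \sum_{i=1}^{p} \alpha_i = 1, \]

where the columns of \(\mathbf{M} = [\mathbf{m}_1, \dots, \mathbf{m}_p]\) are the p endmember signatures, \(\boldsymbol{\alpha}\) the abundance fractions and \(\mathbf{n}\) noise; the nonnegativity and sum-to-one constraints are precisely what confine noise-free data to a simplex whose vertices are the \(\mathbf{m}_i\).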
The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of \(n\) data points in a \(d\)-dimensional space with computational complexity \(O(n^{\lfloor d/2 \rfloor + 1})\), where \(\lfloor x \rfloor\) is the largest integer less than or equal to \(x\) and \(n\) is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in \(p\) spectral dimensions, the \(p\)-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smallest convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) to an exemplar is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The noise estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers have been found. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
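A minimal NumPy sketch of the orthogonal-projection idea described above (illustrative only, not the published VCA implementation; it assumes noise-free data containing at least one pure pixel per endmember):

```python
import numpy as np

def extract_endmembers(R, p, seed=0):
    """Greedy, VCA-style vertex extraction on R (L bands x N pixels).

    Each step projects all pixels onto a random direction orthogonal to the
    endmembers found so far and takes the extreme projection as the next one.
    """
    rng = np.random.default_rng(seed)
    L, _ = R.shape
    E = np.zeros((L, p))
    idx = []
    for k in range(p):
        if k == 0:
            P = np.eye(L)                            # nothing found yet
        else:
            Ek = E[:, :k]
            P = np.eye(L) - Ek @ np.linalg.pinv(Ek)  # complement of span{found}
        w = P @ rng.standard_normal(L)               # direction orthogonal to found set
        j = int(np.argmax(np.abs(w @ R)))            # extreme pixel = new vertex
        idx.append(j)
        E[:, k] = R[:, j]
    return E, idx

# Toy check: 3 endmembers, 500 mixed pixels, plus one pure pixel of each
# endmember placed first (columns 0-2), so those indices should be recovered.
rng = np.random.default_rng(1)
M = rng.random((10, 3))                                       # signatures
A = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), 500).T])  # abundances
E, idx = extract_endmembers(M @ A, p=3)
print(sorted(idx))  # -> [0, 1, 2]
```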

Relevance: 20.00%

Abstract:

Dissertation presented to obtain the degree of Doctor in Biotechnology from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. This dissertation was prepared under the bilateral advanced-education (ERASMUS) agreement between the Universidade de Vigo and the Universidade Nova de Lisboa.

Relevance: 20.00%

Abstract:

A single, practical method to stain Malassezia furfur and Corynebacterium minutissimum in scales from lesions is described. The scales are collected by pressing small pieces of scotch tape (about 4 cm long and 2 cm wide) onto the lesions; on withdrawal, the furfuraceous scales remain on the adhesive side. These pieces are then immersed for a few minutes in lactophenol-cotton blue stain. After the stain has been absorbed, the scales are washed in running water to remove the excess stain, dried with filter paper, dehydrated by passage through two bottles of absolute alcohol, and then placed in xylene in a centrifugation tube. The xylene dissolves the scotch tape glue and the scales fall free in the tube. After centrifugation and decantation, the scales concentrated at the bottom of the tube are collected with a platinum loop, placed in Canada balsam on a microscope slide and covered with a cover slip. The preparations are then ready for microscopic examination. Other stains may also be used instead of lactophenol-cotton blue. This method is simple and easily performed, offers good conditions for studying these organisms, and is useful for the diagnosis of the diseases they cause.

Relevance: 20.00%

Abstract:

This work was developed at the company Masterprojetos – soluções integradas, through the preparation and development of the structural design of a five-storey building, one storey of which is below ground. The building comprises a multi-family housing block and a block of single-family housing and commerce. Both the building and the company are located in Peso da Régua. The work can be divided into two parts. The first part describes the duties performed and puts the working methodology in context. It also describes the design tool Cypecad, which there was no opportunity to learn during my academic studies and whose use was developed in a self-taught manner, and mentions some other software available on the market for the same field of activity. The second part describes the assumptions adopted in the preliminary design of the building, as well as the elements used to formulate the calculation model. Some analytical checks are also presented to validate the model adopted for the design, and a comparison is made between the solutions obtained with the Cypecad software and by analytical calculation for a solid slab. The conclusions are presented at the end; among them stand out the care required when using automatic calculation software, regarding both the input of data and the solutions obtained, and the time savings that this type of tool provides. Also worth mentioning are the opportunities the internship provided, such as visits to construction sites, quantity surveying and budgeting activities, and the follow-up of designs in other specialities such as building services networks, thermal and acoustic design, and ITED (telecommunications infrastructures in buildings).

Relevance: 20.00%

Abstract:

Salmonella is a microorganism responsible for a large share of foodborne diseases and can endanger public health in the contaminated area. Rapid, efficient and highly sensitive detection is therefore extremely important, and it is a rapidly developing field targeted by multiple studies in the current scientific community. A potentiometric method for the detection of Salmonella was developed with ion-selective electrodes (ISEs) built in the laboratory from micropipette tips, silver wires and sensing membranes of optimized composition. The indicator electrode chosen was a cadmium-selective ISE, to reduce the probability of interference in the method, given the low abundance of cadmium in food samples. Sodium-selective electrodes and single- and double-junction Ag/AgCl electrodes were also built and characterized for use as reference electrodes. Additionally, the operating conditions for the potentiometric analysis were optimized, namely the reference electrode used, the conditioning of the electrodes, the effect of pH and the volume of the sample solution. The ability of polymeric-membrane ISEs to perform readings in very small volumes, with detection limits in the micromolar range, was integrated into a non-competitive, sandwich-type ELISA assay using a primary antibody bound to Fe@Au nanoparticles, which allows the antibody-antigen complexes formed to be separated from the remaining components at each step of the assay by simply applying a magnetic field. The secondary antibody was labelled with CdS nanocrystals, which are quite stable and easily converted into free Cd2+, enabling the potentiometric reading. Several hydrogen peroxide concentrations and the effect of light were tested to optimize the dissolution of the CdS. The method allowed calibration curves to be traced with Salmonella suspensions incubated in PBS (pH 4.4), with detection limits of 1100 CFU/mL and 20 CFU/mL for sample volumes of 10 µL and 100 µL, respectively, over a linearity range of 10 to 10^8 CFU/mL. The method was applied to a bovine milk sample. The mean recovery obtained was 93.7% ± 2.8 (mean ± standard deviation), based on the two recovery assays performed (with two replicates each), using a sample volume of 100 µL and incubated Salmonella concentrations of 100 and 1000 CFU/mL.
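For context, the link between the released Cd2+ and the measured signal is the Nernstian response of the ion-selective electrode (a standard relation, assumed here rather than quoted from the work):

\[ E = E^{0} + \frac{RT}{zF}\ln a_{\mathrm{Cd}^{2+}} \approx E^{0} + 29.6\ \mathrm{mV}\cdot\log_{10} a_{\mathrm{Cd}^{2+}} \qquad (z = 2,\ T = 25\,^{\circ}\mathrm{C}), \]

so each tenfold increase in Cd2+ activity, and hence in dissolved CdS label, shifts the potential by about 29.6 mV; this logarithmic response is what makes calibration over many decades of CFU/mL feasible.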

Relevance: 20.00%

Abstract:

ABSTRACT - Despite improvements in healthcare interventions, the incidence of adverse events and other patient safety problems is a major contributor to the global burden of disease and a Public Health concern. In recent years there have been some successful individual and institutional efforts to address patient safety issues in Portugal, although these efforts have been fragmented or focused on specific narrow areas. Long-term, system-wide improvement has remained elusive; above all, efforts to improve patient safety in Portugal must evaluate not only the efficacy of a change but also what was effective in implementing the change. Clearly, patient safety problems result from various combinations of individual, team, organizational, system and patient factors. A systemic and integrated approach to promoting patient safety must acknowledge and strive to understand the complexity of work systems and processes in health care, including the interactions between people, technology and the environment: safety failures cannot productively be attributed to a single human error. Our objective with this paper is to provide a brief overview of the status quo of patient safety in Portugal, highlighting key aspects that should be taken into account in the design of a strategy for improving patient safety. With these key aspects in mind, policy makers and implementers can move forward and make better decisions about which changes should be made and how the changes needed to improve patient safety should be implemented. The contribution of colleagues who are international leaders in healthcare quality and patient safety may also foster the more innovative research methods needed to create the knowledge that promotes successful, less costly change.---- ---------------------- RESUMO – Patient safety issues, and in particular the occurrence of adverse events, have for some time been a growing concern for healthcare organizations, policy makers, health professionals, and patients/users and their families, and are therefore considered a Public Health problem that urgently needs a response. In Portugal, the efforts made in recent years to address patient safety have been based mostly on isolated initiatives. Because these initiatives are not integrated into an explicit strategy of regional or national scope, their results are piecemeal and have little visibility. In parallel, the (long-term) improvement in the quality of healthcare resulting from these initiatives has been sparse, and its evaluation has not always been made against criteria of effectiveness and efficiency. Patient safety results from the interaction of several factors related, on the one hand, to the patient and, on the other, to care delivery, involving elements of an individual nature (active failures) and of an organizational/structural nature (latent failures). Given the multifactorial origin of patient safety problems/failures, any approach considered must be systemic and integrated. At the same time, such approaches must encompass an understanding of the complexity of healthcare systems and care delivery processes and of their interdependencies (involving individual, technological and environmental aspects).
This work aims to reflect on the state of the art of patient safety in Portugal, highlighting the key elements considered decisive for a strategy of action in this domain. With these elements, those responsible for health governance can weigh the aspects they consider decisive for a more effective patient safety policy. The contribution of four colleagues internationally recognized as leaders in the field of healthcare quality and patient safety certainly constitutes a unique opportunity to identify and discuss some of the main challenges, threats and opportunities.

Relevance: 20.00%

Abstract:

Selenium-modified ruthenium electrocatalysts supported on carbon black were synthesized using NaBH4 reduction of the metal precursor. The prepared Ru/C electrocatalysts showed high dispersion and a very small average particle size. These Ru/C electrocatalysts were subsequently modified with Se following two procedures: (a) the preformed Ru/carbon catalyst was mixed with SeO2 in xylene and reduced in H2, and (b) the Ru metal precursor was mixed with SeO2 and then reduced with NaBH4. The XRD patterns indicate that a pyrite-type structure was obtained at higher annealing temperatures, regardless of the Ru:Se molar ratio used in the preparation step. A pyrite-type structure also emerged in samples that were not calcined; however, in this case the pyrite-type structure was only prominent for samples with higher Ru:Se ratios. The characterization of the RuSe/C electrocatalysts suggested that the Se in non-calcined samples was present mainly as an amorphous skin. A preliminary study of activity toward the oxygen reduction reaction (ORR), using electrocatalysts with a Ru:Se ratio of 1:0.7, indicated that annealing after modification with Se had a detrimental effect on their activity. This result could be related to the increased particle size of crystalline RuSe2 in heat-treated samples. The higher activity of non-annealed RuSe/C catalysts could also result from a structure containing an amorphous Se skin on the Ru crystal. The electrode obtained using non-calcined RuSe showed very promising performance, with a slightly lower activity and higher overpotential in comparison with a commercial Pt/C electrode. Single-wall carbon nanohorns (SWNH) were considered for application as ORR electrocatalyst supports. The SWNH were characterized with regard to their tolerance of strongly catalyzed corrosion conditions; tests indicated that SWNH suffer a three times higher electrochemical surface area (ESA) loss than carbon black or commercial Pt electrodes.

Relevance: 20.00%

Abstract:

Microcystin-LR (MC-LR) is a dangerous toxin found in environmental waters, typically quantified by high-performance liquid chromatography and/or enzyme-linked immunosorbent assays. Quick, low-cost, on-site analysis is thus required to ensure human safety and to enable wide screening programs. This work proposes label-free potentiometric sensors made of solid-contact electrodes coated with a surface-imprinted polymer formed on multi-walled carbon nanotubes (CNTs) incorporated in a polyvinyl chloride membrane. The imprinting effect was checked against non-imprinted materials. The MC-LR-sensitive sensors were evaluated, characterized and applied successfully to spiked environmental waters. The proposed method offers the advantages of low cost, portability, easy operation and suitability for adaptation to flow methods.

Relevance: 20.00%

Abstract:

Malaria-affected regions of the Amazon basin are characterized by difficult access and poor patient compliance with treatment. To assess the schizonticidal efficacy of chloroquine in a single dose of 600 mg, the authors conducted a double-blind, placebo-controlled trial in 132 outpatients with vivax malaria. Patients were distributed into two groups: group CPLA was given chloroquine 600 mg (single dose) on the first day of treatment and two doses of placebo on the second and third days; group CHLO was given chloroquine 600 mg on the first day and 450 mg on the second and third days. The geometric means of the parasite density during follow-up were similar in both groups, and no difference in parasitological cure was observed between the two groups (p = 0.442). Treatment with a single dose of chloroquine was clinically and parasitologically effective, suggesting that its restricted use could be indicated in remote areas of the Brazilian Amazon region; nevertheless, the inadequate response of three patients indicates the need for further studies.