980 results for Document imaging system
Abstract:
As document management systems develop year by year, it may become appropriate for a company to replace its current document management system with a new one. In such a situation, the benefit of the new system is judged to outweigh the costs incurred by the change. A change in the company's internal structure or operating models may also justify switching document management systems. The greatest challenges in migrating from one document management system to another lay in converting the attributes accompanying each document to conform to the new system. Converting the attributes maximizes the benefit gained from the change of system, but it also lengthens the migration time. To shorten the migration time, conversion tables had to be created according to which the attributes of technical documents are converted into the new format. These conversion tables are interpreted mainly by programs developed for the conversion, but in difficult cases the conversion must be handled by a person. The main goal of the migration is nevertheless the preservation, or even the enrichment, of information, even if this slows the work down. In a growing industrial company, technical documents are revised at a fast pace, and because documents are usually also revised by some external party, document traffic is heavy outside the company's document management system as well. Documents may move between two networked document management systems, or they may have some external storage location during revision. The company must therefore also take into account document management outside its own document management system.
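The conversion-table mechanism described above can be sketched as a simple lookup that maps old attribute values to the new system's schema and routes unmapped values to a manual queue. All attribute names, codes and table contents below are hypothetical placeholders, not the company's actual schema.

```python
# Sketch of the conversion-table idea: each document's attributes are
# mapped via a lookup table; values with no mapping are queued for a
# human decision, as the text describes for difficult cases.
CONVERSION_TABLE = {
    "doc_type": {"PIIR": "DRAWING", "SPEK": "SPECIFICATION"},
    "status": {"HYV": "APPROVED", "LUONNOS": "DRAFT"},
}

def convert_attributes(old_attrs):
    """Return (converted, manual_queue) for one document's attributes."""
    converted, manual_queue = {}, []
    for key, value in old_attrs.items():
        mapping = CONVERSION_TABLE.get(key)
        if mapping is None or value not in mapping:
            manual_queue.append((key, value))  # needs human handling
        else:
            converted[key] = mapping[value]
    return converted, manual_queue

converted, manual = convert_attributes({"doc_type": "PIIR", "status": "X"})
```

Routing unknown values to a queue rather than failing is one way to keep the automated bulk of the migration fast while preserving information, as the abstract emphasizes.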
Abstract:
In recent years correlative microscopy, combining the power and advantages of different imaging systems, e.g., light, electrons, X-ray, NMR, etc., has become an important tool for biomedical research. Among all the possible combinations of techniques, light and electron microscopy have made an especially big step forward and are being implemented in more and more research labs. Electron microscopy profits from high spatial resolution, direct recognition of the cellular ultrastructure and identification of the organelles. It has, however, two severe limitations: the restricted field of view and the fact that no live imaging can be done. Light microscopy, on the other hand, has the advantage of live imaging, following a fluorescently tagged molecule in real time, and at lower magnifications the large field of view facilitates the identification and location of sparse individual cells in a large context, e.g., tissue. The combination of these two imaging techniques appears to be a valuable approach to dissect biological events at a submicrometer level. Light microscopy can be used to follow a labelled protein of interest, or a visible organelle such as mitochondria, over time; the sample is then fixed and exactly the same region is investigated by electron microscopy. The time resolution depends on the speed of penetration and fixation when chemical fixatives are used, and on the reaction time of the operator for cryo-fixation. Light microscopy can also be used to identify cells of interest, e.g., a special cell type in tissue or cells that have been modified by either transfection or RNAi, in a large population of non-modified cells. A further application is to find fluorescence labels in cells on a large section to reduce searching time in the electron microscope.
Multiple fluorescence labelling of a series of sections can be correlated with the ultrastructure of the individual sections to obtain 3D information on the distribution of the marked proteins: array tomography. More and more effort is put into either converting a fluorescence label into an electron-dense product or preserving the fluorescence throughout preparation for electron microscopy. Here, we review successful protocols and, where possible, try to extract common features to better understand the importance of the individual steps in the preparation. Furthermore, new instruments and software intended to ease correlative light and electron microscopy are discussed. Last but not least, we detail the approach we have chosen for correlative microscopy.
Abstract:
The purpose of this Master's thesis was to study and identify areas for improvement in the target company's operating model for development investment projects. The aim was to examine current investment practice and to create the preconditions for development investment proposals made on different production lines to jointly support the company's future business outlook and strategy. A further aim was to indirectly increase the return on invested capital by reducing costs through careful preparation already in the early phase of an investment project's development; a project execution model was identified as the tool for this. The research methods were quantitative and qualitative data collection, of which interviews with senior and middle management, together with the author's own experience of the company's practices from 2004-2008, played the main role. As an output of the work, guidelines and an execution model on the subject were drawn up as part of the company's quality system. The model was placed as a template in the document management software. As a result, an execution model for development investment projects is presented such that it supports the company's strategy within a framework of sustainable development. The goals set for the work were achieved within the intended time frame.
Abstract:
The effects of pulp processing on softwood fiber properties strongly influence the properties of wet and dry paper webs. Pulp strength delivery studies have provided observations that much of the strength potential of long fibered pulp is lost during brown stock fiber line operations where the pulp is merely washed and transferred to the subsequent processing stages. The objective of this work was to study the intrinsic mechanisms which may cause fiber damage in the different unit operations of modern softwood brown stock processing. The work was conducted by studying the effects of industrial machinery on pulp properties, with some actions of unit operations simulated in laboratory scale devices under controlled conditions. An optical imaging system was created and used to study the orientation of fibers in the internal flows during pulp fluidization in mixers and the passage of fibers through the screen openings during screening. The qualitative changes in fibers were evaluated with existing and standardized techniques. The results showed that each process stage has its characteristic effects on fiber properties: Pulp washing and mat formation in displacement washers introduced fiber deformations, especially if the fibers entering the stage were intact, but did not decrease the pulp strength properties. However, storage chests and pulp transfer after displacement washers contributed to strength deterioration. Pulp screening proved to be quite gentle, having the potential of slightly evening out fiber deformations in very deformed pulps and, vice versa, inflicting a marginal increase in the deformation indices if the fibers were previously intact. Pulp mixing in fluidizing industrial mixers did not have detrimental effects on pulp strength and had the potential of slightly evening out the deformations, provided that the intensity of fluidization was high enough to allow fiber orientation with the flow and that the time of mixing was short.
The chemical and mechanical actions of oxygen delignification had two distinct effects on pulp properties: chemical treatment clearly reduced pulp strength with and without mechanical treatment, and the mechanical actions of process machinery introduced more conformability to pulp fibers but did not clearly contribute to a further decrease in pulp strength. The chemical composition of fibers entering the oxygen stage was also found to affect the susceptibility of fibers to damage during oxygen delignification. Fibers with the smallest content of xylan were found to be more prone to irreversible deformations, accompanied by a lower tensile strength of the pulp. Fibers poor in glucomannan exhibited a lower fiber strength while wet after oxygen delignification as compared to the reference pulp. Pulps with the smallest lignin content, on the other hand, exhibited improved strength properties as compared to the references.
Abstract:
It has been known since the 1970s that the laser beam is suitable for processing paper materials. In this thesis, the term paper materials covers all wood-fibre based materials, such as dried pulp, copy paper, newspaper, cardboard, corrugated board, tissue paper, etc. Accordingly, laser processing in this thesis means all laser treatments resulting in material removal, such as cutting, partial cutting, marking, creasing, perforation, etc., that can be used to process paper materials. Laser technology provides many advantages for the processing of paper materials: a non-contact method, freedom of processing geometry, reliable technology for non-stop production, etc. The packaging industry in particular is a very promising area for laser processing applications. However, there were only a few industrial laser processing applications worldwide even at the beginning of the 2010s. One reason for the small-scale use of lasers in paper material manufacturing is a shortage of published research and scientific articles. Another problem restraining the use of lasers for processing paper materials is the colouration of the material, i.e., the yellowish and/or greyish colour of the cut edge appearing during or after cutting. These are the main reasons why the topic of this thesis concerns the characterization of the interaction of a laser beam and paper materials. The study was carried out in the Laboratory of Laser Processing at Lappeenranta University of Technology (Finland). The laser used was a TRUMPF TLF 2700 carbon dioxide laser producing a beam with a wavelength of 10.6 μm and a power range of 190-2500 W (laser power on the work piece). The interaction of the laser beam and paper material was studied by treating dried kraft pulp (grammage 67 g m-2) with different laser power levels, focal plane position settings and interaction times. The interaction between the laser beam and the dried kraft pulp was monitored with several devices: a spectrometer, a pyrometer and an active illumination imaging system.
In this way it was possible to create an input and output parameter diagram and to study the effects of the input and output parameters. Once the interaction phenomena are understood, process development can be carried out and even new innovations developed. Filling the gap in knowledge of the interaction phenomena can pave the way for wider use of laser technology in the paper making and converting industry. It was concluded in this thesis that the interaction of a laser beam and paper material has two mechanisms that depend on the focal plane position. In the experimental set-up used, the assumed interaction mechanism B appears in the average focal plane position range of 3.4 mm to 2.4 mm, and the assumed interaction mechanism A in the range of 0.4 mm to -0.6 mm; the focal plane position of 1.4 mm represents the midzone between these two mechanisms. During the interaction, holes are formed gradually: first a small hole is formed in the interaction area at the centre of the laser beam cross-section, after which, as a function of interaction time, the hole expands until the interaction between the laser beam and the dried kraft pulp ends. Image analysis shows that at the beginning of the interaction small holes of very good quality are formed, and that black colour and a heat affected zone appear as a function of interaction time. This reveals that there are still different interaction phases within interaction mechanisms A and B. These interaction phases appear as a function of time and also as a function of the peak intensity of the laser beam. The limit peak intensity is the value that divides interaction mechanisms A and B from one-phase into dual-phase interaction.
Thus all peak intensity values under the limit peak intensity belong to MAOM (interaction mechanism A one-phase mode) or MBOM (interaction mechanism B one-phase mode), and values over it belong to MADM (interaction mechanism A dual-phase mode) or MBDM (interaction mechanism B dual-phase mode). The decomposition process of cellulose between 380-500°C is the evolution of hydrocarbons: the long cellulose molecule is split into smaller volatile hydrocarbons in this temperature range. As the temperature increases, the decomposition process of the cellulose molecule changes. In the range of 700-900°C, the cellulose molecule is mainly decomposed into H2 gas, which is why this range is called the evolution of hydrogen. Interaction in this range starts (as in the MAOM and MBOM ranges) when a small good-quality hole is formed. This is due to the "direct evaporation" of pulp via the evolution-of-hydrogen decomposition process, and it can be seen in the spectrometer as a high-intensity peak of yellow light (in the range of 588-589 nm), which corresponds to a temperature of ~1750°C. The pyrometer does not detect this high-intensity peak, since it cannot follow the physical phase change from solid kraft pulp to gaseous compounds. As the interaction between the laser beam and the dried kraft pulp continues, the hypothesis is that three auto-ignition processes occur. The auto-ignition temperature of a substance is the lowest temperature at which it spontaneously ignites in a normal atmosphere without an external source of ignition, such as a flame or spark. The three auto-ignition processes appear in the MADM and MBDM ranges, namely: 1. the auto-ignition temperature of hydrogen (H2) is 500°C, 2. the auto-ignition temperature of carbon monoxide (CO) is 609°C, and 3. the auto-ignition temperature of carbon (C) is 700°C. These three auto-ignition processes lead to the formation of a plasma plume with strong emission of radiation in the visible range.
The formation of this plasma plume can be seen as an increase of intensity in the wavelength range of ~475-652 nm. The pyrometer shows the maximum temperature just after this ignition. The plasma plume is assumed to scatter the laser beam so that it interacts with a larger area of the dried kraft pulp than the actual beam cross-section, which also reduces the peak intensity. The results thus show that the presumably scattered, low-peak-intensity light interacts with a large area of the hole edges, and because of the low peak intensity this interaction happens at a low temperature. The interaction between the laser beam and the dried kraft pulp therefore turns from the evolution of hydrogen to the evolution of hydrocarbons, which leads to the black colour of the hole edges.
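The mode classification described above, where the focal plane position selects interaction mechanism A or B and the limit peak intensity separates one-phase from dual-phase modes, can be sketched as a small decision function. The focal-plane ranges are those reported in the text; the intensity values in the example calls are arbitrary placeholders, since no numeric limit peak intensity is given here.

```python
# Minimal sketch of the MAOM/MBOM/MADM/MBDM classification from the
# abstract. Focal plane ranges (mm) follow the text's experimental
# set-up; limit_intensity is whatever value the experiment determined.
def classify_mode(focal_plane_mm, peak_intensity, limit_intensity):
    if 2.4 <= focal_plane_mm <= 3.4:
        mechanism = "B"
    elif -0.6 <= focal_plane_mm <= 0.4:
        mechanism = "A"
    else:
        return "midzone or outside studied range"
    # Under the limit -> one-phase mode (OM), over it -> dual-phase (DM).
    phase = "OM" if peak_intensity <= limit_intensity else "DM"
    return f"M{mechanism}{phase}"

mode_b = classify_mode(3.0, 1e5, 2e5)   # mechanism B, one-phase
mode_a = classify_mode(0.0, 3e5, 2e5)   # mechanism A, dual-phase
```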
Abstract:
Endometriosis is a complex and multifactorial disease. Chromosomal imbalance screening in endometriotic tissue can be used to detect hot-spot regions in the search for a possible genetic marker for endometriosis. The objective of the present study was to detect chromosomal imbalances by comparative genomic hybridization (CGH) in ectopic tissue samples from ovarian endometriomas and eutopic tissue from the same patients. We evaluated 10 ovarian endometriotic tissues and 10 eutopic endometrial tissues by metaphase CGH. CGH was prepared with normal and test DNA enzymatically digested, ligated to adaptors and amplified by PCR. A second PCR was performed for DNA labeling. Equal amounts of both normal and test-labeled DNA were hybridized in human normal metaphases. The Isis FISH Imaging System V 5.0 software was used for chromosome analysis. In both eutopic and ectopic groups, 4/10 samples presented chromosomal alterations, mainly chromosomal gains. CGH identified 11q12.3-q13.1, 17p11.1-p12, 17q25.3-qter, and 19p as critical regions. Genomic imbalances in 11q, 17p, 17q, and 19p were detected in normal eutopic and/or ectopic endometrium from women with ovarian endometriosis. These regions contain genes such as POLR2G, MXRA7 and UBA52 involved in biological processes that may lead to the establishment and maintenance of endometriotic implants. This genomic imbalance may affect genes in which dysregulation impacts both eutopic and ectopic endometrium.
Abstract:
We investigated whether Ca2+/calmodulin-dependent kinase II (CaMKII) and calcineurin (CaN) are involved in myocardial hypertrophy induced by tumor necrosis factor α (TNF-α). The cardiomyocytes of neonatal Wistar rats (1-2 days old) were cultured and stimulated by TNF-α (100 μg/L), and Ca2+ signal transduction was blocked by several antagonists, including BAPTA (4 µM), KN-93 (0.2 µM) and cyclosporin A (CsA, 0.2 µM). Protein content, protein synthesis, cardiomyocyte volumes, [Ca2+]i transients, CaMKIIδB and CaN were evaluated by the Lowry method, [³H]-leucine incorporation, a computerized image analysis system, a Till imaging system, and Western blot analysis, respectively. TNF-α induced a significant increase in protein content in a dose-dependent manner from 10 µg/L (53.56 µg protein/well) to 100 μg/L (72.18 µg protein/well), and in a time-dependent manner from 12 h (37.42 µg protein/well) to 72 h (42.81 µg protein/well). TNF-α (100 μg/L) significantly increased the amplitude of spontaneous [Ca2+]i transients, the total protein content, cell size, and [³H]-leucine incorporation in cultured cardiomyocytes, which was abolished by 4 µM BAPTA, an intracellular Ca2+ chelator. The increases in protein content, cell size and [³H]-leucine incorporation were abolished by 0.2 µM KN-93 or 0.2 µM CsA. TNF-α increased the expression of CaMKIIδB by 35.21% and that of CaN by 22.22% compared to control. These effects were abolished by 4 µM BAPTA, which itself had no effect. These results suggest that TNF-α induces increases in [Ca2+]i, CaMKIIδB and CaN and promotes cardiac hypertrophy. Therefore, we hypothesize that the Ca2+/CaMKII- and CaN-dependent signaling pathways are involved in myocardial hypertrophy induced by TNF-α.
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of the structures and hence valuable information on the structural relationships and the geometrical and morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as in other optical imaging systems, are both defined by the diffraction theorem. Any aberration or imperfection present during imaging results in broadening of the calculated theoretical resolution, blurring and geometrical distortions in the acquired images that interfere with the analysis of the structures, and lowers the collected fluorescence from the specimen. The aberrations may have different causes and can be classified by their sources: specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement, approached in two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on the resolution of using a coverslip whose thickness is mismatched with the one the objective lens is designed for was shown, and a novel technique was introduced to define the proper value on the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used.
The deformed beam cross-section of the single-photon excitation source was corrected, and the resulting enhancement of the resolution and image quality was shown. Furthermore, the dependency of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of image enhancement with deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system supplied to the algorithm and on its level of accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and to reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained with each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
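The dependence of deconvolution on the supplied PSF can be illustrated with a minimal 1-D Richardson-Lucy sketch. This is a generic textbook algorithm, not the thesis's specific implementation; it shows how the assumed PSF enters every iteration, which is why a more accurate (e.g. in situ) PSF directly improves the restored image.

```python
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    """Generic 1-D Richardson-Lucy deconvolution sketch."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]  # adjoint of the blur operator
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Blur a point source with a known PSF, then restore it.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(21)
truth[10] = 1.0
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

The restored peak is sharper than the blurred one when the PSF used in the iterations matches the true blur; with a wrong PSF the same loop converges to a poorer estimate, which is the point the thesis makes about PSF accuracy.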
Abstract:
The beneficial effects of high-density lipoproteins (HDL) against atherosclerosis have been attributed, in large part, to their major protein component, apolipoprotein A-I (apoA-I). However, there are indications that apoA-I can be degraded by proteases localized in human atherosclerotic plaques, which could reduce the efficacy of HDL-based therapies under certain conditions. Here we describe the development and use of a new bioactivatable near-infrared fluorescent probe, apoA-I-Cy5.5, for the evaluation of the specific proteolytic activities that degrade apoA-I in vitro, in vivo and ex vivo. The basal fluorescence of the probe is quenched by saturation of the apoA-I protein with the Cy5.5 fluorophore, and the (near-infrared) fluorescence emitted by Cy5.5 is revealed upon cleavage of the probe. In vitro proteolysis of apoA-I by proteases showed up to an 11-fold increase in fluorescence (n=5, P ≤ 0.05). Using our new apoA-I-Cy5.5 probe we were able to quantify the proteolytic activities of a wide variety of proteases, including serine proteases (chymase), cysteine proteases (cathepsin S), and metalloproteases (MMP-12). In addition, we were able to detect activation of the apoA-I-Cy5.5 probe on aortic sections from atherosclerotic mice by in situ zymography, and observed that in the presence of broad-spectrum protease inhibitors the probe could be protected from protease activity (-54%, n=6, P ≤ 0.001). In vivo infusion of the apoA-I-Cy5.5 probe in atherosclerotic mice (Ldlr-/--Tg (human apoB)), imaged with a molecular imaging system combining fluorescence molecular tomography and magnetic resonance, resulted in a stronger fluorescence signal in the aorta than in the aortas of wild-type C57Bl/6J (CTL) mice.
The in vivo measurements were confirmed by ex vivo imaging of the aorta, which indicated a 5-fold increase in the fluorescent signal in the aorta of ATX mice (n=5) compared to the aorta of CTL mice (n=3) (P ≤ 0.05). The use of this probe could lead to a better understanding of the molecular mechanisms underlying the development and progression of atherosclerosis, and to the improvement of HDL-based therapeutic strategies.
Abstract:
The thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm, and its iterative counterpart, for the raster-to-octree conversion of CAT scan slices is presented; to improve the speed of generating the octree from the slices, the possibility of exploiting the inherent parallelism in the conversion program is also explored. The octree node, which stores the volume information of a cube, often stores the average density, which can lead to a "patchy" distribution of density during image reconstruction. In an attempt to alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Given the ease of accommodating compression during the generation of octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed, and the modified algorithm for generating octrees from the slices is shown to accommodate wavelet compression easily. Rendering the information stored in the octree is a complex task, essentially because of the requirement to display volumetric information. The rays traced from each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
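The raster-to-octree idea can be sketched as follows: a cubic voxel volume (e.g. stacked CAT slices) is recursively split into octants, and any octant that is uniform in density is stored as a single leaf. This is a generic top-down illustration of the data structure, not the thesis's optimized bottom-up algorithm, and the synthetic volume is hypothetical.

```python
import numpy as np

def build_octree(vol):
    """Recursively convert a cubic voxel volume into an octree."""
    if vol.min() == vol.max():          # homogeneous cube -> one leaf
        return ("leaf", int(vol.flat[0]))
    h = vol.shape[0] // 2               # split into eight octants
    children = [build_octree(vol[x:x + h, y:y + h, z:z + h])
                for x in (0, h) for y in (0, h) for z in (0, h)]
    return ("node", children)

def count_leaves(node):
    if node[0] == "leaf":
        return 1
    return sum(count_leaves(c) for c in node[1])

# A 4x4x4 volume with one dense octant collapses to just 8 leaves.
vol = np.zeros((4, 4, 4), dtype=int)
vol[:2, :2, :2] = 7
tree = build_octree(vol)
```

Storing one value per homogeneous cube is exactly what produces the "patchy" density the abstract mentions when that value is an average over a non-uniform region, which motivates the VQ and wavelet refinements.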
Abstract:
The resolution of remotely sensed data is becoming increasingly fine, and there are now many sources of data with a pixel size of 1 m x 1 m. This produces huge amounts of data that have to be stored, processed and transmitted. For environmental applications this resolution possibly provides far more data than are needed: data overload. This poses the question: how much is too much? We have explored two resolutions of data, 20 m pixel SPOT data and 1 m pixel Computerized Airborne Multispectral Imaging System (CAMIS) data from Fort A. P. Hill (Virginia, USA), using the variogram of geostatistics. For both we used the normalized difference vegetation index (NDVI). Three scales of spatial variation were identified in both the SPOT and 1 m data: there was some overlap at the intermediate spatial scales of about 150 m and of 500 m-600 m. We subsampled the 1 m data, and scales of variation of about 30 m and of 300 m were identified consistently until the separation between pixel centroids was 15 m (or 1 in 225 pixels). At this stage, spatial scales of about 100 m and 600 m were described, which suggested that only now was there a real difference in the amount of spatial information available from an environmental perspective. These latter were similar spatial scales to those identified from the SPOT image. We have also analysed 1 m CAMIS data from Fort Story (Virginia, USA) for comparison, and the outcome is similar. From these analyses it seems that a pixel size of 20 m is adequate for many environmental applications, and that if more detail is required the higher resolution data could be sub-sampled to a 10 m separation between pixel centroids without any serious loss of information. This reduces significantly the amount of data that needs to be stored, transmitted and analysed and has important implications for data compression.
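The geostatistical tool used above can be sketched as an empirical semivariogram of a 1-D transect of NDVI-like values: lags at which the semivariance levels off indicate the spatial scales of variation. This is a generic illustration on synthetic data, not the study's processing chain.

```python
import numpy as np

def semivariogram(values, max_lag):
    """Empirical semivariance gamma(h) for lags 1..max_lag along a transect."""
    gamma = []
    for h in range(1, max_lag + 1):
        diffs = values[h:] - values[:-h]
        gamma.append(0.5 * np.mean(diffs ** 2))
    return np.array(gamma)

# Synthetic transect: a smooth spatial pattern plus small measurement noise.
rng = np.random.default_rng(0)
transect = np.sin(np.arange(200) / 10.0) + 0.1 * rng.standard_normal(200)
gamma = semivariogram(transect, max_lag=40)
```

Computing this at full resolution and again after subsampling, as the study did, shows at what pixel separation the described spatial scales start to change.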
Abstract:
The goal was to quantitatively estimate and compare the fidelity of images acquired with a digital imaging system (ADAR 5500) and generated through scanning of color infrared aerial photographs (SCIRAP) using image-based metrics. Images were collected nearly simultaneously in two repetitive flights to generate multi-temporal datasets. Spatial fidelity of ADAR was lower than that of SCIRAP images. Radiometric noise was higher for SCIRAP than for ADAR images, even though noise from misregistration effects was lower. These results suggest that with careful control of film scanning, the overall fidelity of SCIRAP imagery can be comparable to that of digital multispectral camera data. Therefore, SCIRAP images can likely be used in conjunction with digital metric camera imagery in long-term landcover change analyses.
Observations of the eruption of the Sarychev volcano and simulations using the HadGEM2 climate model
Abstract:
In June 2009 the Sarychev volcano located in the Kuril Islands to the northeast of Japan erupted explosively, injecting ash and an estimated 1.2 ± 0.2 Tg of sulfur dioxide into the upper troposphere and lower stratosphere, making it arguably one of the 10 largest stratospheric injections in the last 50 years. During the period immediately after the eruption, we show that the sulfur dioxide (SO2) cloud was clearly detected by retrievals developed for the Infrared Atmospheric Sounding Interferometer (IASI) satellite instrument and that the resultant stratospheric sulfate aerosol was detected by the Optical Spectrograph and Infrared Imaging System (OSIRIS) limb sounder and CALIPSO lidar. Additional surface‐based instrumentation allows assessment of the impact of the eruption on the stratospheric aerosol optical depth. We use a nudged version of the HadGEM2 climate model to investigate how well this state‐of‐the‐science climate model can replicate the distributions of SO2 and sulfate aerosol. The model simulations and OSIRIS measurements suggest that in the Northern Hemisphere the stratospheric aerosol optical depth was enhanced by around a factor of 3 (0.01 at 550 nm), with resultant impacts upon the radiation budget. The simulations indicate that, in the Northern Hemisphere for July 2009, the magnitude of the mean radiative impact from the volcanic aerosols is more than 60% of the direct radiative forcing of all anthropogenic aerosols put together. While the cooling induced by the eruption will likely not be detectable in the observational record, the combination of modeling and measurements would provide an ideal framework for simulating future larger volcanic eruptions.
Abstract:
Introduction: Coronary artery bypass grafting in carefully selected patients with severe left ventricular dysfunction can lead to an increase in ejection fraction and/or improvement of the New York Heart Association (NYHA) functional class of heart failure. In this study, we looked for histopathological variables that could be associated with improvement of left ventricular ejection fraction and/or improvement of heart failure functional class six months after surgery. Methods: Twenty-four patients with an indication for coronary artery bypass grafting, left ventricular ejection fraction < 35%, heart failure functional class ranging from NYHA II to IV and mean age of 59±9 years were selected. Endomyocardial biopsies were performed intraoperatively and repeated six months later by venous puncture. The extent of fibrosis (% of the myocardial area of the specimen evaluated), myocytolysis (number of cells with myocytolysis found per field) and myocardial fiber hypertrophy (measured by the smallest cell diameter) were quantified using an image analysis system (Leica - Image Analysis System). Measurements of ejection fraction, by radionuclide ventriculography, and assessment of heart failure functional class (NYHA) were also repeated after six months. Results: Of the 24 patients initially selected, seven died before six months and one declined the second biopsy. There was a significant improvement in the NYHA functional class of heart failure in the survivors six months after surgery (2.8±0.7 vs. 1.7±0.6; p<0.001), whereas left ventricular ejection fraction did not change (25±6% vs. 26±10%; p = NS). The degree of muscle fiber hypertrophy remained stable between the pre- and postoperative periods (21±4 vs. 22±4 μm), but the extent of fibrosis (8±8 vs. 21±15% of area) and the number of cells presenting myocytolysis (9±11 vs. 21±15%/cells) increased significantly. A composite histological score combining the three histopathological variables, indicating a lower degree of preoperative remodeling, identified a subgroup of patients who showed an increase in left ventricular ejection fraction after coronary artery bypass grafting. Conclusion: In patients with ischemic heart disease and severe left ventricular dysfunction, coronary artery bypass grafting was associated with an increase in ventricular function in a subgroup of patients who presented, preoperatively, a lower degree of adverse ventricular remodeling, as estimated by a composite histological score. Despite the improvement in heart failure functional class in most patients, and the increase in left ventricular ejection fraction in a subgroup, favorable histological changes indicative of reversal of left ventricular remodeling should not be expected after revascularization, at least within six months of surgery.
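The image-analysis quantification used above expresses the extent of fibrosis as the percentage of myocardial area occupied by fibrotic tissue in a segmented image. A minimal sketch of that area-fraction computation, on a synthetic binary mask (the study itself used the Leica Image Analysis System):

```python
import numpy as np

def fibrosis_area_percent(mask):
    """Percentage of the field occupied by segmented fibrosis.

    mask: boolean array, True where fibrosis was segmented.
    """
    return 100.0 * mask.sum() / mask.size

# Synthetic 100x100 field with an 800-pixel fibrotic region (8% of area).
mask = np.zeros((100, 100), dtype=bool)
mask[:20, :40] = True
percent = fibrosis_area_percent(mask)
```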
Abstract:
This work aimed to develop a meta-model of analysis for the study of resistance during the implementation of an electronic document management (GED) system. The study was conducted through a quantitative and explanatory approach and sought to identify the main exponents of the academic literature on the subject of resistance. Based on their conceptions, the most relevant factors in the field of resistance, contextualized to an electronic document management system, were identified. The resulting meta-model, which served as the basis for the statistical analysis, identified the following antecedents of resistance behavior: personal characteristics, systems and interaction, the latter divided into power-and-politics interaction and socio-technical interaction. From the identification of these vectors and the assembly of the meta-model, a survey questionnaire was developed and distributed via the Internet. A total of 133 responses were collected from users who had experienced at least one implementation of an electronic document management system. The data were then statistically processed in SPSS through factor analysis and multiple linear regression. The results made it possible to identify the factors with the greatest influence on resistance behavior and to confirm or refute the hypotheses originally proposed. The meta-model also supported a discussion of the results in the light of the underlying theory, generating new insights for understanding resistance behavior in the context of GED.
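The regression step described above can be sketched as an ordinary least-squares fit of a resistance score on antecedent factor scores. The data, coefficients and factor labels below are synthetic placeholders (the study used SPSS on real survey responses); only the sample size of 133 is taken from the text.

```python
import numpy as np

# Synthetic stand-in for 133 respondents with three antecedent factor
# scores (e.g. personal characteristics, system, interaction) and a
# resistance score driven by hypothetical true coefficients.
rng = np.random.default_rng(42)
n = 133
X = rng.standard_normal((n, 3))
true_beta = np.array([0.6, -0.2, 0.4])
y = X @ true_beta + 0.1 * rng.standard_normal(n)

# Multiple linear regression: OLS with an intercept column.
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
# beta[0] is the intercept; beta[1:] are the factor coefficients, whose
# relative magnitudes indicate each factor's influence on resistance.
```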