Abstract:
Organic waste is a rich substrate for microbial growth, and because of that, workers in the waste industry are at higher risk of exposure to bioaerosols. This study aimed to assess fungal contamination in two solid waste management plants. Air samples from the two plants were collected through an impaction method. Surface samples were also collected by swabbing surfaces of the same indoor sites. All collected samples were incubated at 27 °C for 5 to 7 days. After laboratory processing and incubation of the collected samples, quantitative and qualitative results were obtained, with identification of the isolated fungal species. Air samples collected through an impinger method were also analysed by real-time polymerase chain reaction (RT-PCR) to measure DNA of the Aspergillus flavus complex and Stachybotrys chartarum. Assessment of particulate matter (PM) was also conducted with portable direct-reading equipment. Particle concentrations were measured at five different sizes (PM0.5, PM1, PM2.5, PM5, PM10). In the waste sorting plant, the three species most frequently isolated in air and on surfaces were A. niger (73.9%; 66.1%), A. fumigatus (16%; 13.8%), and A. flavus (8.7%; 14.2%). In the incineration plant, the most prevalent species detected in air samples were Penicillium sp. (62.9%), A. fumigatus (18%), and A. flavus (6%), while the species most frequently isolated from surface samples were Penicillium sp. (57.5%), A. fumigatus (22.3%) and A. niger (12.8%). Stachybotrys chartarum and other toxigenic strains from the A. flavus complex were not detected. The most common PM sizes obtained were PM10 and PM5 (inhalable fraction). Since waste is the main internal fungal source in the analyzed settings, preventive and protective measures need to be maintained to avoid worker exposure to fungi and their metabolites.
Abstract:
Objectives - To review available guidance for quality assurance (QA) in mammography and discuss its contribution to harmonising practices worldwide. Methods - A literature search was performed across different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers, and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology, and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Results - Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. Quality control (QC) of image display is recommended following the American Association of Physicists in Medicine guidelines. Conclusions - The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodologies. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise the tests depending on available technology and resources.
Abstract:
Background: Temporal lobe epilepsy (TLE) is a neurological disorder that directly affects cortical areas responsible for auditory processing. The resulting abnormalities can be assessed using event-related potentials (ERP), which have high temporal resolution. However, little is known about TLE in terms of dysfunction of early sensory memory encoding or possible correlations between EEGs, linguistic deficits, and seizures. Mismatch negativity (MMN) is an ERP component, elicited by introducing a deviant stimulus while the subject is attending to a repetitive behavioural task, which reflects pre-attentive sensory memory function, neuronal auditory discrimination, and perceptual accuracy. Hypothesis: We propose an MMN protocol for future clinical application and research based on the hypothesis that children with TLE may have abnormal MMN for speech and non-speech stimuli. The MMN can be elicited with a passive auditory oddball paradigm, and the abnormalities might be associated with the location and frequency of epileptic seizures. Significance: The suggested protocol might contribute to a better understanding of the neuropsychophysiological basis of MMN. We suggest that in TLE the central representation of sound may be decreased for speech and non-speech stimuli. Discussion: MMN arises as a difference in the responses to speech and non-speech stimuli across electrode sites. TLE in childhood might be a good model for studying topographic and functional auditory processing and its neurodevelopment, pointing to MMN as a possible clinical tool for prognosis, evaluation, follow-up, and rehabilitation in TLE.
Abstract:
Phenolic compounds constitute a diverse group of secondary metabolites present in both grapes and wine. The phenolic content and composition of processed grape products (wine) are greatly influenced by the technological practices to which the grapes are exposed. During the handling and maturation of the grapes, several chemical changes may occur, with the appearance of new compounds and/or the disappearance of others, and a consequent modification of the characteristic ratios of the total phenolic content as well as of its qualitative and quantitative profile. This review highlights the qualitative composition of non-volatile phenolics in grapes and wines, the biosynthetic relationships between these compounds, and the most relevant chemical changes occurring during processing and storage.
Abstract:
Dissertation presented to obtain the Degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
To meet the increasing demands of complex inter-organizational processes and the need for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in a multilingual setting, which implies overcoming the difficulties inherent to the presence of multiple languages through processes such as ontology localization. Although localization, like other processes that involve multilingualism, is a rather well-developed practice and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. In this workshop six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently tested. In the second paper, Fumiko Kano presents a work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures. The purpose of the comparison is to verify the similarity measures against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts existing for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels, acquiring terminologies through the access and harvesting of the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms that extract specialized language from textual data (legal documents), and its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt to subject librarians, employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Optimization of fMRI Processing Parameters for Simultaneous Acquisition of EEG/fMRI in Focal Epilepsy
Abstract:
In the context of focal epilepsy, the simultaneous combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) holds great promise as a technique by which the hemodynamic correlates of interictal spikes detected on scalp EEG can be identified. Traditional EEG recordings have not been able to overcome the difficulty of correlating ictal clinical symptoms with onset in particular areas of the lobes, which brings the need to map the epileptogenic cortical regions with more precision. fMRI, on the other hand, has suggested localizations more consistent with the ictal clinical manifestations detected. This study was developed to improve knowledge about how the parameters involved in processing the physical and mathematical data produced by the EEG/fMRI technique influence the final results. Accuracy was evaluated by comparing the BOLD results with the high-resolution EEG maps, the malformative lesions detected in the T1-weighted MR images, and the anatomical localizations of the diagnosed symptomatology of each studied patient. The optimization of the set of parameters used will provide an important contribution to the diagnosis of epileptogenic foci in patients included in an epilepsy surgery evaluation program. The results obtained allowed us to conclude that, by associating the BOLD effect with interictal spikes, the epileptogenic areas are mapped to localizations different from those obtained by the EEG maps representing the electrical potential distribution across the scalp, and that there is an important and solid link between the variation of particular parameters manipulated during fMRI data processing and the optimization of the final results, among which smoothing, the number of deleted volumes, the HRF used to convolve with the activation design, and the shape of the Gamma function can certainly be emphasized.
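As an illustration of the processing step whose parameters the study varies, the sketch below builds an interictal-spike regressor and convolves it with a canonical double-gamma HRF; the TR, spike onsets and gamma parameters are illustrative assumptions, not values from the study.

# Minimal sketch (assumed values, not the study's pipeline): spike regressor
# construction, the step where the HRF shape and deleted volumes matter.
import numpy as np
from scipy.stats import gamma

TR = 2.0                              # repetition time in seconds (assumed)
n_vols = 300                          # number of fMRI volumes (assumed)
spike_times = [35.0, 120.5, 244.0]    # interictal spike onsets from EEG, in seconds (hypothetical)

# Canonical double-gamma HRF sampled at the TR
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - 1.0 / 6.0 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Stick function with one entry per volume, set to 1 where a spike occurred
sticks = np.zeros(n_vols)
sticks[(np.array(spike_times) / TR).astype(int)] = 1.0

# Spike regressor for the GLM: stick function convolved with the HRF
regressor = np.convolve(sticks, hrf)[:n_vols]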
Abstract:
Dissertation to obtain the Degree of Master in Informatics Engineering
Abstract:
Due to the importance and wide applications of DNA analysis, there is a need to make genetic analysis more available and more affordable. As such, the aim of this PhD thesis is to optimize a colorimetric DNA biosensor based on gold nanoprobes, developed at CEMOP, by reducing its price and the required volume of solution without compromising the device's sensitivity and reliability, towards point-of-care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector with a low-cost, solution-processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that made it possible to increase the TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics, which offer the advantages of miniaturization, much lower sample/reagent consumption, and enhanced system performance and functionality through the integration of different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel, and the light was transmitted by optical fibres, enabling it to be guided very close to the analysed solution. A microfluidic chip with a high aspect ratio (~13) and smooth, nearly vertical sidewalls was fabricated in PDMS using an SU-8 mould for patterning. The platform coupled to the gold nanoprobe assay enabled detection of Mycobacterium tuberculosis using 3 μl of DNA solution, i.e. 20 times less than in the previous state of the art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing and sensitivity to colour variation, yielding a 160% improvement in the colorimetric AuNP analysis. Planar microlenses were incorporated to converge light into the sample and then into the output fibre core, increasing the signal-to-losses ratio 6-fold. The optimized platform enabled detection of a single nucleotide polymorphism related to obesity risk (FTO) using a target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/μl) with 10 times lower solution volume (3 μl). The combination of the unique optical properties of gold nanoprobes with the microfluidic platform resulted in a sensitive and accurate sensor for single nucleotide polymorphism detection operating with small volumes of solution and without the need for substrate functionalization or sophisticated instrumentation. Simultaneously, to enable on-chip reagent mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop and short mixing length. The optimized device shows 80% mixing efficiency at Re = 0.1 in a 2.5 mm long mixer with a pressure drop of 6 Pa, satisfying the requirements for application in the microfluidic platform for DNA analysis.
Abstract:
The thrust towards energy conservation and a reduced environmental footprint has fueled intensive research into alternative low-cost sources of renewable energy. Organic photovoltaic cells (OPVs), with their low fabrication costs, easy processing and flexibility, represent a possible viable alternative. Perylene diimides (PDIs) are promising electron-acceptor candidates for bulk heterojunction (BHJ) OPVs, as they combine higher absorption and stability with tunable material properties, such as solubility and the position of the lowest unoccupied molecular orbital (LUMO) level. A prerequisite for trap-free electron transport is for the LUMO to be located at a level deeper than 3.7 eV, since electron trapping in organic semiconductors is universal and dominated by a trap level located at 3.6 eV. Although the most commonly used fullerene acceptors in polymer:fullerene solar cells feature trap-free electron transport, the low optical absorption of fullerene derivatives limits the maximum attainable efficiency. In this thesis, we try to gain a better understanding of the electronic properties of PDIs, with a focus on charge carrier transport characteristics and the effect of different processing conditions such as annealing temperature and top contact (cathode) material. We report on a commercially available PDI and three PDI derivatives as acceptor materials, and on their blends with the MEH-PPV (Poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene]) and P3HT (Poly(3-hexylthiophene-2,5-diyl)) donor materials in single carrier devices (electron-only and hole-only) and in solar cells. Space-charge-limited current measurements and modelling of temperature-dependent J-V characteristics confirmed that the electron transport is essentially trap-free in such materials. Different blend ratios of P3HT:PDI-1, (1:1) and (1:3), show an increase in device performance with increasing PDI-1 ratio. Furthermore, thermal annealing of the devices has a significant effect on the solar cells, decreasing the open-circuit voltage (Voc) and fill factor (FF) but increasing the short-circuit current (Jsc) and overall device performance. Morphological studies show that over-aggregation in traditional donor:PDI blend systems is still a big problem, which hinders charge carrier transport and performance in solar cells.
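For context, trap-free space-charge-limited current in a single-carrier device is commonly described by the Mott-Gurney law; this is the standard textbook expression, not necessarily the exact model fitted in the thesis (which may include a field- or temperature-dependent mobility):

J_{\mathrm{SCLC}} = \frac{9}{8}\,\varepsilon_0 \varepsilon_r \,\mu \,\frac{V^2}{L^3}

where J is the current density, \varepsilon_0 \varepsilon_r the permittivity of the organic layer, \mu the charge-carrier mobility, V the applied voltage (corrected for the built-in voltage), and L the active-layer thickness.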
Abstract:
The immune system comprises different cell types whose role is to protect us against pathogens. This thesis investigates a mechanism that is very important for the protection of our organism, in a specific disorder: cross-presentation in Wiskott-Aldrich Syndrome (WAS). WAS is caused by loss-of-function mutations in the cytoskeletal regulator WASp, and WAS patients suffer from eczema, thrombocytopenia, and immunodeficiency. X-linked neutropenia (XLN) is caused by gain-of-function mutations in WASp, and XLN patients suffer from severe congenital neutropenia and immunodeficiency. This thesis was focused on the role of B and T lymphocytes and dendritic cells (DCs). The work is divided into two main topics: 1) In the first part I studied the capacity of B cells to take up, degrade and present antigen, as well as their capacity to induce T cell proliferation. 2) In the second part, I studied T cell proliferation induced by dendritic cells. To increase our understanding of this mechanism, additional experiments were performed, including the acidification capacity of CD8+ and CD8- DCs and reactive oxygen species (ROS) production, since the latter is directly connected to acidification. These assays were measured by flow cytometry. Localization of the Rac1 and Rac2 GTPases was assessed by confocal microscopy. Proliferation, acidification and ROS production assays were also performed with cells from XLN mice. From this study we concluded that B cells cannot induce CD8+ T cell proliferation, although they do take up and present antigen. Moreover, I have shown that increased cross-presentation of ovalbumin by WASp KO DCs is associated with a decreased capacity to acidify the endosomal compartment, and that WASp KO CD8- DCs have increased Rac2 localization to the phagosome. XLN dendritic cells have acidification and ROS production capacities similar to wild-type cells. In conclusion, our data suggest that WASp regulates antigen processing and presentation in DCs.
Abstract:
During the last decade, the Mongolian region was characterized by a rapid increase in both the severity and frequency of drought events, leading to pasture reduction. Drought monitoring and assessment play an important role in the region's early warning systems as a way to mitigate the negative impacts on the social, economic and environmental sectors. Nowadays it is possible to access information related to the hydrologic cycle through remote sensing, which provides continuous monitoring of variables over very large areas where weather stations are sparse. The present thesis aimed to explore the possibility of using the Normalized Difference Vegetation Index (NDVI) as a potential drought indicator by studying anomaly patterns and correlations with two other climate variables, land surface temperature (LST) and precipitation. The study covered the growing season (March to September) of a fifteen-year period, between 2000 and 2014, for Bayankhongor province in southwest Mongolia. The datasets used were MODIS NDVI, MODIS LST and TRMM precipitation, whose processing and analysis were supported by the QGIS software and the Python programming language. Monthly anomaly correlations between NDVI-LST and NDVI-Precipitation were generated, as well as temporal correlations over the growing season for known drought years (2001, 2002 and 2009). The results show that the three variables follow the seasonal pattern expected for a northern hemisphere region, with the rainy season occurring in the summer months. The values of both NDVI and precipitation are remarkably low while LST values are high, which is explained by the region's climate and ecosystems. The NDVI average generally reached higher values with high precipitation and low LST. The year 2001 was the driest year of the time series, while 2003 was the wettest year, with healthier vegetation. Monthly correlations registered weak results with low significance, with the exception of the NDVI-LST and NDVI-Precipitation correlations for June, July and August of 2002. The temporal correlations for the growing season also revealed weak results. The overall relationship between the variables' anomalies showed weak correlation results with low significance, which suggests that an accurate answer for predicting drought using the relation between NDVI, LST and precipitation cannot be given. Additional research should take place in order to achieve more conclusive results. However, the NDVI anomaly images show that NDVI is a suitable drought index for Bayankhongor province.
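The sketch below illustrates the anomaly-and-correlation step described above, with random placeholder arrays standing in for the gridded MODIS/TRMM products; it is an assumption-laden illustration, not the thesis code.

# Minimal sketch: monthly anomaly = departure from the 2000-2014 mean for
# that calendar month, then Pearson correlation between the anomaly series.
import numpy as np
from scipy.stats import pearsonr

years, months = 15, 7                           # 2000-2014, growing season Mar-Sep
ndvi = np.random.rand(years, months)            # placeholder for monthly mean NDVI per year
lst = 20 + 15 * np.random.rand(years, months)   # placeholder for monthly mean LST (degrees C)

# Anomaly = value minus the long-term mean for that calendar month
ndvi_anom = ndvi - ndvi.mean(axis=0)
lst_anom = lst - lst.mean(axis=0)

# Correlation of the two anomaly series for a single month (e.g. July, index 4)
r, p = pearsonr(ndvi_anom[:, 4], lst_anom[:, 4])
print(f"NDVI-LST anomaly correlation for July: r={r:.2f}, p={p:.3f}")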
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications and thus facilitate implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework two applications were implemented. The first one is a real-time system able to interpret the Portuguese Sign Language. The second one is an online system able to help a robotic soccer game referee judge a game in real time.
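As a rough illustration of the two-classifier setup described above (an SVM for static postures and one HMM per dynamic gesture, chosen by highest log-likelihood), the sketch below uses scikit-learn and hmmlearn with random placeholder features; the feature extraction, gesture vocabulary and training data of the thesis are assumptions, not reproduced here.

# Minimal sketch: SVM for static postures, one Gaussian HMM per dynamic gesture.
import numpy as np
from sklearn.svm import SVC
from hmmlearn.hmm import GaussianHMM

# Static postures: one feature vector per frame -> posture class label
X_static = np.random.rand(200, 32)            # 200 frames, 32 hand-shape features (placeholder)
y_static = np.random.randint(0, 5, 200)       # 5 posture classes (placeholder)
posture_clf = SVC(kernel="rbf").fit(X_static, y_static)

# Dynamic gestures: one HMM trained per gesture on sequences of feature vectors
def train_gesture_hmm(sequences):
    X = np.vstack(sequences)                  # concatenated observation sequences
    lengths = [len(s) for s in sequences]     # lengths tell the HMM where sequences split
    return GaussianHMM(n_components=4, covariance_type="diag", n_iter=50).fit(X, lengths)

gesture_models = {
    "wave": train_gesture_hmm([np.random.rand(30, 8) for _ in range(10)]),
    "circle": train_gesture_hmm([np.random.rand(30, 8) for _ in range(10)]),
}

# A new sequence is assigned to the gesture whose HMM gives the highest log-likelihood
seq = np.random.rand(30, 8)
best = max(gesture_models, key=lambda g: gesture_models[g].score(seq))
print("predicted static posture:", posture_clf.predict(X_static[:1])[0])
print("predicted dynamic gesture:", best)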