912 results for Colour and image sensitive detectors
Abstract:
The development and study of low-cost detectors sensitive to flammable, combustible and toxic gases is a crucial technological challenge for bringing marketable versions to the general market. Solid-state sensors are attractive for commercial purposes due to their robustness and lifetime, since they are not consumed in the reaction with the gas. In parallel, synthesis techniques that are more viable at industrial scale are more attractive for producing commercial products. In this context, ceramics with spinel structure were obtained by microwave-assisted combustion for application in flammable fuel gas detectors. Additionally, alternative organic reducers were employed to study their influence on the synthesis process and the resulting differences in the performance and properties of the powders obtained. The organic reducers were characterized by Thermogravimetry (TG) and Derivative Thermogravimetry (DTG). After synthesis, the samples were heat treated and characterized by Fourier Transform Infrared Spectroscopy (FTIR), X-ray Diffraction (XRD), specific surface area analysis by the BET method and Scanning Electron Microscopy (SEM). Quantification of phases and structural parameters was carried out by the Rietveld method. The methodology was effective in obtaining Ni-Mn mixed oxides. The fuels influenced the formation of the spinel phase and the morphology of the samples; however, in samples calcined at 950 °C only the spinel phase is present, regardless of the organic reducer. Therefore, differences in performance are expected in technological applications when samples of identical phase composition but different morphologies are tested.
Abstract:
The cultivated strawberry (Fragaria x ananassa) is the berry fruit most consumed worldwide and is well known for its delicate flavour and nutritional properties. However, fruit quality attributes have been lost or reduced after years of traditional breeding focused mainly on agronomical traits. To face the obstacles encountered in the improvement of cultivated crops, new technological tools, such as genomics and high-throughput metabolomics, are becoming essential for the identification of genetic factors responsible for organoleptic and nutritive traits. Integration of “omics” data will allow a better understanding of the molecular and genetic mechanisms underlying the accumulation of metabolites involved in the flavour and nutritional value of the fruit. To identify genetic components controlling fruit metabolic composition, here we present a quantitative trait loci (QTL) analysis using a 95-individual F1 segregating population derived from genotypes ‘1392’, selected for its superior flavour, and ‘232’, selected for its high yield (Zorrilla-Fontanesi et al., 2011; Zorrilla-Fontanesi et al., 2012). Metabolite profiling was performed on red-stage strawberry fruits using gas chromatography hyphenated to time-of-flight mass spectrometry, a rapid and highly sensitive approach that provides good coverage of the central pathways of primary metabolism. Around 50 primary metabolites, including sugars, sugar derivatives, amino acids and organic acids, were detected and quantified in each individual of the population. QTL mapping was performed on the ‘232’ x ‘1392’ population separately over two successive years, based on the integrated linkage map (Sánchez-Sevilla et al., 2015). First, significant associations between metabolite content and molecular markers were identified by the non-parametric Kruskal-Wallis test. Then, interval mapping (IM) as well as the multiple QTL method (MQM) allowed the identification of QTLs in octoploid strawberry.
A permutation test established LOD thresholds for each metabolite and year. A total of 132 QTLs were detected across all the linkage groups over the two years for 42 of the 50 metabolites. Among them, 4 (9.8%) QTLs for sugars, 9 (25%) for acids and 7 (12.7%) for amino acids were stable, being detected in both successive years. We are now studying the QTL regions in order to find candidate genes that explain differences in metabolite content among the individuals of the population, and we expect to identify associations between genes and metabolites that will help us understand their role in the quality traits of strawberry fruit.
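The first, non-parametric screening step described above (Kruskal-Wallis association between marker genotype classes and metabolite content) can be sketched in a few lines. The genotype labels, sample sizes and effect size below are hypothetical; the actual analysis relied on dedicated QTL mapping software.

```python
import numpy as np
from scipy.stats import kruskal

def marker_metabolite_assoc(genotypes, values):
    """Kruskal-Wallis test of metabolite content across marker genotype
    classes: the non-parametric screen applied before interval mapping."""
    genotypes = np.asarray(genotypes)
    values = np.asarray(values, dtype=float)
    # One group of metabolite abundances per genotype class at this marker
    groups = [values[genotypes == g] for g in sorted(set(genotypes))]
    h, p = kruskal(*groups)
    return h, p

# Hypothetical data: 20 F1 individuals scored at one biallelic marker;
# carriers of one allele accumulate more of the metabolite.
rng = np.random.default_rng(0)
geno = ["ac"] * 10 + ["bc"] * 10
vals = np.concatenate([rng.normal(1.0, 0.2, 10), rng.normal(2.0, 0.2, 10)])
h, p = marker_metabolite_assoc(geno, vals)  # a small p flags an association
```

A marker passing this screen would then be examined by interval mapping around its map position.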
Abstract:
Despite efforts to better manage biosolids field application programs, biosolids managers still lack efficient and reliable tools to apply large quantities of material while avoiding odor complaints. The objectives of this research were to determine the capabilities of an electronic nose in supporting process monitoring of biosolids production, and to compare the odor characteristics of biosolids produced through thermal-hydrolysis anaerobic digestion (TH-AD) to those of alkaline stabilization in the plant, under storage and in the field. A method to quantify key odorants was developed, and full-scale sampling and laboratory simulations were performed. The portable electronic nose (PEN3) was tested for its ability to distinguish alkali dosages in the biosolids production process. Recognition of unknown samples was tested, achieving a highest accuracy of 81.1%. This work exposed the need for a different and more sensitive electronic nose to assure its applicability at full scale for this process. GC-MS results were consistent with those reported in the literature and helped to elucidate the behavior of the pattern recognition of the PEN3. Odor characterization of TH-AD and alkaline stabilized biosolids was achieved using olfactometry measurements and GC-MS. The dilution-to-threshold of TH-AD biosolids increased under storage conditions, but no correlation was found with the target compounds. The presence of furan and three methylated homologues in TH-AD biosolids was reported for the first time, suggesting that these compounds are produced during the thermal hydrolysis process; however, additional research is needed to fully describe the formation of these compounds and the increase in odors. Alkaline stabilized biosolids showed similar odor concentrations that did not increase under storage, but the ‘fishy’ odor from trimethylamine emissions resulted in more offensive and unpleasant odors when compared to TH-AD.
Alkaline stabilized biosolids showed a spike in sulfur compounds and trimethylamine after 3 days of field application when the alkali addition was not sufficient to meet regulatory standards. Concentrations of target compounds from field application of TH-AD biosolids gradually decreased to below the odor threshold after 3 days. This work increased the scientific understanding of the odor characteristics and behavior of two types of biosolids and of the application of electronic noses in the environmental engineering field.
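The pattern-recognition step of an electronic nose can be illustrated with a minimal nearest-neighbour classifier over sensor response vectors. The sensor count, dosage classes and response values below are invented for illustration; the PEN3 uses its own proprietary recognition algorithms.

```python
import numpy as np

def nearest_neighbour_classify(train_X, train_y, x):
    """Assign a sensor response vector the class of its nearest training
    sample (Euclidean distance): a minimal stand-in for e-nose pattern
    recognition of alkali dosage classes."""
    distances = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(distances))]

# Hypothetical 10-sensor responses for two alkali dosages (arbitrary units)
rng = np.random.default_rng(1)
low = rng.normal(1.0, 0.05, (15, 10))
high = rng.normal(1.6, 0.05, (15, 10))
X = np.vstack([low, high])
y = np.array(["low"] * 15 + ["high"] * 15)

unknown = rng.normal(1.6, 0.05, 10)  # an "unknown" high-dosage sample
label = nearest_neighbour_classify(X, y, unknown)
```

The recognition accuracy reported above corresponds to the fraction of such unknown samples assigned to the correct class.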
Abstract:
The response regulator RpaB (regulator of phycobilisome associated B), part of an essential two-component system conserved in cyanobacteria that responds to multiple environmental signals, has recently been implicated in the control of cell dimensions and of circadian rhythms of gene expression in the model cyanobacterium Synechococcus elongatus PCC 7942. However, little is known of the molecular mechanisms that underlie RpaB functions. In this study we show that the regulation of phenotypes by RpaB is intimately connected with the activity of RpaA (regulator of phycobilisome associated A), the master regulator of circadian transcription patterns. RpaB affects RpaA activity both through control of gene expression, a function requiring an intact effector domain, and via altering RpaA phosphorylation, a function mediated through the N-terminal receiver domain of RpaB. Thus, both phosphorylation cross-talk and coregulation of target genes play a role in the genetic interactions between the RpaA and RpaB pathways. In addition, RpaB∼P levels appear critical for survival under light:dark cycles, conditions in which RpaB phosphorylation is environmentally driven independently of the circadian clock. We propose that the complex regulatory interactions between the essential and environmentally sensitive NblS-RpaB system and the SasA-RpaA clock output system integrate relevant extra- and intracellular signals into the circadian clock.
Abstract:
The main objectives of this thesis are: to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using the principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB® software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data so as to highlight similarities and differences. Identifying patterns in high-dimensional data (more than three dimensions) is difficult because graphical representation of the data is impossible; PCA is therefore a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used to synthesize a frequency-plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction and faster processing with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation and reasonable speed. Finally, in detection and recognition, the digital model performs better than the optical model.
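PCA-based image compression of the kind compared above amounts to keeping only the leading principal components of a mean-centred image. The sketch below uses a synthetic image and an arbitrary component count; it illustrates the standard PCA step, not the thesis's specific IPCA variant.

```python
import numpy as np

def pca_compress(img, k):
    """Compress a 2-D image with PCA: centre the rows, keep the top-k
    principal components, and return the reconstruction."""
    mean = img.mean(axis=0)
    centred = img - mean
    # The SVD of the centred data yields the principal axes of the row space
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k] + mean

# Synthetic 64x64 "image"; reconstruction error shrinks as k grows
rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
err5 = np.linalg.norm(img - pca_compress(img, 5))
err30 = np.linalg.norm(img - pca_compress(img, 30))
```

Keeping more components lowers the reconstruction error at the cost of a lower compression ratio, which is the trade-off both algorithms negotiate.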
Abstract:
In this study, 15 hazelnut varieties from a collection of the Viseu Agricultural Station were evaluated. The nuts were studied with respect to their morphological characteristics, such as fruit and kernel weight, compression and shape indices, and shell thickness. The study was complemented with the analysis of physical properties such as colour and texture, and with the determination of moisture content and water activity, given the importance of these parameters for the conservation capacity of the fruits. All experiments followed standard methods, using a texturometer, a colorimeter and a hygrometer. The results allowed the determination of the expected ranges for each colour parameter (L*, a*, b*, chroma and hue) in the shell, film and kernels, with statistically significant differences found among the cultivars studied. Regarding the textural parameters evaluated by shell-crushing and kernel-cutting tests (hardness, friability and resilience), significant differences were also found. The evaluation of moisture was of great importance because it confirmed that the solar drying used to remove excess moisture from the fruits was sufficient to reach low values, between 1.66% and 4.52%, thus guaranteeing preservation.
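The chroma and hue reported alongside L*, a*, b* follow directly from the standard CIELAB definitions, C*ab = sqrt(a*² + b*²) and h_ab = atan2(b*, a*). The sample values below are illustrative, not measured data from the study.

```python
import math

def chroma_hue(a_star, b_star):
    """Derive CIELAB chroma C*ab and hue angle h_ab (degrees, 0-360)
    from the measured a* and b* coordinates."""
    chroma = math.hypot(a_star, b_star)
    hue = math.degrees(math.atan2(b_star, a_star)) % 360.0
    return chroma, hue

# Example: an illustrative reddish-yellow shell colour
c, h = chroma_hue(12.0, 25.0)
```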
Abstract:
Kiwi fruit is a highly nutritional fruit due to its high level of vitamin C and its strong antioxidant capacity, conferred by a wide number of phytonutrients including carotenoids, lutein, phenolics, flavonoids and chlorophyll [1]. Drying is a complex process in which simultaneous heat and mass transfer occur. Several alterations take place during the drying of foods at many levels (physical, chemical, nutritional or sensorial), influenced by a number of factors, including processing conditions [2]. Temperature is particularly important because of the effects it produces at the chemical and also at the physical level, particularly on colour and texture [3]. In the present work, the changes in sliced kiwi exposed to air drying at different temperatures (50, 60, 70, 80 ºC) were evaluated, namely in terms of chemical properties such as ascorbic acid and phenolic compounds, physical characteristics such as colour and texture, and also at the sensorial level. All experiments followed standard established procedures, and several replicates were done to assess each property. The results indicated that moisture was reduced by drying by 74 to 87%, depending on the temperature. Ascorbic acid also decreased with drying, by 7% at 50 ºC and by up to 28% at the highest temperature (80 ºC). The phenolic compounds and antioxidant activity were also strongly affected by the drying temperature. The water activity of the dried samples varied from 0.658 to 0.753, compatible with good preservation. Regarding colour, the total colour difference between the dried samples and the fresh sample varied in the range 9.45–17.17. The textural parameters were also much affected by drying: hardness decreased by 45 to 72%, while all other parameters increased: cohesiveness approximately doubled, springiness increased 2 to 3 times, and chewiness increased up to 2.5 times that of the fresh sample.
Adhesiveness, observed for the fresh samples (-4.02 N.s), disappeared in all the dried samples. The sensorial analysis of the dried samples allowed the establishment of the sensorial profiles shown in Figure 1.
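The total colour difference quoted above is conventionally the CIE76 ΔE*ab between the fresh and dried samples' CIELAB coordinates. The colour values below are illustrative placeholders, not the measured ones.

```python
import math

def delta_e_ab(lab_ref, lab_sample):
    """CIE76 total colour difference between two CIELAB colours,
    ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²)."""
    return math.sqrt(sum((r - s) ** 2 for r, s in zip(lab_ref, lab_sample)))

# Hypothetical fresh vs dried kiwi colours (L*, a*, b*)
fresh = (55.0, -12.0, 30.0)
dried = (48.0, -5.0, 35.0)
dE = delta_e_ab(fresh, dried)
```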
Abstract:
Gregório Lopes (c. 1490–1550) was one of the most prominent painters of the Renaissance and Mannerism in Portugal. The painting “Mater Misericordiae”, made for the Sesimbra Holy House of Mercy circa 1535–1538, is one of the most significant works of the artist and his only painting on this theme, being also one of the most significant Portuguese paintings of the sixteenth century. The recent restoration provided the first opportunity to study the painting's materials, with a multi-analytical methodology incorporating portable energy-dispersive X-ray fluorescence spectroscopy, scanning electron microscopy–energy-dispersive spectroscopy, micro-X-ray diffraction, micro-Raman spectroscopy and high-performance liquid chromatography coupled to diode array and mass spectrometry detectors. The analytical study was complemented by infrared reflectography, allowing the study of the underdrawing technique, and by dendrochronology to confirm the date of the wooden panels (1535–1538). The results of this study were compared with previous ones on the painter's workshop, and significant differences and similarities were found in the materials and techniques used.
Abstract:
We studied the Paraíba do Sul river watershed, São Paulo state (PSWSP), Southeastern Brazil, in order to assess the land use and land cover (LULC) and their implications for the amount of carbon (C) stored in the forest cover between the years 1985 and 2015. The region covers an area of 1,395,975 ha. We used images from the Operational Land Imager (OLI) sensor (OLI/Landsat-8) to produce the mappings, and image segmentation techniques to produce vectors with homogeneous characteristics. The training samples and the samples used for classification and validation were collected from the segmented image. To quantify the C stocked in aboveground live biomass (AGLB), we used an indirect method and applied literature-based reference values. The recovery of 205,690 ha of secondary Native Forest (NF) after 1985 sequestered 9.7 Tg (teragrams) of C. Considering the whole NF area (455,232 ha), the amount of C accumulated along the whole watershed was 35.5 Tg, and the whole Eucalyptus crop (EU) area (113,600 ha) sequestered 4.4 Tg of C. Thus, the total amount of C sequestered in the whole watershed (NF + EU) was 39.9 Tg of C, or 145.6 Tg of CO2, and the NF areas were responsible for the largest C stock in the watershed (89%). Therefore, the increase of NF cover contributes positively to the reduction of CO2 concentration in the atmosphere, and Reducing Emissions from Deforestation and Forest Degradation (REDD+) may become one of the most promising compensation mechanisms for farmers who have increased the forest cover on their farms.
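The carbon budget above can be cross-checked with simple arithmetic. The C-to-CO2 conversion via the molar-mass ratio 44/12 is an assumption on our part; the small gap with the reported 145.6 Tg suggests the authors converted unrounded stock values.

```python
# Carbon stocks reported for the watershed (Tg C)
C_NF = 35.5   # native forest
C_EU = 4.4    # Eucalyptus crops

total_C = C_NF + C_EU        # total sequestered carbon (39.9 Tg C)
nf_share = C_NF / total_C    # fraction held in native forest (~89 %)
# CO2 equivalent via molar masses: 44 g CO2 per 12 g C (assumed factor)
total_CO2 = total_C * 44.0 / 12.0
```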
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are processed in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method level). The exploited data and model architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
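A text-based PLI approach can be caricatured by a keyword-frequency scorer. The token signatures below are hypothetical and far simpler than the learned, scalable models the thesis evaluates; the sketch only conveys the shape of the task.

```python
def identify_language(source):
    """Toy text-based programming language identifier: score each language
    by the frequency of hypothetical marker tokens and pick the best."""
    signatures = {
        "python": ("def ", "import ", "self"),
        "java": ("public ", "class ", "void "),
        "c": ("#include", "printf", "->"),
    }
    scores = {lang: sum(source.count(tok) for tok in toks)
              for lang, toks in signatures.items()}
    return max(scores, key=scores.get)

snippet = "import os\ndef main():\n    print(os.getcwd())\n"
lang = identify_language(snippet)
```

Real systems replace the hand-picked tokens with features learned from millions of files, which is where the scalability concerns discussed above arise.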
Abstract:
The evolution of modern and increasingly sensitive image sensors, the increasingly compact design of cameras, and the recent emergence of low-cost cameras have allowed underwater photogrammetry to become a powerful and increasingly indispensable technique for estimating the structure of the seabed with high accuracy. Within this context, the main topic of this work is underwater photogrammetry from a geomatic point of view and the issues associated with its implementation, in particular with the support of Unmanned Underwater Vehicles. Questions such as how the technique works, what is needed for a proper survey, what tools are available to apply the technique, and how to resolve uncertainties in measurement are the subject of this thesis. The study can be divided into two major parts: a practical one, devoted to several ad-hoc surveys and tests, and another supported by bibliographic research. The main contributions, however, relate to the experimental section, in which two practical case studies are carried out in order to improve the quality of the underwater survey of some calibration platforms. The results obtained from these two experiments showed that the refractive effects due to water and the underwater housing can be compensated by the distortion coefficients in the camera model, but if the aim is to achieve high accuracy then a model that takes into account the configuration of the underwater housing, based on ray tracing, must also be coupled. The major contributions of this work are: an overview of the practical issues when performing surveys with a UUV prototype, a method to reach a reliable accuracy in 3D reconstructions without the use of an underwater local geodetic network, a guide for those addressing underwater photogrammetry topics for the first time, and the use of open-source environments.
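The refraction that the housing model must account for is governed by Snell's law at each interface. The refractive indices below are typical assumed values, and a flat-port housing adds a glass layer that a full ray-tracing model would also traverse.

```python
import math

def refract_angle(theta_incident_deg, n1=1.0, n2=1.34):
    """Snell's law at a flat interface: n1 sin(theta1) = n2 sin(theta2).
    Defaults sketch an air-to-water transition (n_water ~ 1.34)."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(s))

# A ray entering water bends towards the normal: ~21.9 deg for 30 deg in air
theta_water = refract_angle(30.0)
```

This angular compression is why in-air camera calibrations partially absorb into the distortion coefficients underwater, yet break down at the accuracy levels the thesis targets.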
Abstract:
Cleaning is one of the most important and delicate procedures of the restoration process. When developing new cleaning systems, it is fundamental to consider their selectivity towards the layer to be removed, their non-invasiveness towards the layer to be preserved, and their sustainability and non-toxicity. Besides assessing efficacy, it is important to understand the cleaning mechanism through analytical protocols that strike a balance between cost, practicality, and reliable interpretation of results. In this thesis, the development of cleaning systems based on the coupling of electrospun fabrics (ES) with greener organic solvents is proposed. Electrospinning is a versatile technique that allows the production of micro/nanostructured non-woven mats, which have already been used as absorbents in various scientific fields, but to date not in the restoration field. The systems produced proved to be effective for the removal of dammar varnish from paintings, where the ES act not only as solvent-binding agents but also as adsorbents of the partially solubilised varnish through capillary rise, thus enabling a one-step procedure. They have also been successfully applied for the removal of spray varnish from marble substrates and wall paintings. Due to the complexity of the materials, the procedure had to be adapted case by case, and mechanical action was still necessary. Depending on the spinning solution, three types of ES mats were produced: polyamide 6,6, pullulan, and pullulan with melanin nanoparticles. The latter, under irradiation, allows a localised temperature increase, accelerating and facilitating the removal of less soluble layers (e.g. reticulated alkyd-based paints). All the systems produced and the mock-ups used were extensively characterised using multi-analytical protocols. Finally, a monitoring protocol and image treatment based on photoluminescence macro-imaging are proposed.
This set-up allowed the study of the removal mechanism of dammar varnish and the semi-quantification of its residues. These initial results form the basis for optimising the acquisition set-up and data processing.
Abstract:
In recent years, we have witnessed great changes in the industrial environment as a result of the innovations introduced by Industry 4.0, especially the integration of the Internet of Things, Automation and Robotics in the manufacturing field. The project presented in this thesis lies within this innovation context and describes the implementation of an image recognition application focused on the automotive field. The project aims at helping the supply chain operator to perform an effective and efficient check of the homologation tags present on vehicles. The user's contribution consists in taking a picture of the tag; exploiting Amazon Web Services, the application then automatically returns the result of the check on the correctness of the tag, its correct positioning within the vehicle, and the presence of faults or defects on the tag. To implement this application we combined two IoT platforms widely used in the industrial field: Amazon Web Services (AWS) and ThingWorx. AWS exploits Convolutional Neural Networks to perform Text Detection and Image Recognition, while PTC ThingWorx manages the user interface and the data manipulation.
Abstract:
The High Energy Rapid Modular Ensemble of Satellites (HERMES) is a new mission concept involving the development of a constellation of six CubeSats in low Earth orbit with new miniaturized instruments that host a hybrid Silicon Drift Detector/GAGG:Ce based system for X-ray and γ-ray detection, aiming to monitor high-energy cosmic transients, such as Gamma Ray Bursts and the electromagnetic counterparts of gravitational wave events. The HERMES constellation will also operate together with the Australian-Italian SpIRIT mission, which will house a HERMES-like detector. The HERMES pathfinder mini-constellation, consisting of six satellites plus SpIRIT, is likely to be launched in 2023. The HERMES detectors are based on the heritage of the Italian ReDSoX collaboration, with joint design and production by INFN-Trieste and Fondazione Bruno Kessler, and the involvement of several Italian research institutes and universities. An application-specific, low-noise, low-power integrated circuit (ASIC) called LYRA was conceived and designed for the HERMES readout electronics. My thesis project focuses on the ground calibrations of the first HERMES and SpIRIT flight detectors, with a performance assessment and characterization of the detectors. The first part of this work addresses measurements and experimental tests on laboratory prototypes of the HERMES detectors and their front-end electronics, while the second part is based on the design of the experimental setup for flight detector calibrations and related functional tests for data acquisition, as well as the development of the calibration software. In more detail, the calibration parameters (such as the gain of each detector channel) are determined using measurements with radioactive sources, performed at different operating temperatures between -20°C and +20°C by placing the detector in a suitable climate chamber. The final part of the thesis involves the analysis of the calibration data and a discussion of the results.
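A per-channel energy calibration of the kind described (line centroids from radioactive sources measured against their known energies) reduces, in its simplest form, to a linear fit E = gain × channel + offset. The channel centroids below are invented for illustration; the real calibration repeats such fits per channel and per operating temperature.

```python
import numpy as np

def fit_gain(adc_channels, line_energies_kev):
    """Least-squares linear energy calibration from known source lines:
    returns (gain, offset) such that E ~ gain * channel + offset."""
    gain, offset = np.polyfit(adc_channels, line_energies_kev, 1)
    return gain, offset

# Hypothetical centroids for 55Fe (5.9 keV) and 241Am (59.5 keV) lines
channels = np.array([118.0, 1190.0])
energies = np.array([5.9, 59.5])
gain, offset = fit_gain(channels, energies)
```

With more than two lines the same fit also exposes residual non-linearity, which is part of what the climate-chamber campaign characterises across temperature.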
Abstract:
We report the first measurements of the moments--mean (M), variance (σ(2)), skewness (S), and kurtosis (κ)--of the net-charge multiplicity distributions at midrapidity in Au+Au collisions at seven energies, ranging from sqrt[sNN]=7.7 to 200 GeV, as a part of the Beam Energy Scan program at RHIC. The moments are related to the thermodynamic susceptibilities of net charge, and are sensitive to the location of the QCD critical point. We compare the products of the moments, σ(2)/M, Sσ, and κσ(2), with the expectations from Poisson and negative binomial distributions (NBDs). The Sσ values deviate from the Poisson baseline and are close to the NBD baseline, while the κσ(2) values tend to lie between the two. Within the present uncertainties, our data do not show nonmonotonic behavior as a function of collision energy. These measurements provide a valuable tool to extract the freeze-out parameters in heavy-ion collisions by comparing with theoretical models.
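The moment products compared to the Poisson baseline can be reproduced on synthetic data: for a Poisson distribution all three products equal unity, which the sketch below recovers within sampling error (the sample is synthetic, not STAR data).

```python
import numpy as np

def moment_products(samples):
    """Sample mean M, variance sigma^2, skewness S and excess kurtosis kappa
    of a multiplicity distribution, combined into the volume-independent
    products sigma^2/M, S*sigma and kappa*sigma^2."""
    x = np.asarray(samples, dtype=float)
    c = x - x.mean()
    var = (c ** 2).mean()
    sig = np.sqrt(var)
    skew = (c ** 3).mean() / sig ** 3
    kurt = (c ** 4).mean() / var ** 2 - 3.0  # excess kurtosis
    return {"s2_over_M": var / x.mean(),
            "S_sigma": skew * sig,
            "kappa_sigma2": kurt * var}

# For Poisson-distributed multiplicities, all three products are 1
rng = np.random.default_rng(3)
prods = moment_products(rng.poisson(10.0, 200_000))
```

Deviations of measured Sσ and κσ² from these baselines are what carry the sensitivity to net-charge susceptibilities and the QCD critical point.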