897 results for Mesh generation from image data
Abstract:
Nowadays, information security is a very important topic. In particular, wireless networks are experiencing widespread diffusion, thanks also to the increasing number of Internet of Things devices, which generate and transmit large amounts of data: protecting wireless communications is therefore of fundamental importance, ideally through a simple yet secure method. Physical Layer Security is an umbrella of techniques that leverage the characteristics of the wireless channel to secure the transmission. In particular, Physical Layer-based Key generation aims at allowing two users to generate a random symmetric key autonomously, hence without the aid of a trusted third party. Physical Layer-based Key generation relies on observations of the wireless channel, from which entropy is harvested; however, an attacker might possess a channel simulator, for example a Ray Tracing simulator, to replicate the channel between the legitimate users, guess the secret key, and break the security of the communication. This thesis work focuses on the feasibility of such a Ray Tracing attack: the assessment method consists of a set of channel measurements, taken under different channel conditions, which are then compared with the channel simulated by the ray tracer by computing the mutual information between measurements and simulations. Furthermore, the possibility of using Ray Tracing as a tool to evaluate the impact of channel parameters (e.g. the bandwidth or the directivity of the antenna) on Physical Layer-based Key generation is also presented. The measurements were carried out at the Barkhausen Institut gGmbH in Dresden (GE), in the framework of the existing cooperation agreement between BI and the Dept. of Electrical, Electronics and Information Engineering "G. Marconi" (DEI) at the University of Bologna.
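As a rough illustration of the comparison step described above, the sketch below estimates the mutual information between measured and simulated channel gains with a simple 2-D histogram. The array names, bin count, and Gaussian stand-in data are illustrative assumptions, not the thesis's actual measurement pipeline.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of I(X;Y) in bits between two 1-D samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()               # joint probability mass
    px = pxy.sum(axis=1, keepdims=True)     # marginal of X
    py = pxy.sum(axis=0, keepdims=True)     # marginal of Y
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Example: measured vs. ray-traced channel gains (synthetic stand-ins).
rng = np.random.default_rng(0)
measured = rng.normal(size=10_000)
simulated = measured + rng.normal(scale=0.5, size=10_000)  # correlated "simulation"
print(f"I(measured; simulated) = {mutual_information(measured, simulated):.2f} bits")
```

High mutual information between the two streams would indicate that the simulator captures enough of the channel to threaten the key-generation scheme; near-zero values suggest the attack is ineffective.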
Abstract:
In the near future, the LHC experiments will continue to be upgraded as the LHC luminosity increases from the design value of 10^34 cm^-2 s^-1 to 7.5 x 10^34 cm^-2 s^-1 with the HL-LHC project, to reach 3000 fb^-1 of accumulated statistics. After the end of a period of data collection, CERN will face a long shutdown to improve overall performance by upgrading the experiments and implementing more advanced technologies and infrastructures. In particular, ATLAS will upgrade parts of the detector, the trigger, and the data acquisition system. It will also implement new strategies and algorithms for processing and transferring the data to the final storage. This PhD thesis presents a study of a new pattern recognition algorithm to be used in the trigger system, the software designed to provide the information necessary to select physical events from background data. The idea is to use the well-known Hough Transform as an algorithm for detecting particle trajectories. The effectiveness of the algorithm has already been validated in the past, independently of particle physics applications, to detect generic shapes in images. Here, a software emulation tool is proposed for the hardware implementation of the Hough Transform, to reconstruct the tracks in the ATLAS Trigger and Data Acquisition system. Until now, it has never been implemented in electronics in particle physics experiments, and as a hardware implementation it would provide overall latency benefits. A comparison between the simulated data and the physical system was performed on a Xilinx UltraScale+ FPGA device.
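For readers unfamiliar with the technique, here is a minimal sketch of the generic line-detecting Hough Transform referred to above: each hit votes for every (rho, theta) parameter pair consistent with it, and the accumulator peak identifies the line. The ATLAS use case maps this onto track parameters and FPGAs, so this image-space Python version is only illustrative.

```python
import numpy as np

def hough_lines(points, img_shape, n_theta=180, n_rho=200):
    """Accumulate votes in (rho, theta) space for a set of (x, y) hits."""
    h, w = img_shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t           # rho for every theta
        idx = np.digitize(rho, rhos) - 1      # nearest rho bin
        acc[idx, np.arange(n_theta)] += 1     # one vote per theta
    return acc, rhos, thetas

# Hits lying on the line y = x, plus one outlier.
pts = [(i, i) for i in range(50)] + [(3, 40)]
acc, rhos, thetas = hough_lines(pts, (64, 64))
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak: rho = {rhos[r]:.1f}, theta = {np.degrees(thetas[t]):.1f} deg")
```

The same vote-and-find-peak structure is what makes the transform attractive for hardware: the accumulator updates are independent and can be parallelised on an FPGA.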
Abstract:
Unmanned Aerial Vehicles (UAVs) equipped with cameras have been rapidly deployed in a wide range of applications, such as smart cities, agriculture, and search and rescue. Even though UAV datasets exist, the number of open, high-quality UAV datasets is limited. We aim to overcome this lack of high-quality annotated data by developing a simulation framework for the parametric generation of synthetic data. The framework accepts input via a serializable format. The input specifies which environment preset is used and the objects to be placed in the environment, along with their position and orientation as well as additional information such as object color and size. The result is an environment that is able to produce UAV-typical data: RGB images from the UAV's camera, and the altitude, roll, pitch and yaw of the UAV. Beyond the image generation process, we improve the photorealism of the resulting image data by using synthetic-to-real transfer learning methods. Transfer learning focuses on storing knowledge gained while solving one problem and applying it to a different - although related - problem. This approach has been widely researched in related fields, and results demonstrate it to be an interesting area to investigate. Since simulated images are easy to create and synthetic-to-real translation has shown good quality results, we are able to generate pseudo-realistic images. Furthermore, object labels are inherently given, so we are capable of extending the already existing UAV datasets with realistic-quality images and high-resolution metadata. During the development of this thesis we obtained a result of 68.4% on UAVid, which can be considered a new state-of-the-art result on this dataset.
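A minimal sketch of what such a serializable scene description might look like, together with a small parser; the JSON schema and field names here are invented for illustration and are not the framework's actual format.

```python
import json
from dataclasses import dataclass

# Hypothetical scene description; the schema is illustrative only.
SCENE_JSON = """
{
  "environment": "urban_preset_01",
  "objects": [
    {"type": "car", "position": [12.0, 3.5, 0.0],
     "orientation_deg": [0, 0, 90], "color": "red", "scale": 1.0}
  ]
}
"""

@dataclass
class SceneObject:
    type: str
    position: list
    orientation_deg: list
    color: str
    scale: float

def load_scene(text: str):
    """Parse the serialized input into an environment id plus object list."""
    raw = json.loads(text)
    objects = [SceneObject(**o) for o in raw["objects"]]
    return raw["environment"], objects

env, objs = load_scene(SCENE_JSON)
print(env, objs[0].type, objs[0].position)
```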
Abstract:
Privacy issues and data scarcity in the PET field call for efficient methods to expand datasets via the synthetic generation of new data that cannot be traced back to real patients and that are also realistic. In this thesis, machine learning techniques were applied to 1001 amyloid-beta PET images which had undergone a diagnosis of Alzheimer's disease: the evaluations were 540 positive, 457 negative and 4 unknown. The Isomap algorithm was used as a manifold learning method to reduce the dimensions of the PET dataset; a numerical scale-free interpolation method was applied to invert the dimensionality reduction map. The interpolant was tested on the PET images via leave-one-out cross-validation (LOOCV), where the removed images were compared with the reconstructed ones using the mean SSIM index (MSSIM = 0.76 ± 0.06). The effectiveness of this measure is questioned, since it indicated slightly higher performance for a comparison method using PCA (MSSIM = 0.79 ± 0.06), which gave clearly poorer quality reconstructed images than those recovered by the numerical inverse mapping. Ten synthetic PET images were generated and, after having been mixed with ten originals, were sent to a team of clinicians for a visual assessment of their realism; no significant agreement was found either between clinicians and the true image labels or among the clinicians, meaning that original and synthetic images were indistinguishable. The future perspective of this thesis points to the improvement of the amyloid-beta PET research field by increasing the available data, overcoming the constraints of data acquisition and privacy issues. Potential improvements can be achieved via refinements of the manifold learning and inverse mapping stages of the PET image analysis, by exploring different combinations of algorithm parameters and by applying other non-linear dimensionality reduction algorithms. A final prospect of this work is the search for new methods to assess image reconstruction quality.
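A compact sketch of the two stages named above, using scikit-learn's Isomap and scikit-image's SSIM on stand-in data; the thesis's scale-free interpolation is replaced by an off-the-shelf RBF interpolant purely for illustration, and small digit images stand in for PET volumes.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from scipy.interpolate import RBFInterpolator
from skimage.metrics import structural_similarity as ssim

# Stand-in data: 8x8 digit images instead of amyloid-beta PET volumes.
X = load_digits().data[:200] / 16.0                 # (200, 64), scaled to [0, 1]

# 1) Manifold learning: embed the images in a low-dimensional space.
emb = Isomap(n_components=5, n_neighbors=10).fit_transform(X)

# 2) Invert the reduction map with an interpolant fitted on all samples
#    except the first, so reconstructing sample 0 mimics one LOOCV fold.
inverse = RBFInterpolator(emb[1:], X[1:], kernel='thin_plate_spline')

recon = inverse(emb[:1]).clip(0, 1).reshape(8, 8)   # reconstructed held-out image
orig = X[0].reshape(8, 8)
print("SSIM of held-out reconstruction:", ssim(orig, recon, data_range=1.0))
```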
Abstract:
A method using the ring-oven technique for pre-concentration in filter paper discs and near infrared hyperspectral imaging is proposed to identify four detergent and dispersant additives, and to determine their concentration in gasoline. Different approaches were compared to select the image data processing that best gathers the relevant spectral information. This was attained by selecting the pixels of the region of interest (ROI) with a pre-calculated threshold on the PCA scores, arranged as histograms, to define the spectra set; summing the selected spectra to achieve representativeness; and compensating for the superimposed filter paper spectral information, also supported by score histograms for each individual sample. The best classification model was achieved using linear discriminant analysis with a genetic algorithm (LDA/GA), whose correct classification rate on the external validation set was 92%. Prior classification of the type of additive present in the gasoline is necessary to define the PLS model required for its quantitative determination. Considering that two of the additives studied show high spectral similarity, a single PLS regression model was constructed to predict their content in gasoline, while two additional models were used for the remaining additives. The external validation of these regression models showed mean percentage errors of prediction varying from 5 to 15%.
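A minimal sketch of the ROI selection step on a synthetic hyperspectral cube: threshold the first principal component scores and sum the retained spectra. The percentile threshold is a simple stand-in for the paper's histogram-derived cutoff.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic hyperspectral cube: 64 x 64 pixels, 100 spectral channels,
# with a pre-concentrated "analyte spot" of higher signal in the centre.
rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 100))
cube[20:40, 20:40, :] += 2.0

pixels = cube.reshape(-1, cube.shape[-1])               # flatten spatial dims
scores = PCA(n_components=1).fit_transform(pixels)[:, 0]
scores = np.abs(scores)                                 # PC sign is arbitrary

# Keep only pixels whose PC1 score exceeds the threshold (a percentile
# here, standing in for the histogram-derived value) ...
roi = pixels[scores > np.percentile(scores, 90)]

# ... and sum the selected spectra into one representative spectrum.
representative = roi.sum(axis=0)
print(roi.shape, representative.shape)
```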
Abstract:
The VISTA near infrared survey of the Magellanic System (VMC) will provide deep YJKs photometry reaching stars at the oldest turn-off point throughout the Magellanic Clouds (MCs). As part of the preparation for the survey, we aim to assess the accuracy of the star formation history (SFH) that can be expected from VMC data, in particular for the Large Magellanic Cloud (LMC). To this aim, we first simulate VMC images containing not only the LMC stellar populations but also the foreground Milky Way (MW) stars and background galaxies. The simulations cover the whole range of density of LMC field stars. We then perform aperture photometry on these simulated images, assess the expected levels of photometric errors and incompleteness, and apply the classical technique of SFH recovery based on the reconstruction of colour-magnitude diagrams (CMDs) via the minimisation of a chi-squared-like statistic. We verify that the foreground MW stars are accurately recovered by the minimisation algorithms, whereas the background galaxies can be largely eliminated from the CMD analysis thanks to their particular colours and morphologies. We then evaluate the expected errors in the recovered star formation rate as a function of stellar age, SFR(t), starting from models with a known age-metallicity relation (AMR). It turns out that, for a given sky area, the random errors for ages older than ~0.4 Gyr seem to be independent of the crowding. This can be explained by a counterbalancing effect between the loss of stars from a decrease in the completeness and the gain of stars from an increase in the stellar density. For a spatial resolution of ~0.1 deg^2, the random errors in SFR(t) will be below 20% for this wide range of ages. On the other hand, due to the lower statistics for stars younger than ~0.4 Gyr, the outer LMC regions will require larger areas to achieve the same level of accuracy in the SFR(t). If we consider the AMR as unknown, the SFH-recovery algorithm is able to accurately recover the input AMR, at the price of an increase of the random errors in SFR(t) by a factor of about 2.5. Experiments of SFH recovery performed for varying distance modulus and reddening indicate that these parameters can be determined with (relative) accuracies of Delta(m-M)_0 ~ 0.02 mag and Delta E(B-V) ~ 0.01 mag, for each individual field over the LMC. The propagation of these errors into the SFR(t) implies systematic errors below 30%. This level of accuracy in the SFR(t) can reveal significant imprints in the dynamical evolution of this unique and nearby stellar system, as well as possible signatures of the past interaction between the MCs and the MW.
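In spirit, the SFH recovery described above fits the observed colour-magnitude diagram as a linear combination of single-age partial models. The toy sketch below does this with non-negative least squares on random Hess diagrams; the actual VMC analysis uses a dedicated chi-squared-like minimisation, so this is only a conceptual stand-in.

```python
import numpy as np
from scipy.optimize import nnls

# Toy Hess diagrams: each column is a flattened CMD of a single-age
# "partial model"; the observed CMD is a linear combination of them.
rng = np.random.default_rng(2)
n_bins, n_ages = 300, 6
partials = rng.random((n_bins, n_ages))          # partial-model CMDs
true_sfr = np.array([3.0, 0.5, 2.0, 0.0, 1.0, 4.0])
observed = rng.poisson(partials @ true_sfr)      # Poisson-sampled star counts

# SFH recovery: find non-negative SFR(t) coefficients minimising the
# least-squares (chi-squared-like) distance between model and data.
recovered, _ = nnls(partials, observed.astype(float))
print("true SFR(t):     ", true_sfr)
print("recovered SFR(t):", recovered.round(2))
```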
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem is how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared with the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variation of the average degree k, its network recovery rate decreasing as k increases. The size of the temporal signal was important for the inference method to reach better accuracy in the network identification rate, presenting very good results with small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
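A toy sketch of the feature selection step in aspect (2): fix a target gene and rank every other gene as a predictor of its next-step expression, here scored with scikit-learn's mutual information estimator. The data and the single planted dependency are synthetic, and the scoring criterion is one reasonable choice rather than the paper's exact method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Toy temporal expression matrix: genes x time points.
rng = np.random.default_rng(3)
n_genes, n_time = 20, 50
expr = rng.normal(size=(n_genes, n_time))
# Plant one regulatory link: gene 2 at time t drives gene 5 at time t+1.
expr[5] = 0.9 * np.roll(expr[2], 1) + 0.1 * rng.normal(size=n_time)

# Feature selection: fix a target gene and rank all other genes as
# candidate predictors of its next-step expression.
target = 5
y = expr[target, 1:]                              # target at time t+1
X = np.delete(expr, target, axis=0)[:, :-1].T     # other genes at time t
scores = mutual_info_regression(X, y, random_state=0)

genes = [g for g in range(n_genes) if g != target]
print(f"best predictor of gene {target}: gene {genes[int(scores.argmax())]}")
```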
Abstract:
Today, several different unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named Enhanced Independent Component Analysis Mixture Model (EICAMM), built by proposing some modifications to the Independent Component Analysis Mixture Model (ICAMM). These improvements address some of the model's limitations and aim to make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segmenting images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results obtained by the proposals presented herein. (C) 2008 Published by Elsevier B.V.
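A minimal sketch of the pre-processing chain on a toy image; since SCS is not available off the shelf, a Gaussian filter stands in for the denoising step, followed by the Sobel gradient magnitude.

```python
import numpy as np
from scipy import ndimage

# Toy image: bright square on a noisy background.
rng = np.random.default_rng(4)
img = rng.normal(scale=0.1, size=(64, 64))
img[16:48, 16:48] += 1.0

# Stand-in denoising step (the paper uses Sparse Code Shrinkage; a
# Gaussian filter is substituted here purely for illustration).
denoised = ndimage.gaussian_filter(img, sigma=1.0)

# Sobel edge detection: gradient magnitude from horizontal/vertical filters.
gx = ndimage.sobel(denoised, axis=1)
gy = ndimage.sobel(denoised, axis=0)
edges = np.hypot(gx, gy)
print("strongest edge response:", float(edges.max()))
```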
Abstract:
Aquatic humic substances (AHS) isolated from the Negro river during two characteristic seasons, winter and summer, corresponding to the flood and dry periods, were structurally characterized by (13)C nuclear magnetic resonance. Subsequently, AHS aqueous solutions were irradiated with a polychromatic lamp (290-475 nm) and monitored through their total organic carbon (TOC) content, ultraviolet-visible (UV-vis) absorbance, fluorescence and Fourier transform infrared (FTIR) spectroscopy. As a result, photobleaching of up to 80% was observed after 48 h of irradiation. Conformational rearrangements occurred and structures of low molecular complexity were formed during the irradiation, as deduced from the pH decrease and the shift of the fluorescence to lower wavelengths. Additionally, significant mineralization with the formation of CO2, CO, and inorganic carbon compounds was registered, as inferred from TOC losses of up to 70%. The differences in photodegradation between samples, expressed by photobleaching efficiency, were enhanced in the summer sample and related to its elevated aromatic content. Aromatic structures are assumed to have high autosensitization capacity, mediated by free radical generation from quinone and phenolic moieties.
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard for estimating ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method for estimating ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day^-1. For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day^-1). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day^-1. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options for estimating ETo than the FAO PM method, since the RMSEs of these methods, respectively 0.79 and 0.83 mm day^-1, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day^-1). (C) 2009 Elsevier B.V. All rights reserved.
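For reference, a short implementation of the Hargreaves-Samani equation, one of the temperature-only alternatives evaluated above; the input values in the example are illustrative, not taken from the Ontario dataset.

```python
import numpy as np

def eto_hargreaves(tmax, tmin, ra):
    """Hargreaves-Samani (1985) reference evapotranspiration [mm/day].

    tmax, tmin : daily maximum / minimum air temperature [deg C]
    ra         : extraterrestrial radiation, expressed in mm/day of
                 evaporation equivalent
    """
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Example summer day (illustrative values only).
print(f"ETo = {eto_hargreaves(tmax=28.0, tmin=15.0, ra=16.5):.2f} mm/day")
```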
Abstract:
The solution structure of robustoxin, the lethal neurotoxin from the Sydney funnel-web spider Atrax robustus, has been determined from 2D H-1 NMR data. Robustoxin is a polypeptide of 42 residues cross-linked by four disulphide bonds, whose connectivities were determined from NMR data and trial structure calculations to be 1-15, 8-20, 14-31 and 16-42 (a 1-4/2-6/3-7/5-8 pattern). The structure consists of a small three-stranded, anti-parallel beta-sheet and a series of interlocking gamma-turns at the C-terminus. It also contains a cystine knot, thus placing it in the inhibitor cystine knot motif family of structures, which includes the omega-conotoxins and a number of plant and animal toxins and protease inhibitors. Robustoxin contains three distinct charged patches on its surface, and an extended loop that includes several aromatic and non-polar residues. Both of these structural features may play a role in its binding to the voltage-gated sodium channel. (C) 1997 Federation of European Biochemical Societies.
Abstract:
There is concern that Pacific Island economies dependent on remittances of migrants will endure foreign exchange shortages and falling living standards as remittance levels fall because of lower migration rates and the belief that migrants' willingness to remit declines over time. The empirical validity of the remittance-decay hypothesis has never been tested. From survey data on Tongan and Western Samoan migrants in Sydney, this paper estimates remittance functions using multivariate regression analysis. It is found that the remittance-decay hypothesis has no empirical validity, and migrants are motivated by factors other than altruistic family support, including asset accumulation and investment back home.
Abstract:
Two major factors are likely to impact the utilisation of remotely sensed data in the near future: (1) an increase in the number and availability of commercial and non-commercial image data sets with a range of spatial, spectral and temporal dimensions, and (2) increased access to image display and analysis software through GIS. A framework was developed to provide an objective approach to selecting remotely sensed data sets for specific environmental monitoring problems. Preliminary applications of the framework have provided successful approaches for monitoring disturbed and restored wetlands in southern California.
Abstract:
Open system pyrolysis (heating rate 10 degrees C/min) of a coal maturity (vitrinite reflectance, VR) sequence (0.5%, 0.8% and 1.4% VR) demonstrates that there are two stages of thermogenic methane generation from Bowen Basin coals. The first and major stage shows a steady increase in methane generation, maximising at 570 degrees C, corresponding to a VR of 2-2.5%. This is followed by a less intense methane generation which has not yet maximised by 800 degrees C (equivalent to a VR of 5%). Heavier (C2+) hydrocarbons are generated up to 570 degrees C, after which only the C-1 (CH4, CO and CO2) gases are produced. The main phase of heavy hydrocarbon generation occurs between 420 and 510 degrees C. Over this temperature range, methane accounts for only a minor component, whereas the wet gases (C-2-C-5) are either in equal abundance or more abundant by a factor of two than the liquid hydrocarbons. The yields of the non-hydrocarbon gases CO2 and CO are greater than that of methane during the early stages of gas generation from an immature coal, subordinate to methane during the main phase of methane generation, after which they are again dominant. Compositional data for desorbed and produced coal seam gases from the Bowen Basin show that CO2 and wet gases are a minor component. This discrepancy between the proportion of wet gas components produced during open system pyrolysis and that observed in naturally matured coals may be the result of preferential migration of wet gas components, of dilution by methane generated during secondary cracking of bitumen, or of kinetic effects associated with different activation energies for the production of individual hydrocarbon gases. Extrapolation of the results of artificial pyrolysis of the main organic components in coal to geologically significant heating rates suggests that isotopically light methane, with delta(13)C down to -50 parts per thousand, can be generated. Carbon isotope depletions in C-13 are further enhanced, however, as a result of trapping of gases over selected rank levels (instantaneous generation), which is a probable explanation for the range of delta(13)C values we have recorded in methane desorbed from Bowen Basin coals (-51 +/- 9 parts per thousand). Pervasive carbonate-rich veins in Bowen Basin coals are the product of magmatism-related hydrothermal activity. Furthermore, the pyrolysis results suggest that an additional organic carbon source from CO2 released at any stage during the maturation history could mix in varying proportions with CO2 from the other sources. This interpretation is supported by the C and O isotopic ratios of carbonates, which indicate mixing between magmatic and meteoric fluids. Also, the steep slope of the C and O isotope correlation trend suggests that the carbonates were deposited over a very narrow temperature interval basin-wide, or at relatively high temperatures (i.e., greater than 150 degrees C) where mineral-fluid oxygen isotope fractionations are small. These temperatures are high enough for catagenic production of methane and higher hydrocarbons from the coal and coal-derived bitumen. The results suggest that a combination of thermogenic generation of methane and thermodynamic processes associated with CH4/CO2 equilibria are the two most important factors controlling the primary isotope and molecular composition of coal seam gases in the Bowen Basin. Biological processes are regionally subordinate but may be locally significant. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
Traditional field sampling approaches for ecological studies of restored habitat can only cover small areas in detail, can be time consuming, and are often invasive and destructive. Spatially extensive and non-invasive remotely sensed data can make field sampling more focused and efficient. The objective of this work was to investigate the feasibility and accuracy of hand-held and airborne remotely sensed data for estimating vegetation structural parameters for an indicator plant species in a restored wetland. High spatial resolution, digital, multispectral camera images were captured from an aircraft over Sweetwater Marsh (San Diego County, California) during each growing season between 1992 and 1996. Field data were collected concurrently, including plant heights, proportional ground cover and canopy architecture type, and spectral radiometer measurements. Spartina foliosa (Pacific cordgrass) is the indicator species for the restoration monitoring. A conceptual model summarizing the controls on the spectral reflectance properties of Pacific cordgrass was established. Empirical models were developed relating the stem length, density, and canopy architecture of cordgrass to normalized difference vegetation index (NDVI) values. The most promising results were obtained from empirical estimates of total ground cover using image data that had been stratified into high, middle, and low marsh zones. As part of ongoing restoration monitoring activities, this model is being used to provide maps of estimated vegetation cover.
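A minimal sketch of the NDVI computation underlying the empirical models above, assuming the red and near-infrared bands are available as reflectance arrays; the sample values are invented.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0

# Toy 2x2 reflectance bands: vegetated pixels have high NIR, low red.
nir = np.array([[0.60, 0.55], [0.20, 0.18]])
red = np.array([[0.10, 0.12], [0.15, 0.16]])
print(ndvi(nir, red).round(2))
```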