965 results for object modeling from images


Relevance:

100.00%

Publisher:

Abstract:

Dinoflagellate cysts were analysed from IMAGES core MD952042 (37°48′N; 10°01′W) retrieved from the Tagus Abyssal Plain. Previous results of stable isotope and magnetic susceptibility measurements, as well as of planktonic foraminiferal temperature reconstruction from this core, suggest the occurrence of "Heinrich-like events" (i.e. large ice-sheet decay) during Marine Isotopic Stage 5 (MIS 5). Dinoflagellate assemblages of this time period have revealed six dinocyst events that are characterised by peaks in Bitectatodinium tepikiense percentages. These events occur synchronously with the "Heinrich-like events" previously identified. They are coeval with major retreats of the forest on land, indicating, therefore, drastic changes in the regional climate. However, results from the Ice-Rafted Detritus (IRD) analysis of the >150 µm lithic fraction show that MIS 5 of MD952042 recorded only one significant input of iceberg discharge, located at the MIS 6/MIS 5 transition. It seems, therefore, that this is the only event that could be called a "true Heinrich event".

Relevance:

100.00%

Publisher:

Abstract:

This research covers the topic of social housing and its relation to thermal comfort, applied to an architectural and urban intervention on a site in the central urban area of Macaíba/RN, Brazil. One of its main goals is to reflect on the role of design and of alternative building materials in the search for better performance. The hypothesis is that, by changing design parameters and the choice of materials, it is possible to achieve better thermal performance. To test this, computer simulations of thermal performance and natural ventilation were carried out using Computational Fluid Dynamics (CFD). The thermal simulations followed the methodology proposed in the dissertation by Negreiros (2010), which estimates the percentage of comfort hours obtained throughout the year, while the natural ventilation analysis was based on images extracted from the CFD results. From the building model designed, an analytical framework was fitted that compares three different housing proposals, evaluating the thermal performance of the buildings together with the design variables, construction materials and costs. It is concluded that the study confirmed the general hypotheses set at the start: it was possible to quantify the results and to show that design and construction materials are of equivalent importance and that, when combined, they lead to potential gains in thermal performance.
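
As an illustration of the comfort-hours metric mentioned above, the sketch below is a minimal assumption-based example, not the dissertation's actual procedure: it computes the percentage of hours in a year whose temperature falls inside a fixed comfort band, with hypothetical band limits and input series.

# Hypothetical sketch: percentage of comfort hours from a series of hourly
# temperatures.  The comfort band (23-29 °C) is an assumed placeholder, not a
# value taken from Negreiros (2010).
def comfort_hours_percentage(hourly_temps, lower=23.0, upper=29.0):
    comfortable = sum(1 for t in hourly_temps if lower <= t <= upper)
    return 100.0 * comfortable / len(hourly_temps)

# A full year of simulated hourly values (8760 numbers) would be passed in;
# a short list stands in for the simulation output here.
print(comfort_hours_percentage([22.5, 24.0, 26.5, 30.1, 28.0]))  # 60.0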

Relevance:

100.00%

Publisher:

Abstract:

Object-oriented modelling of the cardiovascular control system under dialysis conditions was carried out using an electrical analogy, in which components are connected through interconnections. This model represents the differential equations of the cardiovascular system and of the baroreceptor control system, as well as the dynamic equations for the exchange of fluids and solutes in the haemodialyser system. Based on this model, simulation experiments were performed under normal conditions and in situations of haemorrhage, blood transfusion, and ultrafiltration and fluid infusion during haemodialysis treatment. The results show, first, the effectiveness of the baroreceptor system in compensating for the arterial hypotension induced by the haemorrhage and blood transfusion episodes. Second, they show the response of the control system to different ultrafiltration rates during haemodialysis, and optimal values for adequate operation are suggested.
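
A minimal sketch of the kind of lumped-parameter simulation described above is given below. It is an illustrative, assumption-based example (a single arterial compartment with a proportional baroreflex acting on peripheral resistance), not the authors' model; all parameter names and values are hypothetical.

# Illustrative only: one-compartment "electrical analogy" of arterial pressure
# with a proportional baroreceptor feedback on peripheral resistance.
def simulate(duration_s=120.0, dt=0.01, q_in=95.0):
    C = 1.5        # arterial compliance (ml/mmHg), assumed value
    R = 1.0        # baseline peripheral resistance (mmHg*s/ml), assumed value
    p_set = 93.0   # baroreceptor set-point pressure (mmHg), assumed value
    gain = 0.02    # feedback gain (dimensionless), assumed value
    p = 80.0       # start below the set point, as after a haemorrhage
    t = 0.0
    while t < duration_s:
        r_eff = R * (1.0 - gain * (p - p_set))   # baroreflex raises resistance when pressure is low
        dp = (q_in - p / r_eff) / C              # C * dP/dt = inflow - outflow
        p += dp * dt
        t += dt
    return p

print(round(simulate(), 1))   # pressure recovers towards the set point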

Relevance:

100.00%

Publisher:

Abstract:

With the rise of smart phones, lifelogging devices (e.g. Google Glass) and the popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted: Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. Because of this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research, the semantic gap still largely exists and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year? This thesis proposes the exploitation of an image’s context, which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine text, content-based (e.g. # of faces present) and contextual (e.g. day-of-the-week taken) signals for tag recommendation purposes, achieving up to a 75% improvement in precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to the (slower moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster moving, yet entirely textual, datasets. In part II, we also highlight the merits of diversifying tag recommendation lists, before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, before (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier. Finally, in chapter 9 we employ blur detection and perceptual-hash clustering in order to remove noisy images from lifelogs, before combining visual and geo-temporal signals in order to capture a user’s “key moments” within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
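
As a toy illustration of the tag co-occurrence idea described above (not the thesis implementation, and with invented example data), the sketch below recommends tags that historically co-occur with those already attached to a photo.

from collections import Counter
from itertools import combinations

# Assumed toy "historical" photos; a real system would mine millions of Flickr images.
historical_photos = [
    {"ny", "timessquare", "night"},
    {"ny", "timessquare", "crowd"},
    {"newyear", "fireworks", "night"},
]

# Count how often each ordered pair of tags appears together.
cooccurrence = Counter()
for tags in historical_photos:
    for a, b in combinations(sorted(tags), 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

def recommend(existing_tags, k=5):
    """Suggest up to k new tags that co-occur most often with the existing ones."""
    scores = Counter()
    for (a, b), n in cooccurrence.items():
        if a in existing_tags and b not in existing_tags:
            scores[b] += n
    return [tag for tag, _ in scores.most_common(k)]

print(recommend({"ny"}))   # e.g. ['timessquare', 'night', 'crowd']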

Relevance:

100.00%

Publisher:

Abstract:

Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from such a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
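
The sketch below illustrates the estimation-of-distribution loop behind approaches like the BOA described above. For brevity it learns independent per-nurse rule probabilities (a univariate model in the spirit of PBIL/UMDA) rather than a full Bayesian network, and the fitness function and problem sizes are invented stand-ins, not the real nurse-scheduling objective.

import random

N_NURSES, N_RULES = 10, 4

def fitness(solution):
    # Assumed toy objective: reward rule 2 for even-indexed nurses, rule 0 otherwise.
    return sum(1 for i, r in enumerate(solution) if r == (2 if i % 2 == 0 else 0))

def sample(probs):
    # Draw one scheduling rule per nurse from the current probabilistic model.
    return [random.choices(range(N_RULES), weights=p)[0] for p in probs]

probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_NURSES)]
for generation in range(30):
    population = [sample(probs) for _ in range(50)]
    promising = sorted(population, key=fitness, reverse=True)[:10]
    for i in range(N_NURSES):   # re-estimate rule probabilities from the promising set
        counts = [sum(1 for s in promising if s[i] == r) for r in range(N_RULES)]
        probs[i] = [(c + 1) / (len(promising) + N_RULES) for c in counts]  # Laplace smoothing

print(fitness(sample(probs)))   # typically close to N_NURSES after a few generations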

Relevance:

100.00%

Publisher:

Abstract:

A crosswell data set contains a range of angles limited only by the geometry of the source and receiver configuration, the separation of the boreholes and the depth to the target. However, the wide-angle reflections present in crosswell imaging result in amplitude-versus-angle (AVA) features not usually observed in surface data. These features include reflections from angles that are near critical and beyond critical for many of the interfaces; some of these reflections are visible only for a small range of angles, presumably near their critical angle. High-resolution crosswell seismic surveys were conducted over a Silurian (Niagaran) reef at two fields in northern Michigan, Springdale and Coldspring. The Springdale wells extended to much greater depths than the reef, and imaging was conducted from above and from beneath the reef. Combining the results from images obtained from above with those from beneath provides additional information, first by exhibiting ranges of angles that are different for the two images, especially for reflectors at shallow depths, and second by providing additional constraints on the solutions of the Zoeppritz equations. Inversion of seismic data for impedance has become a standard part of the workflow for quantitative reservoir characterization. Inversion of crosswell data using either deterministic or geostatistical methods can, however, lead to poor results because of the phase change beyond the critical angle, so the simultaneous pre-stack inversion of partial angle stacks may be best conducted with angles restricted to less than critical. Deterministic inversion is designed to yield only a single (best-fit) model of elastic properties, while geostatistical inversion produces multiple models (realizations) of elastic properties, lithology and reservoir properties. Geostatistical inversion produces results with far more detail than deterministic inversion. The magnitude of the difference in detail between the two types of inversion becomes increasingly pronounced for thinner reservoirs, particularly those beyond the vertical resolution of the seismic. For any interface imaged from above and from beneath, the resulting AVA characters must arise from identical contrasts in elastic properties in the two sets of images, albeit in reverse order. An inversion approach that handles both datasets simultaneously, at pre-critical angles, is demonstrated in this work. The main exploration problem for carbonate reefs is determining the porosity distribution. Images of elastic properties, obtained from deterministic and geostatistical simultaneous inversion of a high-resolution crosswell seismic survey, were used to obtain the internal structure and reservoir properties (porosity) of a Niagaran Michigan reef. The images obtained are the best of any Niagaran pinnacle reef to date.
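
For background (a standard result, not taken from this study): at pre-critical angles the AVA behaviour referred to above is commonly approximated by Shuey's two-term linearisation of the Zoeppritz equations,

R(\theta) \approx R_0 + G \, \sin^2\theta,

where R_0 is the normal-incidence reflection coefficient and G the AVA gradient. The linearisation breaks down near and beyond the critical angle, which is one reason to restrict simultaneous pre-stack inversion to pre-critical angles, as suggested above.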

Relevance:

100.00%

Publisher:

Abstract:

There is a shortage of experimentally determined strains during sheet metal shearing. Such data are a requisite to validate shearing models and to simulate the shearing process. In this work, strain fields were continuously measured during shearing of a medium-strength and a high-strength steel sheet, using digital image correlation. Preliminary studies based on finite element simulations suggested that the effective surface strains are a good approximation of the bulk strains below the surface. The experiments were performed in a symmetric set-up with large stiffness and stable tool clearances, using various combinations of tool clearance and clamping configuration. Due to large deformations, strains were measured from images captured in a series of steps from the start of shearing to final fracture. Both the Cauchy and Hencky strain measures were considered, but the difference between them was found negligible with the number of increments used (about 20 to 50). Force-displacement curves were also determined for the various experimental conditions. The measured strain fields displayed a thin band of large strain between the tool edges. Shearing with two clamps resulted in a symmetric strain band, whereas there was an extended area of large strains around the tool on the unclamped side when shearing with one clamp. Furthermore, one or two cracks were visible on most of the samples close to the tool edges well before final fracture. The fracture strain was larger for the medium-strength material than for the high-strength material and increased with increasing clearance.
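
For reference (standard definitions, not results from the study): for a stretch ratio \lambda, the Cauchy (engineering) strain is \varepsilon_C = \lambda - 1, while the Hencky (logarithmic) strain is

\varepsilon_H = \ln\lambda = \ln(1 + \varepsilon_C) \approx \varepsilon_C - \tfrac{1}{2}\varepsilon_C^2 + \dots,

so when the total deformation is accumulated over many small increments the two measures differ only at second order per increment, which is consistent with the negligible difference reported for 20 to 50 increments.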

Relevance:

100.00%

Publisher:

Abstract:

Water use control methods and water resources planning are of high priority. In irrigated agriculture, the right way to save water is to increase water use efficiency through better management. The present work validates procedures and methodologies using remote sensing to determine the water availability in the soil at each moment, giving the opportunity to apply the water depth strictly necessary to optimise crop growth (optimum irrigation timing and irrigation amount). The analysis is applied to the Irrigation District of Divor, Évora, using 7 experimental plots, which are areas irrigated by centre-pivot systems and cultivated with maize. Data were determined from images of the cultivated surface obtained by satellite and integrated with atmosphere and crop parameters to calculate biophysical indicators and indices of water stress in the vegetation: the Normalized Difference Vegetation Index (NDVI), Kc, and Kcb. From these, evapotranspiration (ETc) was estimated and used to calculate the crop water requirement, together with the timing and the amount of irrigation water to allocate. Although remote sensing data available from satellite imagery presented some practical constraints, the study contributes to the validation of a new methodology that can be used for irrigation management of a large irrigated area, more easily and at lower cost than the traditional FAO-recommended crop coefficient method. The remote sensing based methodology can also contribute to significant savings of irrigation water.
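
For context, the indicators mentioned above are normally computed with the standard definitions (not specific to this study):

NDVI = (\rho_{NIR} - \rho_{RED}) / (\rho_{NIR} + \rho_{RED}),
ET_c = K_c \, ET_0 \quad \text{(single coefficient)}, \qquad ET_c = (K_{cb} + K_e) \, ET_0 \quad \text{(dual coefficient, FAO-56)},

where \rho_{NIR} and \rho_{RED} are the near-infrared and red surface reflectances, ET_0 is the reference evapotranspiration, K_{cb} the basal crop coefficient and K_e the soil evaporation coefficient; empirical NDVI-to-Kcb relations are typically what allow satellite images to drive the irrigation scheduling.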

Relevance:

100.00%

Publisher:

Abstract:

The job of a historian is to understand what happened in the past, resorting in many cases to written documents as a first-hand source of information. Text, however, is not the only source of knowledge. Pictorial representations have also accompanied the main events of the historical timeline. In particular, the opportunity of visually representing circumstances has bloomed since the invention of photography, with the possibility of capturing specific events in real time. Thanks to the widespread use of digital technologies (e.g. smartphones and digital cameras), networking capabilities and the consequent availability of multimedia content, the academic and industrial research communities have developed artificial intelligence (AI) paradigms with the aim of inferring, transferring and creating new layers of information from images, videos, etc. Now, while AI communities are devoting much of their attention to analysing digital images, from a historical research standpoint more interesting results may be obtained by analysing analog images representing the pre-digital era. Within this scenario, the aim of this work is to analyse a collection of analog documentary photographs, building upon state-of-the-art deep learning techniques. In particular, the analysis carried out in this thesis aims at producing two results: (a) estimating the date of an image, and (b) recognizing its background socio-cultural context, as defined by a group of historical-sociological researchers. Given these premises, the contribution of this work amounts to: (i) the introduction of a historical dataset of “Family Album” images spanning the whole twentieth century, (ii) the introduction of a new classification task regarding the identification of the socio-cultural context of an image, and (iii) the exploitation of different deep learning architectures to perform image dating and image socio-cultural context classification.
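
A minimal sketch of the kind of architecture alluded to above is shown below. It is an assumption-based illustration (a shared ResNet-50 backbone with two classification heads, hypothetical class counts, untrained weights), not the thesis implementation.

import torch
import torch.nn as nn
from torchvision import models

class FamilyAlbumClassifier(nn.Module):
    """Shared CNN backbone with one head for dating and one for socio-cultural context."""
    def __init__(self, n_decades=10, n_contexts=8):   # class counts are assumed
        super().__init__()
        backbone = models.resnet50(weights=None)       # pretrained weights could be loaded here
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
        self.date_head = nn.Linear(2048, n_decades)
        self.context_head = nn.Linear(2048, n_contexts)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.date_head(h), self.context_head(h)

model = FamilyAlbumClassifier()
dummy_batch = torch.randn(2, 3, 224, 224)              # two RGB photographs
date_logits, context_logits = model(dummy_batch)
print(date_logits.shape, context_logits.shape)         # torch.Size([2, 10]) torch.Size([2, 8])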

Relevance:

100.00%

Publisher:

Abstract:

Background: There is wide variation in the recurrence risk of non-small-cell lung cancer (NSCLC) within the same Tumor Node Metastasis (TNM) stage, suggesting that other parameters are involved in determining this probability. Radiomics allows the extraction of quantitative information from images that can be used for clinical purposes. The primary objective of this study is to develop a radiomic prognostic model that predicts 3-year disease-free survival (DFS) of resected early-stage (ES) NSCLC patients. Material and Methods: 56 pre-surgery non-contrast Computed Tomography (CT) scans were retrieved from the PACS of our institution and anonymized. They were then automatically segmented with an open-access deep learning pipeline and reviewed by an experienced radiologist to obtain 3D masks of the NSCLC. Images and masks underwent resampling, normalization and discretization. From the masks, hundreds of Radiomic Features (RF) were extracted using Py-Radiomics. The RF were then reduced to select the most representative features. The remaining RF were used in combination with clinical parameters to build a DFS prediction model using leave-one-out cross-validation (LOOCV) with Random Forest. Results and Conclusion: Poor agreement between the radiologist and the automatic segmentation algorithm (Dice score of 0.37) was found. Therefore, another experienced radiologist manually segmented the lesions and only stable and reproducible RF were kept. 50 RF demonstrated a high correlation with DFS, but only one was confirmed when clinicopathological covariates were added: Busyness, a Neighbouring Gray Tone Difference Matrix feature (HR 9.610). 16 clinical variables (which comprised TNM) were used to build the LOOCV model, which demonstrated a higher Area Under the Curve (AUC) when RF were included in the analysis (0.67 vs 0.60), but the difference was not statistically significant (p = 0.5147).
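
(For reference, the Dice score quoted above is 2|A ∩ B| / (|A| + |B|) for two segmentation masks A and B.) The sketch below illustrates a leave-one-out evaluation of the general kind described; it is a hypothetical example on synthetic data, not the study's code or dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 17))       # 16 clinical variables + 1 radiomic feature (synthetic)
y = rng.integers(0, 2, size=56)     # 3-year DFS event labels (synthetic)

probs = np.empty(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    probs[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

print("LOOCV AUC:", round(roc_auc_score(y, probs), 2))   # ~0.5 on random data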

Relevance:

100.00%

Publisher:

Abstract:

Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors, such as LiDAR and ToF. The topic has attracted the attention of an increasingly wide audience thanks to the great number of application domains, such as autonomous driving, robotic navigation and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which has granted deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these findings, stereo matching still presents many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labeled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves their generalization capabilities in the presence of non-Lambertian surfaces. Despite being a further step towards tackling the aforementioned challenge, Booster includes a rather small number of annotated images and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
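
For context, the reason sub-pixel precision matters in stereo matching (a standard relation, not a result of the thesis): for a focal length f expressed in pixels, baseline B and disparity d, the triangulated depth is

Z = \frac{f B}{d},

so a disparity error \Delta d propagates to a depth error of roughly |\Delta Z| \approx \frac{Z^2}{f B} |\Delta d|, growing quadratically with distance; high-resolution imagery mitigates this by increasing f in pixel units.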

Relevance:

60.00%

Publisher:

Abstract:

In this paper we report the degree of reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques. This is done in spite of the fact that computer vision systems usually use imaging devices that are specifically designed for human vision. Our scenario consists of a static scene and a mobile camera moving through the scene. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within the range of +/- 10 degrees in all directions. This makes it possible for the camera to be held by a walking, non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured, but fairly cluttered, scenes taken by different walking cameramen. The potential application areas of the system include medicine, robotics and photogrammetry.

Relevance:

60.00%

Publisher:

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, the background noise being chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System package task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
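
The sketch below illustrates the structure of the performance function described above (a sum of squared residuals against a model built from elliptical Gaussian components). It is an illustrative re-implementation with a simplified parameterisation (axis lengths instead of eccentricity and amplitude), not the authors' code.

import numpy as np

def elliptical_gaussian(x, y, x0, y0, peak, major, minor, angle):
    """Single rotated elliptical Gaussian evaluated on a pixel grid."""
    ca, sa = np.cos(angle), np.sin(angle)
    u = (x - x0) * ca + (y - y0) * sa        # coordinate along the major axis
    v = -(x - x0) * sa + (y - y0) * ca       # coordinate along the minor axis
    return peak * np.exp(-0.5 * ((u / major) ** 2 + (v / minor) ** 2))

def model_image(params, shape):
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return sum(elliptical_gaussian(x, y, *p) for p in params)

def performance(params, observed):
    residual = observed - model_image(params, observed.shape)
    return np.sum(residual ** 2)             # quantity the cross-entropy search minimises

# Toy example: an "observed" image made of two components, scored against a trial model.
truth = [(20, 30, 1.0, 4.0, 2.0, 0.3), (40, 25, 0.6, 3.0, 3.0, 0.0)]
observed = model_image(truth, (64, 64))
trial = [(21, 29, 0.9, 4.0, 2.0, 0.3), (40, 25, 0.6, 3.0, 3.0, 0.0)]
print(performance(trial, observed))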

Relevance:

50.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.