30 results for 671304 Data, image and text equipment
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
High-resolution side scan sonar has been used for mapping the seafloor of the Ría de Pontevedra. Four backscatter patterns have been mapped within the Ría: (1) a pattern of isolated reflections, correlated with granite and metamorphic outcrops and located close to the coastal prominences and the Ons and Onza Islands; (2) a pattern of strong reflectivity, usually located around the basement outcrops and near the coastline, produced by coarse-grained sediment; (3) a pattern of weak backscatter, correlated with fine sand to mud and comprising large areas in the central and deep part of the Ría, where bottom currents are weak; it is generally featureless, except where pockmarks and anthropogenic features are present; and (4) patches of strong and weak backscatter, located at the boundary between coarse- and fine-grained sediments and caused by strong bottom currents. The presence of megaripples associated with both the strong-reflectivity pattern and the sedimentary patches indicates bedload transport of sediment during high-energy conditions (storms). Side scan sonar records, together with supplementary bathymetry, bottom samples and hydrodynamic data, reveal that the distribution of seafloor sediment is strongly related to oceanographic processes and to the particular morphology and topography of the Ría.
Abstract:
The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
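To make the IIA property concrete, here is a minimal worked equation using standard definitions (not reproduced from the paper): under a multinomial logit model the log ratio of any two shares depends only on those two alternatives, and that log ratio is exactly the additive log-ratio transform of compositional data analysis.

```latex
% For shares s_1,...,s_K summing to one, the multinomial logit gives
\[
  s_k \;=\; \frac{e^{v_k}}{\sum_{j=1}^{K} e^{v_j}}
  \qquad\Longrightarrow\qquad
  \ln\frac{s_k}{s_l} \;=\; v_k - v_l ,
\]
% so the (log) ratio of any two shares does not involve the remaining
% alternatives -- the IIA property. The additive log-ratio transform of
% compositional data analysis is this same quantity with alternative K
% taken as the reference:
\[
  \operatorname{alr}_k(s) \;=\; \ln\frac{s_k}{s_K}, \qquad k = 1,\dots,K-1 .
\]
```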
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
The Powell Basin is a small oceanic basin located at the NE end of the Antarctic Peninsula that developed during the Early Miocene and is mostly surrounded by the continental crusts of the South Orkney Microcontinent, the South Scotia Ridge and the Antarctic Peninsula margins. Gravity data from the SCAN 97 cruise obtained with the R/V Hespérides, together with data from the Global Gravity Grid and Sea Floor Topography (GGSFT) database (Sandwell and Smith, 1997), are used to determine the 3D geometry of the crustal-mantle interface (CMI) by numerical inversion methods. The water-layer contribution and sedimentary effects were removed from the Free Air anomaly to obtain the total anomaly. Sedimentary effects were derived from the analysis of existing and new SCAN 97 multichannel seismic (MCS) profiles. The regional anomaly was obtained after spectral analysis and filtering. The smooth 3D geometry of the CMI obtained after inversion of the regional anomaly shows an increase in the thickness of the crust towards the continental margins and a NW-SE oriented axis of symmetry coinciding with the position of an older oceanic spreading axis. The interface shows a moderate uplift towards the western part and two main uplifts in the northern and eastern sectors.
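As an illustration of the reduction steps described (water-layer and sediment corrections, then regional/residual separation), here is a minimal numpy sketch; the infinite-slab approximation, density contrasts, and grid names are assumptions for illustration, not the authors' actual processing chain.

```python
# Hypothetical sketch of the gravity-reduction steps; not the authors' code.
import numpy as np

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1e5              # m/s^2 -> mGal

def slab_effect(thickness_m, delta_rho):
    """Infinite-slab (Bouguer) approximation, in mGal, of the gravity effect
    of a layer with the given thickness (m) and density contrast (kg/m^3)."""
    return 2.0 * np.pi * G * delta_rho * thickness_m * MGAL

def regional_anomaly(total_anomaly, cutoff_frac=0.1):
    """Crude regional/residual separation: keep only the lowest spatial
    frequencies of the gridded total anomaly (2-D FFT low-pass)."""
    F = np.fft.fft2(total_anomaly)
    ny, nx = total_anomaly.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    mask = np.sqrt(kx**2 + ky**2) <= cutoff_frac * 0.5
    return np.real(np.fft.ifft2(F * mask))

# free_air, water_depth, sediment_thickness would be co-registered grids, e.g.:
# total = free_air + slab_effect(water_depth, 2670.0 - 1030.0) \
#                  + slab_effect(sediment_thickness, 2670.0 - 2300.0)
```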
Abstract:
This article describes a method for determining the polydispersity index Ip2 = Mz/Mw of the molecular weight distribution (MWD) of linear polymeric materials from linear viscoelastic data. The method uses the Mellin transform of the relaxation modulus of a simple molecular rheological model. One of the main features of this technique is that it enables interesting MWD information to be obtained directly from dynamic shear experiments. It is not necessary to compute the relaxation spectrum, so the ill-posed problem is avoided. Furthermore, a specific shape of the continuous MWD does not have to be assumed in order to obtain the polydispersity index. The technique has been developed to deal with entangled linear polymers, whatever the form of the MWD. The rheological information required to obtain the polydispersity index is the storage G′(ω) and loss G″(ω) moduli, extending from the terminal zone to the plateau region. The method shows good agreement between the proposed theoretical approach and the experimental polydispersity indices of several linear polymers over a wide range of average molecular weights and polydispersity indices. It is also applicable to binary blends.
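For readers unfamiliar with the transform, the following block states the standard Mellin transform that such methods apply to the relaxation modulus; the model-specific kernel relating it to the MWD moments is not reproduced here.

```latex
% Standard Mellin transform of the relaxation modulus G(t):
\[
  \mathcal{M}\{G\}(s) \;=\; \int_{0}^{\infty} G(t)\, t^{\,s-1}\, \mathrm{d}t .
\]
% For a relaxation modulus predicted by a molecular model, such integral
% moments can be expressed in terms of moments of the MWD, so a ratio of
% MWD moments such as
\[
  I_{p2} \;=\; \frac{M_z}{M_w}
\]
% follows directly, without inverting for the full relaxation spectrum.
```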
Abstract:
Dissolved organic matter (DOM) is a complex mixture of organic compounds, ubiquitous in marine and freshwater systems. Fluorescence spectroscopy, by means of Excitation-Emission Matrices (EEM), has become an indispensable tool to study DOM sources, transport and fate in aquatic ecosystems. However, the statistical treatment of large and heterogeneous EEM data sets still represents an important challenge for biogeochemists. Recently, Self-Organising Maps (SOM) have been proposed as a tool to explore patterns in large EEM data sets. SOM is a pattern recognition method that clusters input EEMs and reduces their dimensionality without relying on any assumption about the data structure. In this paper, we show how SOM, coupled with a correlation analysis of the component planes, can be used both to explore patterns among samples and to identify individual fluorescence components. We analysed a large and heterogeneous EEM data set, including samples from a river catchment collected under a range of hydrological conditions, along a 60-km downstream gradient and under the influence of different degrees of anthropogenic impact. According to our results, chemical industry effluents appeared to have unique and distinctive spectral characteristics. On the other hand, river samples collected under flash flood conditions showed homogeneous EEM shapes. The correlation analysis of the component planes suggested the presence of four fluorescence components, consistent with DOM components previously described in the literature. A remarkable strength of this methodology is that outlier samples appear naturally integrated in the analysis. We conclude that SOM coupled with a correlation analysis procedure is a promising tool for studying large and heterogeneous EEM data sets.
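A minimal sketch of the SOM-plus-component-plane-correlation idea, using the third-party minisom package; the map size, training length, and file names below are assumptions rather than the authors' settings.

```python
# Sketch only: cluster unfolded EEMs on a SOM, then correlate component planes.
import numpy as np
from minisom import MiniSom

# eems: (n_samples, n_ex * n_em) matrix of unfolded excitation-emission matrices
eems = np.load("eems_unfolded.npy")          # hypothetical pre-processed file

som = MiniSom(10, 10, eems.shape[1], sigma=1.5, learning_rate=0.5)
som.random_weights_init(eems)
som.train_random(eems, 5000)                 # organise samples on a 10x10 map

# Each wavelength pair has a 10x10 "component plane" in the codebook;
# strongly correlated planes point to a common underlying fluorophore.
planes = som.get_weights().reshape(100, -1)  # (map units, wavelength pairs)
corr = np.corrcoef(planes.T)                 # wavelength-pair correlations

# Map a sample to its best-matching unit to explore patterns among samples.
bmu = som.winner(eems[0])
```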
Abstract:
Research project that carries out a classification study of the courses enrolled in the Business Administration and Management degree at the UOC in relation to their outcomes. Different methods and models for understanding the environment in which the study is carried out are proposed.
Abstract:
Marketing scholars have suggested a need for more empirical research on consumer response to malls, in order to better understand the variables that explain consumer behavior. The CHAID (Chi-squared Automatic Interaction Detection) segmentation methodology was used to identify the profiles of consumers with regard to their activities at malls, on the basis of socio-demographic variables and behavioral variables (how and with whom they go to malls). A sample of 790 subjects answered an online questionnaire, and a CHAID analysis of the results was used to identify the consumer profiles. Among the variables analyzed, the transport used to go shopping and the frequency of visits to the malls are the main predictors of behavior at malls. The results provide guidelines for the development of effective strategies to attract consumers to malls and retain them there.
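To illustrate the core of CHAID, here is a hedged Python sketch of its basic step: choosing the predictor whose categories are most significantly associated with the response under a chi-squared test. The column names are hypothetical, and full CHAID additionally merges similar categories and splits recursively.

```python
# Sketch of one CHAID-style split; not a full CHAID implementation.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("mall_survey.csv")          # hypothetical survey file
predictors = ["transport", "visit_frequency", "gender", "age_group"]
response = "mall_activity"

def best_split(df, predictors, response):
    """Return (predictor, p_value) for the most significant contingency table."""
    best = (None, 1.0)
    for col in predictors:
        table = pd.crosstab(df[col], df[response])
        _, p, _, _ = chi2_contingency(table)  # chi2, p, dof, expected
        if p < best[1]:
            best = (col, p)
    return best

print(best_split(df, predictors, response))
```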
Abstract:
Transmission electron microscopy is a proven technique in the field of cell biology and a very useful tool in biomedical research. Innovation and improvements in equipment, together with the introduction of new technology, have allowed us to improve our knowledge of biological tissues, to visualize structures better and both to identify and to locate molecules. Of all the types of microscopy exploited to date, electron microscopy is the one with the most advantageous resolution limit and therefore it is a very efficient technique for deciphering the cell architecture and relating it to function. This chapter aims to provide an overview of the most important techniques that we can apply to a biological sample, tissue or cells, to observe it with an electron microscope, from the most conventional to the latest generation. Processes and concepts are defined, and the advantages and disadvantages of each technique are assessed, along with the image and information that we can obtain by using each one of them.
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and of a number of previously available data sets, and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors-in-variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
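The measurement-error correction invoked here rests on the classical attenuation result, sketched below in standard notation (the paper's extended model is not reproduced):

```latex
% Classical errors-in-variables attenuation: schooling is observed with
% noise, x = x* + e, with e uncorrelated with x*. Then
\[
  \operatorname{plim}\,\hat{\beta}_{OLS} \;=\; \beta\,\lambda ,
  \qquad
  \lambda \;=\; \frac{\operatorname{Var}(x^{*})}
                     {\operatorname{Var}(x^{*}) + \operatorname{Var}(e)} ,
\]
% so an estimate of the reliability ratio \lambda (here tied to the measured
% information content of each data set) yields the corrected coefficient
% \beta = \hat{\beta}_{OLS} / \lambda.
```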
Abstract:
Study carried out during a stay at the Imperial College of London, Great Britain, between September and December 2006. Having a good, well-defined geometry is essential to solve many computational models efficiently and to obtain results comparable to the real problem. Medical image reconstruction makes it possible to transform images obtained with acquisition techniques into geometries in numerical data formats. This text describes, in qualitative terms, the different stages of the medical image reconstruction process, up to finally obtaining a triangular mesh that can be processed by computational algorithms. The process starts at the MRI scanner of The Royal Brompton Hospital in London, from which images are obtained and then processed with the CONGEN10 and SURFGEN tools in a MATLAB environment. These tools were developed by researchers of the Bioflow group of the aeronautical engineering department of the Imperial College of London. The last section of the text discusses an example of an artery that enters the process as a medical image and comes out as a triangular mesh that can be processed with any software or algorithm that works with meshes.
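Since CONGEN10 and SURFGEN themselves are not reproduced here, the following is a generic Python sketch of the same image-to-mesh step using marching cubes from scikit-image; the file name and iso-level are hypothetical.

```python
# Generic illustration of extracting a triangular mesh from a segmented
# image volume; not the CONGEN10/SURFGEN MATLAB pipeline.
import numpy as np
from skimage import measure

volume = np.load("mri_segmented.npy")     # hypothetical 3-D segmented MRI volume

# Extract the isosurface of the segmented artery as a triangular mesh.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```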
Abstract:
We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature, here applied to a bag of visual words representation for each image, and subsequently training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual words representation, in all cases using the authors' own data sets and testing protocols. We also investigate the gain in adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
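A rough sketch of the topic-then-classify pipeline: scikit-learn has no pLSA, but NMF with a KL-divergence loss is a well-known stand-in (equivalent up to normalization); the array names and hyperparameters below are assumptions.

```python
# Bag-of-visual-words -> latent topics -> classifier (sketch, not the paper's code).
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import KNeighborsClassifier

# bow: (n_images, vocab_size) visual-word counts; y: scene labels
bow = np.load("bow_counts.npy")
y = np.load("scene_labels.npy")

plsa = NMF(n_components=25, beta_loss="kullback-leibler",
           solver="mu", max_iter=500)        # KL-NMF as a pLSA stand-in
topics = plsa.fit_transform(bow)
topics /= topics.sum(axis=1, keepdims=True) + 1e-12   # topic distributions

clf = KNeighborsClassifier(n_neighbors=10).fit(topics, y)

# Classify a new image from its bag-of-visual-words histogram:
new_topics = plsa.transform(bow[:1])
print(clf.predict(new_topics / (new_topics.sum() + 1e-12)))
```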
Abstract:
Photo-mosaicing techniques have become popular for seafloor mapping in various marine science applications. However, the common methods cannot accurately map regions with high relief and topographical variations. Ortho-mosaicing, borrowed from photogrammetry, is an alternative technique that takes the 3-D shape of the terrain into account. A serious bottleneck is the volume of elevation information that needs to be estimated from the video data, fused, and processed for the generation of a composite ortho-photo that covers a relatively large seafloor area. We present a framework that combines the advantages of dense depth-map and 3-D feature estimation techniques based on visual motion cues. The main goal is to identify and reconstruct certain key terrain feature points that adequately represent the surface with minimal complexity, in the form of piecewise planar patches. The proposed implementation utilizes local depth maps for feature selection, while tracking over several views enables 3-D reconstruction by bundle adjustment. Experimental results with synthetic and real data validate the effectiveness of the proposed approach.
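A minimal sketch of the track-then-reconstruct idea using OpenCV; the frames, projection matrices, and parameters are hypothetical, and a full pipeline would refine structure and motion jointly via bundle adjustment as the abstract describes.

```python
# Sketch: select key feature points, track them to the next view, triangulate.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Select key terrain feature points and track them by Lucas-Kanade optical flow.
pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01,
                               minDistance=7)
pts2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None)
ok = status.ravel() == 1
pts1 = pts1[ok].reshape(-1, 2).T.astype(np.float64)     # 2 x N
pts2 = pts2[ok].reshape(-1, 2).T.astype(np.float64)

# Triangulate the tracked features into sparse 3-D terrain points
# (camera projection matrices assumed known here).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.load("P2.npy")                                  # hypothetical second camera
X = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X[:3] / X[3]).T                                    # homogeneous -> 3-D points
```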