32 results for "publicly verifiable"


Relevance:

10.00%

Publisher:

Abstract:

In the context of aerial imagery, one of the first steps toward coherent processing of the information contained in multiple images is geo-registration, which consists of assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects, and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterization of the camera sensor, high-resolution georeferenced images or terrain elevation models, which are usually not publicly available or are out of date. Building on the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that replaces the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, over the full 2D image domain. In the experiments, aerial images from synthetic video sequences are used to validate the proposed technique.
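
As a minimal illustration of the interpolation step described above (not the authors' implementation), the following sketch builds a dense elevation grid from scattered back-projected 3D points; the function and parameter names are assumptions, and a variational smoother could follow this step.

```python
# Minimal sketch (not the authors' implementation): build a dense surface
# elevation model by interpolating the scattered 3D points obtained from
# back-projected stereo correspondences. Names are illustrative assumptions.
import numpy as np
from scipy.interpolate import griddata

def build_surface_elevation_model(points_3d, grid_resolution=1.0):
    """points_3d: (N, 3) array of geo-referenced [x, y, z] points."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    # Regular grid covering the footprint of the scattered points.
    xi = np.arange(x.min(), x.max(), grid_resolution)
    yi = np.arange(y.min(), y.max(), grid_resolution)
    xx, yy = np.meshgrid(xi, yi)
    # Linear interpolation of the scattered elevations onto the grid.
    zz = griddata((x, y), z, (xx, yy), method="linear")
    return xx, yy, zz

# Toy example with random points.
pts = np.random.rand(500, 3) * [100.0, 100.0, 10.0]
xx, yy, dem = build_surface_elevation_model(pts, grid_resolution=2.0)
```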

Relevance:

10.00%

Publisher:

Abstract:

In recent decades, neuropsychological theories have tended to consider cognitive functions as the result of the workings of the brain as a whole, rather than of individual local areas of its cortex. Studies based on neuroimaging techniques have multiplied in recent years, promoting an exponential growth of the body of knowledge about the relations between cognitive functions and brain structures [1]. However, such rapid evolution makes it difficult to integrate these findings into verifiable theories and, even more so, to translate them into cognitive rehabilitation. The aim of this research work is to develop a cognitive process-modeling tool. The purpose of this system is, first, to represent multidimensional data: structural and functional connectivity, neuroimaging, data from lesion studies and data derived from clinical interventions [2][3]. This will make it possible to identify consolidated knowledge, hypotheses, experimental designs, new data from ongoing studies and emerging results from clinical interventions. Second, we aim to use Artificial Intelligence to assist in decision making, allowing progress towards evidence-based and personalized treatments in cognitive rehabilitation. This work presents the knowledge base design of the knowledge representation tool. It is composed of two taxonomies (structure and function) and a set of tags linking both taxonomies at different levels of structural and functional organization. The remainder of the abstract is organized as follows: Section 2 presents the web application used for gathering the information necessary to generate the knowledge base, Section 3 describes the knowledge base structure, and Section 4 presents the conclusions.
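
As a rough illustration of the knowledge base design described above (two taxonomies linked by tags), the following sketch shows one possible in-memory representation; all class names, fields and example nodes are assumptions, not the authors' actual schema.

```python
# Rough sketch (not the authors' schema): two taxonomies (structure and
# function) as simple trees, linked by tags at arbitrary levels of
# organization. All class names, fields and example nodes are assumptions.
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    name: str
    children: list = field(default_factory=list)

    def add_child(self, name):
        node = TaxonomyNode(name)
        self.children.append(node)
        return node

@dataclass
class Tag:
    label: str               # e.g. evidence level or source study
    structure: TaxonomyNode  # node in the structural taxonomy
    function: TaxonomyNode   # node in the functional taxonomy

# Structural taxonomy: brain -> frontal lobe.
brain = TaxonomyNode("brain")
frontal = brain.add_child("frontal lobe")
# Functional taxonomy: cognition -> executive functions.
cognition = TaxonomyNode("cognition")
executive = cognition.add_child("executive functions")
# A tag linking both taxonomies at a given level of organization.
links = [Tag("lesion study [2]", structure=frontal, function=executive)]
```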

Relevance:

10.00%

Publisher:

Abstract:

Reproducibility of published results is a cornerstone of scientific publishing and progress. Therefore, the scientific community has been encouraging authors and editors to publish their contributions in a verifiable and understandable way. Efforts such as the Reproducibility Initiative [1], or the Reproducibility Projects in the Biology [2] and Psychology [3] domains, have been defining standards and patterns to assess whether an experimental result is reproducible.

Relevance:

10.00%

Publisher:

Abstract:

Background: DCE@urLAB is a software application for the analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data. The tool incorporates a friendly graphical user interface (GUI) to interactively select and analyze a region of interest (ROI) within the image set, taking into account the tissue concentration of the contrast agent (CA) and its effect on pixel intensity. Results: Pixel-wise model-based quantitative parameters are estimated by fitting DCE-MRI data to several pharmacokinetic models using the Levenberg-Marquardt algorithm (LMA). DCE@urLAB also includes the semi-quantitative parametric and heuristic analysis approaches commonly used in practice. This software application has been programmed in the Interactive Data Language (IDL) and tested both with publicly available simulated data and with preclinical studies from tumor-bearing mouse brains. Conclusions: A user-friendly solution for applying pharmacokinetic and non-quantitative analysis of DCE-MRI data in preclinical studies has been implemented and tested. The proposed tool has been specially designed for easy selection of multi-pixel ROIs. A public release of DCE@urLAB, together with the open source code and sample datasets, is available at http://www.die.upm.es/im/archives/DCEurLAB/.
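
For illustration only (DCE@urLAB itself is written in IDL), the sketch below performs a pixel-wise Levenberg-Marquardt fit of the standard Tofts model, a common example of the kind of pharmacokinetic model mentioned above; the models actually implemented in the tool may differ, and the function names, starting values and synthetic arterial input function are assumptions.

```python
# Illustrative sketch only (not DCE@urLAB): pixel-wise Levenberg-Marquardt
# fit of the standard Tofts model. Names and synthetic data are assumptions.
import numpy as np
from scipy.optimize import least_squares

def tofts_model(params, t, cp):
    """C(t) = Ktrans * conv(Cp, exp(-(Ktrans/ve) * t))."""
    ktrans, ve = params
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

def fit_pixel(ct, t, cp, x0=(0.1, 0.2)):
    """ct: measured tissue concentration curve for one pixel."""
    residuals = lambda p: tofts_model(p, t, cp) - ct
    fit = least_squares(residuals, x0, method="lm")  # Levenberg-Marquardt
    return fit.x  # estimated (Ktrans, ve)

# Synthetic example: simulate one pixel and recover its parameters.
t = np.linspace(0, 5, 200)                      # time in minutes
cp = 5.0 * t * np.exp(-2.0 * t)                 # toy arterial input function
ct = tofts_model((0.25, 0.4), t, cp) + 0.01 * np.random.randn(len(t))
print(fit_pixel(ct, t, cp))
```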

Relevance:

10.00%

Publisher:

Abstract:

El siglo XX ha sido el siglo de los desplazamientos. Una ingente cantidad de personas fueron forzadas por motivos políticos, o se vieron obligadas por motivos económicos, a abandonar sus territorios de origen, generando un distanciamiento que en la mayoría de los casos resultaría irrecuperable. Las penurias vividas en Europa, las oportunidades que se abrían en países americanos, o la presión ejercida por los regímenes totalitarios, llevaron a un buen número de profesionales, artistas e intelectuales europeos a territorio americano. El presente trabajo se propone indagar sobre una de esas migraciones que se establecieron en Latinoamérica en la primera mitad del siglo XX: la de los arquitectos españoles que se vieron forzados al exilio. Para ello se busca poner en evidencia no sólo sus aportaciones sino también la influencia que ejerció en su obra la cultura de los países de adopción. Venezuela recibió gran parte del contingente de arquitectos españoles desplazados como consecuencia de la Guerra Civil. Tras México fue el país que mayor número acogió. La llegada de dichos exiliados coincidió con el momento en que la sociedad venezolana, de base agrícola y comercial, pasaría a evidenciar el impacto de la revolución petrolera. Así pues, dicha llegada supuso no sólo la dramática pérdida del mundo previo, implícita en todo exilio, sino el arribo a una sociedad en profundo proceso de cambio. En ambos casos se trataba de “mundos que se desvanecen”. Se propone la asunción de la obra de dichos profesionales como una arquitectura desplazada. Un desplazamiento que se produce en dos sentidos: por un lado, se trata de un desplazamiento físico, por otro, la palabra desplazada habla también de la condición secundaria que adquiere la arquitectura ante el drama vital y de supervivencia que afectó a los exiliados. Así pues, a un desplazamiento físico, verificable, se une un desplazamiento en cuanto al nivel de importancia y de atención asignado a la arquitectura. El trabajo comprende una introducción, cuatro capítulos, y un epílogo, a modo de conclusión. A lo largo de dichos capítulos se conjugan el enfoque individual en la obra de uno de estos arquitectos desterrados, Rafael Bergamín, y la visión “coral” de diversas trayectorias vitales que enfrentaron un destino común. Se elige la figura de Bergamín como eje de desarrollo debido a la presencia significativa que tuvo su obra tanto en España como en Venezuela, y por la caracterización de la misma como obra construida en colaboración. La introducción, “Exilios arquitectónicos”, muestra el sustrato estructural, la fundamentación y la metodología empleada, incluyendo “problematizaciones” sobre el exilio arquitectónico. El primer capítulo, “Memoria de partida”, da cuenta de la formación y actuación, durante la preguerra, de los arquitectos españoles que saldrían al exilio. Se introduce un esquema de base generacional y se revela un panorama para nada unívoco. El segundo capítulo, “Guerra y salida al exilio”, aborda la actuación de dichos arquitectos durante la contienda bélica así como la posterior dispersión general del exilio. El tercer capítulo, “Construir desde lo que se desvanece. Arquitectos del exilio español en Venezuela”, propone diversos presupuestos conceptuales en torno al tema del desplazamiento en la arquitectura, revisando la adscripción disciplinar y profesional de los arquitectos españoles exiliados en Venezuela. El cuarto capítulo, “El regreso”, versa sobre el recorrido final de estos arquitectos. 
Como marco general, se revisa su itinerario de regreso o, en muchos casos, la imposibilidad de retorno. Por último, se dispone la fuente de los diversos documentos de archivos y repositorios, así como el aparato bibliográfico y referencial, empleados en la investigación. Tres anexos se adjuntan al corpus del trabajo. El primero presenta documentos inéditos, hallados durante el pertinente proceso de investigación; el segundo, dibujos de Bergamín, básicamente de las primeras décadas del siglo XX: caricaturas, anuncios y trabajos de la Escuela; el tercero, un esbozo biográfico de los arquitectos del exilio español. ABSTRACT The 20th Century has been the century of displacements. An enormous number of people were forced to leave their homelands for political or economic reasons, which generated a gap that in most of the cases would be unrecoverable. The hardships that people had to endure in Europe, the opportunities that emerged in American countries, or the pressure exerted by totalitarian regimes drove a good number of European professionals, artists and intellectuals to American territory. This research study is intended to investigate one of those migrations that settled in Latin America during the first half of the 20th century: the one of the Spanish architects that were forced into exile. To achieve this, an attempt was made to expose not only their contributions but also the influence that the culture of the countries which welcomed them exerted in their work. Venezuela received a large portion of Spanish architects who were displaced as a consequence of the Civil War. After Mexico, it was the country that sheltered the greatest number of persons. The arrival of these exiled Spanish architects coincided with the moment in which the Venezuelan society – based on agriculture and commerce – would witness the impact of the revolution of the oil industry. Thus, their arrival supposed not only experiencing the dramatic loss of their previous world – implicit in the notion of the exile – but also settling in a society going through a profound change process. In both cases it was about “two worlds that were vanishing.” The assumption of these architects’ work is regarded as a displaced architecture. A displacement that takes place in two ways: on one hand, there was a physical displacement, and on the other hand, the word displaced also talks about the second place that architecture is given when confronted with the urgent drama of survival that affected the exiled community. Hence, a physical, verifiable displacement is combined with a displacement that has to do with the importance and the attention given to architecture. This research study encompasses an introduction, four chapters, and an epilogue as a conclusion. Throughout the chapters, the individual approach to the work of one of these exiled architects, Rafael Bergamín, runs in parallel to an overall view of various other architects’ career paths that faced a common destiny. The work of the architect Bergamín was chosen as the center of this research study due to the significant presence that his work had in Spain as well as in Venezuela, and because its main characteristic was that it was built in collaboration with other architects. The introduction, “Architectural exiles”, shows the structural contextualization, the explanatory thesis statement and the methodology used, including “problematizations” about the architectural exile. 
The first chapter, “Memory of departure”, contains the academic background and performance during the pre-war time of the Spanish architects that would go into exile. An outline based on different generations and revealing an unambiguous perspective is introduced. The second chapter, “War and departure into exile”, tackles the performance of the Spanish architects during the war, as well as the following general diaspora into exile. The third chapter, “Building from what vanishes. Architects of the Spanish exile in Venezuela”, proposes various conceptual assumptions concerning the topic of displacement in architecture according to the doctrine and professional affiliations of the Spanish architects exiled in Venezuela. The fourth chapter, “The return”, deals with the end of these architects’ careers. As a general framework, their itineraries to return, or in many cases, the impossibility of returning, are reviewed. Finally, the sources to the various documents of files and repositories, as well as the bibliographical references consulted for the research, are provided. Three annexes have been attached to this research study. The first annex contains unpublished documents found during the research process; the second includes Bergamín’s drawings, basically from the first decades of the 20th century, such as, caricatures, advertisements and assignments done when he was a university student; and the third annex presents a biographical outline of the architects of the Spanish exile.

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a theoretical model based primarily on transaction costs, for comparing the various tendering mechanisms used for transportation Public-Private Partnership (PPP) projects. In particular, the model contrasts negotiated procedures with the open procedure, as defined by the current European Union legislation on public tendering. The model includes both ex ante transaction costs (borne during the tendering stage) and ex post transaction costs (such as enforcement costs, re-negotiation costs, and costs arising from litigation between partners), explaining the trade-off between them. Generally speaking, it is assumed that the open procedure implies lower transaction costs ex ante, while the negotiated procedure reduces the probability of the appearance of new contingencies not foreseen in the contract, hence diminishing the expected value of transaction costs ex post. Therefore, the balance between ex ante and ex post transaction costs is the main criterion for deciding whether the open or negotiated procedure would be optimal. Notwithstanding, empirical evidence currently exists only on ex ante transaction costs in transportation infrastructure projects. This evidence has shown a relevant difference between the two procedures as far as ex ante costs are concerned, favouring the open procedure. The model developed in this paper also demonstrates that a larger degree of complexity in a contract does not unequivocally favour the use of a negotiated procedure. Only in those cases dealing with very innovative projects, where important dimensions of the quality of the asset or service are not verifiable, may we observe an advantage in favour of the negotiated procedure. The bottom line is that we find it difficult to justify the employment of negotiated procedures in most transportation PPP contracts, especially in the field of roads. Nevertheless, the field remains open for future empirical work and research on the levels of transaction costs borne ex post in PPP contracts, as well as on the probabilities of such costs appearing under any of the procurement procedures.
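
As a purely numerical illustration of the trade-off described above (not figures from the paper), the sketch below compares the expected total transaction cost of the two procedures: a lower ex ante cost for the open procedure versus a lower probability of costly ex post contingencies for the negotiated one. All values are hypothetical.

```python
# Hypothetical numbers, not figures from the paper: the procedure with the
# lower expected total transaction cost is preferred.
def expected_transaction_cost(ex_ante, p_contingency, ex_post_if_contingency):
    """Ex ante cost plus the expected value of ex post costs
    (renegotiation, enforcement, litigation)."""
    return ex_ante + p_contingency * ex_post_if_contingency

open_procedure = expected_transaction_cost(
    ex_ante=1.0, p_contingency=0.30, ex_post_if_contingency=10.0)
negotiated_procedure = expected_transaction_cost(
    ex_ante=3.0, p_contingency=0.15, ex_post_if_contingency=10.0)

best = "open" if open_procedure <= negotiated_procedure else "negotiated"
print(open_procedure, negotiated_procedure, best)
```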

Relevance:

10.00%

Publisher:

Abstract:

Video-based vehicle detection is the focus of increasing interest due to its potential towards collision avoidance. In particular, vehicle verification is especially challenging due to the enormous variability of vehicles in size, color, pose, etc. In this paper, a new approach based on supervised learning using Principal Component Analysis (PCA) is proposed that addresses the main limitations of existing methods. Namely, in contrast to classical approaches which train a single classifier regardless of the relative position of the candidate (thus ignoring valuable pose information), a region-dependent analysis is performed by considering four different areas. In addition, a study on the evolution of the classification performance according to the dimensionality of the principal subspace is carried out using PCA features within a SVM-based classification scheme. Indeed, the experiments performed on a publicly available database prove that PCA dimensionality requirements are region-dependent. Hence, in this work, the optimal configuration is adapted to each of them, rendering very good vehicle verification results.
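
The following sketch illustrates the general idea of region-dependent PCA plus SVM classification (not the authors' implementation): one PCA+SVM pipeline per candidate region, each with its own number of principal components. Region names, dimensionalities and the toy data are assumptions.

```python
# Minimal sketch (not the authors' implementation): one PCA + SVM classifier
# per candidate region, with a region-dependent number of principal
# components. Region names, dimensionalities and the toy data are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

REGIONS = ["far-left", "left", "right", "far-right"]  # four candidate areas
N_COMPONENTS = {"far-left": 40, "left": 25, "right": 25, "far-right": 40}

def train_region_classifiers(data):
    """data[region] = (X, y): flattened candidate patches and labels."""
    models = {}
    for region in REGIONS:
        X, y = data[region]
        models[region] = make_pipeline(
            PCA(n_components=N_COMPONENTS[region]),
            SVC(kernel="rbf")).fit(X, y)
    return models

# Toy example with random 64x64 grayscale patches.
rng = np.random.default_rng(0)
data = {r: (rng.normal(size=(200, 64 * 64)), rng.integers(0, 2, 200))
        for r in REGIONS}
models = train_region_classifiers(data)
print(models["left"].predict(rng.normal(size=(1, 64 * 64))))
```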

Relevance:

10.00%

Publisher:

Abstract:

Esta tesis estudia las similitudes y diferencias entre los flujos turbulentos de pared de tipo externo e interno, en régimen incompresible, y a números de Reynolds moderadamente altos. Para ello consideramos tanto simulaciones numéricas como experimentos de capas límites con gradiente de presiones nulo y de flujos de canal, ambos a números de Reynolds en el rango δ+ ~ 500-2000. Estos flujos de cortadura son objeto de numerosas investigaciones debido a la gran importancia que tienen tanto a nivel tecnológico como a nivel de física fundamental. No obstante, todavía existen muchos interrogantes sobre aspectos básicos tales como la universalidad de los perfiles medios y de fluctuación de las velocidades o de la presión, tanto en la zona cercana a la pared como en la zona logarítmica, el escalado y el efecto del número de Reynolds, o las diferencias entre los flujos internos y externos en la zona exterior. En este estudio hemos utilizado simulaciones numéricas ya existentes de canales y capas límites a números de Reynolds δ+ ~ 2000 y δ+ ~ 700, respectivamente. Para poder comparar ambos flujos a igual número de Reynolds hemos realizado una nueva simulación directa de capa límite en el rango δ+ ~ 1000-2000. Los resultados de la misma son presentados y analizados en detalle. Los datos sin postprocesar y las estadísticas ya postprocesadas están públicamente disponibles en nuestro sitio web [162]. El análisis de las estadísticas usando un único punto confirma la existencia de perfiles logarítmicos para las fluctuaciones de la velocidad transversal w'^2+ y de la presión p'^2+ en ambos tipos de flujos, pero no para la velocidad normal v'^2+ o la velocidad longitudinal u'^2+. Para aceptar o rechazar la existencia de un rango logarítmico en u'^2+ se requieren números de Reynolds más altos que los considerados en este trabajo. Una de las consecuencias más importantes de poseer tales perfiles es que el valor máximo de la intensidad, que se alcanza cerca de la pared, depende explícitamente del número de Reynolds. Esto ha sido confirmado tras analizar un gran número de datos experimentales y numéricos, corroborando que el máximo de u'^2+, p'^2+ y w'^2+ aumenta proporcionalmente con el log(δ+). Por otro lado, este máximo es más intenso en los flujos externos que en los internos. La máxima diferencia ocurre en torno a y/δ ~ 0.3-0.5, siendo esta altura prácticamente independiente del número de Reynolds considerado. Estas diferencias se originan como consecuencia del carácter intermitente de las capas límites, que es inexistente en los flujos internos. La estructura de las fluctuaciones de velocidad y de presión, junto con la de los esfuerzos de Reynolds, se ha investigado por medio de correlaciones espaciales tridimensionales considerando dos puntos de medida. Hemos obtenido que el tamaño de las mismas es generalmente mayor en canales que en capas límites, especialmente en el caso de la correlación longitudinal Cuu en la dirección del flujo. Para esta correlación se demuestra que las estructuras débilmente correladas presentan longitudes de hasta O(7δ) en el caso de capas límites, y de hasta O(18δ) en el caso de canales. Estas longitudes se obtienen respectivamente en la zona logarítmica y en la zona exterior. Las longitudes correspondientes en la dirección transversal son significativamente menores en ambos flujos, O(δ-2δ). La organización espacial de las correlaciones es compatible con la de una pareja de rollos casi paralelos con dimensiones que escalan en unidades exteriores. Esta organización se mantiene al menos hasta y ~ 0.6δ, altura a la cual las capas límites comienzan a organizarse en rollos transversales. Este comportamiento es sin embargo más débil en canales, pudiéndose observar parcialmente a partir de y ~ 0.8δ. Para estudiar si estas estructuras están onduladas a lo largo de la dirección transversal, hemos calculado las correlaciones condicionadas a eventos intensos de la velocidad transversal w'. Estas correlaciones revelan que la ondulación de la velocidad longitudinal aumenta conforme nos alejamos de la pared, sugiriendo que las estructuras están más alineadas en la zona cercana a la pared que en la zona lejana a ella. El porqué de esta ondulación se encuentra posiblemente en la configuración a lo largo de diagonales que presenta w'. Estas estructuras no sólo están onduladas, sino que también están inclinadas respecto a la pared con ángulos que dependen de la variable considerada, de la altura y del contorno de correlación seleccionado. Por encima de la zona tampón, e independientemente del número de Reynolds y del tipo de flujo, Cuu presenta una inclinación máxima de unos 10°, las correlaciones Cvv y Cpp son esencialmente verticales, y Cww está inclinada a unos 35°.

Summary: This thesis studies the similarities and differences between external and internal incompressible wall-bounded turbulent flows at moderately high Reynolds numbers. We consider numerical and experimental zero-pressure-gradient boundary layers and channels in the range δ+ ~ 500-2000. These shear flows are the subject of intensive research because of their technological importance and fundamental physical interest. However, there are still open questions regarding basic aspects such as the universality of the mean and fluctuating velocity and pressure profiles in the near-wall and logarithmic regions, their scaling and the effect of the Reynolds number, or the differences between internal and external flows in the outer layer, to name but a few. For this study, we made use of available direct numerical simulations of channels and boundary layers reaching δ+ ~ 2000 and δ+ ~ 700, respectively. To fill the gap in Reynolds number, a new boundary layer simulation in the range δ+ ~ 1000-2000 is presented and discussed. The original raw data and the post-processed statistics are publicly available on our website [162]. The analysis of the one-point statistics confirms the existence of logarithmic profiles for the spanwise w'^2+ and pressure p'^2+ fluctuations for both types of flows, but not for the wall-normal v'^2+ or the streamwise u'^2+ velocities. To accept or reject the existence of a logarithmic range in u'^2+ requires higher Reynolds numbers than the ones considered in this work. An important consequence of having such profiles is that the maximum value of the intensities, reached near the wall, depends on the Reynolds number. This was confirmed after surveying a wide number of experimental and numerical datasets, corroborating that the maximum of u'^2+, p'^2+, and w'^2+ increases proportionally to log(δ+). On the other hand, that maximum is more intense in external flows than in internal ones, differing the most around y/δ ~ 0.3-0.5, and is essentially independent of the Reynolds number. We argue that those differences originate as a consequence of the intermittent character of boundary layers, which is absent in internal flows. The structure of the velocity and pressure fluctuations, together with that of the Reynolds shear stress, was investigated using three-dimensional two-point spatial correlations. We find that the correlations extend over longer distances in channels than in boundary layers, especially in the case of the streamwise correlation Cuu in the flow direction. For weakly correlated structures, the maximum streamwise length of Cuu is O(7δ) for boundary layers and O(18δ) for channels, attained in the logarithmic and outer regions respectively. The corresponding lengths for the transverse velocities and for the pressure are shorter, O(δ-2δ), and of the same order for both flows. The spatial organization of the velocity correlations is shown to be consistent with a pair of quasi-streamwise rollers that scales in outer units. That organization is observed up to y ~ 0.6δ, beyond which boundary layers start to organize into spanwise rollers. This effect is weaker in channels, where it appears at y ~ 0.8δ. We present correlations conditioned on intense events of the spanwise velocity, w', to study whether these structures meander along the spanwise direction. The results indicate that the streamwise velocity streaks increase their meandering proportionally to the distance from the wall, suggesting that the structures are more aligned close to the wall than far from it. The reason behind this meandering probably lies in the characteristic organization of w' along diagonals. These structures not only meander along the spanwise direction, but they are also inclined to the wall at angles that depend on the distance from the wall, on the variable being considered, and on the correlation level used to define them. Above the buffer layer, and independently of the Reynolds number and type of flow, the maximum inclination of Cuu is about 10°, Cvv and Cpp are roughly vertical, and Cww is inclined by about 35°.
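
As an illustration of the kind of two-point statistics discussed above (not the original post-processing code), the sketch below computes a normalized streamwise two-point correlation C_uu at a fixed wall distance via FFT, assuming a periodic streamwise direction as in a channel; the array shapes are assumptions.

```python
# Illustrative sketch (not the original post-processing code): normalized
# streamwise two-point correlation C_uu(rx) of u' at a fixed wall distance,
# computed via FFT assuming a periodic streamwise direction, as in a channel.
import numpy as np

def streamwise_correlation(u, y_index):
    """u: fluctuation field of shape (nx, ny, nz); returns C_uu(rx)."""
    plane = u[:, y_index, :]                    # wall-parallel plane
    plane = plane - plane.mean()
    spec = np.fft.rfft(plane, axis=0)           # transform along x
    corr = np.fft.irfft(np.abs(spec) ** 2, n=plane.shape[0], axis=0)
    corr = corr.mean(axis=1)                    # average over the span
    return corr / corr[0]                       # normalize so C_uu(0) = 1

# Toy field: random fluctuations on a small grid.
u = np.random.randn(256, 64, 128)
print(streamwise_correlation(u, y_index=32)[:5])
```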

Relevance:

10.00%

Publisher:

Abstract:

Este proyecto está realizado desde el punto de vista del director de estrategia de la compañía, y tomando como punto de partida la situación de la marca a fecha de la adquisición por parte del grupo Alhokair, en enero de 2014. El objetivo es presentar al nuevo propietario de Blanco la estrategia a seguir, para no perder la esencia de la marca y a su vez generar valor, tanto económico como corporativo. Para su desarrollo es necesario disponer de información del sector veraz y contrastable. Por ello el estudio se realizará en base a la documentación obtenida de entidades fiables (cifras de negocio, situación de los mercados correspondientes al sector de la moda, entre otros datos mostrados), como son ACOTEX (Asociación Empresarial del Comercio Textil y Complementos, representativa de los empresarios y autónomos del sector y fundada en 1977), el ICEX (entidad pública empresarial de ámbito nacional encargada de promover la internacionalización de las empresas españolas), el INE (Instituto Nacional de Estadística) y el BORME (Boletín Oficial del Registro Mercantil). También en base a publicaciones web del sector como Modaes.es, y a la prensa económica: Expansión, Cinco Días y El Economista. Además de estas publicaciones y organismos, existe una gran cantidad de especialistas del sector que escriben artículos en prensa y en blogs, y que realizan charlas en importantes escuelas de moda y eventos, que han sido de gran ayuda para poder posicionarme con 'comodidad' en el rol que he asumido. Con toda esta información, cuantitativa y cualitativa, basada en cifras, datos, estadísticas, conocimientos, experiencia y opiniones, junto con los conocimientos adquiridos en el Máster de Consultoría de Gestión de Empresas, principalmente en el módulo de estrategia, pero apoyándome e hilando con el resto de módulos, he realizado este análisis y plan de acción con el fin de hacer renacer a Blanco. ---ABSTRACT--- This project is written from the point of view of the company's director of strategy, taking as its starting point the status of the brand as of its acquisition by the Alhokair group in January 2014. Its aim is to present to Blanco's new owner the strategy to follow in order to preserve the essence of the brand while creating both economic and corporate value. For its development it is necessary to have accurate and verifiable information about the sector. Therefore, the study will be based on documentation obtained from reliable entities. With all of this quantitative and qualitative information, based on facts, figures, statistics, knowledge, experience and opinions, along with the knowledge acquired in the Máster de Consultoría de Gestión de Empresas, mainly in the strategy module but drawing on the rest of the modules as well, I have prepared this analysis and action plan with the goal of reviving Blanco.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Clinical Trials (CTs) are essential for bridging the gap between experimental research on new drugs and their clinical application. Just like CTs for traditional drugs and biologics have helped accelerate the translation of biomedical findings into medical practice, CTs for nanodrugs and nanodevices could advance novel nanomaterials as agents for diagnosis and therapy. Although there is publicly available information about nanomedicine-related CTs, the online archiving of this information is carried out without adhering to criteria that discriminate between studies involving nanomaterials or nanotechnology-based processes (nano), and CTs that do not involve nanotechnology (non-nano). Finding out whether nanodrugs and nanodevices were involved in a study from CT summaries alone is a challenging task. At the time of writing, CTs archived in the well-known online registry ClinicalTrials.gov are not easily told apart as to whether they are nano or non-nano CTs-even when performed by domain experts, due to the lack of both a common definition for nanotechnology and of standards for reporting nanomedical experiments and results. METHODS: We propose a supervised learning approach for classifying CT summaries from ClinicalTrials.gov according to whether they fall into the nano or the non-nano categories. Our method involves several stages: i) extraction and manual annotation of CTs as nano vs. non-nano, ii) pre-processing and automatic classification, and iii) performance evaluation using several state-of-the-art classifiers under different transformations of the original dataset. RESULTS AND CONCLUSIONS: The performance of the best automated classifier closely matches that of experts (AUC over 0.95), suggesting that it is feasible to automatically detect the presence of nanotechnology products in CT summaries with a high degree of accuracy. This can significantly speed up the process of finding whether reports on ClinicalTrials.gov might be relevant to a particular nanoparticle or nanodevice, which is essential to discover any precedents for nanotoxicity events or advantages for targeted drug therapy.
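
A minimal sketch in the spirit of the pipeline described above: TF-IDF features extracted from CT summaries, a standard linear classifier, and evaluation with ROC AUC. The classifiers and dataset transformations actually compared in the paper may differ, and the texts and labels below are toy examples.

```python
# Hedged sketch, not the paper's exact setup: TF-IDF features from CT
# summaries, a linear classifier, ROC AUC evaluation. Toy texts and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

summaries = [
    "liposomal doxorubicin nanoparticle formulation for solid tumors",
    "standard physical therapy program for lower back pain",
    "iron oxide nanoparticles as MRI contrast agents",
    "behavioral intervention to reduce smoking relapse",
] * 25                                  # toy corpus
labels = [1, 0, 1, 0] * 25              # 1 = nano, 0 = non-nano

X_train, X_test, y_train, y_test = train_test_split(
    summaries, labels, test_size=0.3, random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```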

Relevance:

10.00%

Publisher:

Abstract:

The aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases. Given a subset of registered atlases into the target image for a particular Region of Interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model, using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
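
For orientation only, the sketch below implements a much simpler label-fusion baseline (similarity-weighted voting of registered atlases); the paper's method additionally builds a statistical model of appearance and shape and minimizes the associated energy with a graph-cut technique. Shapes and the weighting scheme are assumptions.

```python
# Simplified baseline only: similarity-weighted voting of registered atlases.
# Shapes and the Gaussian weighting scheme are assumptions.
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_images, target, sigma=0.1):
    """atlas_labels, atlas_images: (n_atlases, X, Y, Z); target: (X, Y, Z)."""
    # Per-voxel weight: Gaussian of the intensity difference to the target.
    diff = atlas_images - target[None]
    weights = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
    # Weighted vote for a binary label.
    fg = (weights * (atlas_labels == 1)).sum(axis=0)
    bg = (weights * (atlas_labels == 0)).sum(axis=0)
    return (fg > bg).astype(np.uint8)

# Toy example with five random "atlases".
rng = np.random.default_rng(1)
imgs = rng.random((5, 32, 32, 32))
labs = (imgs > 0.5).astype(np.uint8)
target = rng.random((32, 32, 32))
print(weighted_label_fusion(labs, imgs, target).mean())
```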

Relevance:

10.00%

Publisher:

Abstract:

The engineering of solar power applications, such as photovoltaic energy (PV) or thermal solar energy requires the knowledge of the solar resource available for the solar energy system. This solar resource is generally obtained from datasets, and is either measured by ground-stations, through the use of pyranometers, or by satellites. The solar irradiation data are generally not free, and their cost can be high, in particular if high temporal resolution is required, such as hourly data. In this work, we present an alternative method to provide free hourly global solar tilted irradiation data for the whole European territory through a web platform. The method that we have developed generates solar irradiation data from a combination of clear-sky simulations and weather conditions data. The results are publicly available for free through Soweda, a Web interface. To our knowledge, this is the first time that hourly solar irradiation data are made available online, in real-time, and for free, to the public. The accuracy of these data is not suitable for applications that require high data accuracy, but can be very useful for other applications that only require a rough estimate of solar irradiation.
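
As a rough illustration of combining a clear-sky simulation with weather-condition data, the sketch below attenuates a clear-sky irradiance value with a cloud-cover factor using the Kasten and Czeplak (1980) approximation; this is a textbook formula chosen for illustration and is not necessarily the model behind Soweda.

```python
# Illustration only: attenuate a clear-sky irradiance with a cloud-cover
# factor (Kasten & Czeplak 1980 form); not necessarily the Soweda model.
def cloudy_sky_irradiance(ghi_clear_sky, cloud_cover_octas):
    """ghi_clear_sky: clear-sky global horizontal irradiance [W/m^2];
    cloud_cover_octas: observed cloud cover, 0 (clear) to 8 (overcast)."""
    n = cloud_cover_octas / 8.0
    attenuation = 1.0 - 0.75 * n ** 3.4
    return ghi_clear_sky * attenuation

# Hourly example: an 800 W/m^2 clear-sky value under 6 octas of cloud.
print(cloudy_sky_irradiance(800.0, 6))
```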

Relevance:

10.00%

Publisher:

Abstract:

A gene expression atlas is an essential resource to quantify and understand the multiscale processes of embryogenesis in time and space. The automated reconstruction of a prototypic 4D atlas for vertebrate early embryos, using multicolor fluorescence in situ hybridization with nuclear counterstain, requires dedicated computational strategies. To this goal, we designed an original methodological framework implemented in a software tool called Match-IT. With only minimal human supervision, our system is able to gather gene expression patterns observed in different analyzed embryos with phenotypic variability and map them onto a series of common 3D templates over time, creating a 4D atlas. This framework was used to construct an atlas composed of 6 gene expression templates from a cohort of zebrafish early embryos spanning 6 developmental stages from 4 to 6.3 hpf (hours post fertilization). They included 53 specimens, 181,415 detected cell nuclei and the segmentation of 98 gene expression patterns observed in 3D for 9 different genes. In addition, an interactive visualization software, Atlas-IT, was developed to inspect, supervise and analyze the atlas. Match-IT and Atlas-IT, including user manuals, representative datasets and video tutorials, are publicly and freely available online. We also propose computational methods and tools for the quantitative assessment of the gene expression templates at the cellular scale, with the identification, visualization and analysis of coexpression patterns, synexpression groups and their dynamics through developmental stages.

Relevance:

10.00%

Publisher:

Abstract:

Low-cost RGB-D cameras such as Microsoft's Kinect or Asus's Xtion Pro are completely changing the computer vision world, as they are being successfully used in several applications and research areas. Depth data are particularly attractive and suitable for applications based on moving-object detection through foreground/background segmentation; however, the RGB-D applications proposed in the literature generally employ state-of-the-art foreground/background segmentation techniques based on depth information alone, without taking the color information into account. The novel approach that we propose is based on a combination of classifiers that improves background subtraction accuracy with respect to state-of-the-art algorithms by jointly considering color and depth data. In particular, the combination of classifiers is based on a weighted average that adaptively modifies the support of each classifier in the ensemble by considering the foreground detections in previous frames and the depth and color edges. In this way, it is possible to reduce false detections due to critical issues that cannot be tackled by the individual classifiers, such as shadows and illumination changes, color and depth camouflage, moved background objects and noisy depth measurements. Moreover, we provide, to the best of the authors' knowledge, the first publicly available RGB-D benchmark dataset with hand-labeled ground truth for several challenging scenarios to test background/foreground segmentation algorithms.
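
A minimal sketch of the weighted combination idea described above (not the authors' algorithm): the responses of a color-based and a depth-based background subtractor are blended with per-pixel weights, which could be adapted from previous detections and from color and depth edges. OpenCV's MOG2 subtractor stands in here for the individual classifiers.

```python
# Minimal sketch (not the authors' algorithm): blend color-based and
# depth-based background-subtraction responses with per-pixel weights.
import numpy as np
import cv2

color_bgs = cv2.createBackgroundSubtractorMOG2()
depth_bgs = cv2.createBackgroundSubtractorMOG2()

def combined_foreground(color_frame, depth_frame, w_color, threshold=0.5):
    """w_color: per-pixel weight map in [0, 1] for the color classifier."""
    fg_color = color_bgs.apply(color_frame).astype(np.float32) / 255.0
    fg_depth = depth_bgs.apply(depth_frame).astype(np.float32) / 255.0
    combined = w_color * fg_color + (1.0 - w_color) * fg_depth
    return (combined > threshold).astype(np.uint8)

# Toy frames: a random color image and an 8-bit depth map of the same size.
color = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
depth = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
weights = np.full((240, 320), 0.6, dtype=np.float32)  # trust color slightly more
mask = combined_foreground(color, depth, weights)
```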

Relevance:

10.00%

Publisher:

Abstract:

The mobile apps market is a tremendous success, with millions of apps downloaded and used every day by users all around the world. For app developers, having their apps published on one of the major app stores (e.g. the Google Play market) is just the beginning of the app lifecycle. Indeed, in order to compete successfully with the other apps in the market, an app has to be updated frequently by adding new attractive features and by fixing existing bugs. Clearly, any developer interested in increasing the success of her app should try to implement features desired by the app's users and to fix bugs affecting the user experience of many of them. A precious source of users' opinions and wishes is the reviews left by users on the store from which they downloaded the app. However, to exploit such information the app's developer would have to manually read each user review and verify whether it contains useful information (e.g. suggestions for new features). This is not doable if the app receives hundreds of reviews per day, as happens for the very popular apps on the market. In this work, our aim is to support mobile app developers by proposing a novel approach that exploits data mining, natural language processing, machine learning, and clustering techniques in order to classify user reviews on the basis of the information they contain (e.g. useless, suggestion for new features, bug report). The approach has been empirically evaluated and made available in a web-based tool open to all app developers. The results showed that the developed tool: (i) is able to correctly categorise user reviews on the basis of their content (e.g. isolating those reporting bugs) with 78% accuracy, (ii) produces clusters of reviews (e.g. it groups together reviews indicating exactly the same bug to be fixed) that are meaningful from a developer's point of view, and (iii) is considered useful by a software company working in the mobile app development market.
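
A minimal sketch in the spirit of the approach described above: user reviews are classified into categories with TF-IDF features and a standard classifier, and the reviews labeled as bug reports are then clustered so that reviews about the same bug end up together. The actual techniques and categories used by the tool may differ; the reviews below are toy examples.

```python
# Hedged sketch, not the tool itself: classify reviews into categories, then
# cluster the bug reports so duplicates of the same bug group together.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = ["app crashes when I open the camera",
           "please add a dark mode",
           "crashes on startup every time",
           "love it",
           "would be great to export to PDF",
           "camera screen freezes and closes the app"] * 10
categories = ["bug", "feature", "bug", "other", "feature", "bug"] * 10

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(reviews, categories)

# Classify two unseen reviews.
new_reviews = ["the app closes itself when the camera opens",
               "add the option to change the font size"]
print(list(zip(new_reviews, classifier.predict(new_reviews))))

# Cluster the known bug reports into groups of similar reports.
bug_reviews = [r for r, c in zip(reviews, categories) if c == "bug"]
tfidf = TfidfVectorizer().fit_transform(bug_reviews)
print(KMeans(n_clusters=2, n_init=10).fit_predict(tfidf)[:6])
```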