793 results for WHIM DESCRIPTORS


Relevance: 10.00%

Abstract:

In the present paper, we describe new robust methods of estimating cell shape and orientation in 3D from sections. The descriptors of 3D cell shape and orientation are based on volume tensors, which are used to construct an ellipsoid, the Miles ellipsoid, approximating the average cell shape and orientation in 3D. The estimators of volume tensors are based on observations in several optical planes through sampled cells; this type of geometric sampling design is known as the optical rotator. The statistical behaviour of the estimator of the Miles ellipsoid is studied under a flexible model for 3D cell shape and orientation. In a simulation study, the lengths of the axes of the Miles ellipsoid could be estimated with CVs of about 2% when 100 cells were sampled. Finally, we illustrate the use of the developed methods in an example involving neurons in the medial prefrontal cortex of the rat.
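The tensor-to-ellipsoid step can be sketched in a few lines. The sketch below is an illustrative simplification that assumes each cell is represented by a cloud of sampled points: it averages second-order moment tensors over the cells and reads axes and orientation off the eigen-decomposition, which is the spirit of the Miles ellipsoid but not the paper's optical-rotator estimator.

```python
import numpy as np

def mean_shape_ellipsoid(points_per_cell):
    """Approximate an average-shape ellipsoid from sampled cell points.

    Illustrative stand-in for the Miles ellipsoid: average the rank-2
    (second-order) moment tensors of the cells, then eigen-decompose.
    Eigenvectors give the orientation; square roots of the eigenvalues
    give the relative semi-axis lengths (ascending order).
    """
    tensors = []
    for pts in points_per_cell:
        pts = np.asarray(pts, dtype=float)
        centred = pts - pts.mean(axis=0)           # remove cell position
        tensors.append(centred.T @ centred / len(pts))
    mean_tensor = np.mean(tensors, axis=0)
    eigvals, eigvecs = np.linalg.eigh(mean_tensor)  # ascending eigenvalues
    semi_axes = np.sqrt(np.maximum(eigvals, 0.0))
    return semi_axes, eigvecs
```

For a cell sampled symmetrically along the coordinate axes, the recovered semi-axes simply scale with the point spread in each direction.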

Relevance: 10.00%

Abstract:

Intraspecific and interspecific architectural patterns were studied for eight tree species of a Bornean rain forest. Trees 5-19 m tall in two 4-ha permanent sample plots in primary forest were selected, and three light descriptors and seven architectural traits were measured for each tree. Two general predictions were made: (1) slow-growing individuals (or short ones) encounter lower light and have flatter crowns, fewer leaf layers, and thinner stems than do fast-growing individuals (or tall ones); (2) species with higher shade tolerance receive less light and have flatter crowns, fewer leaf layers, and thinner stems than do species with lower shade tolerance. Shade tolerance is assumed to decrease with the maximum growth rate, mortality rate, and adult stature of a species.

Relevance: 10.00%

Abstract:

Objectives. The central objective of this study was to systematically examine the internal structure of multihospital systems, determining the management principles used and the performance levels achieved in medical care and administrative areas. The Universe. The study universe consisted of short-term general American hospitals owned and operated by multihospital corporations. The corporations compared were the investor-owned (for-profit) and the voluntary multihospital systems. The individual hospital was the unit of analysis for the study. Theoretical Considerations. Contingency theory, drawing on selected aspects of the classical and human-relations schools of thought, seemed well suited to describing multihospital organization and was used in this research. The Study Hypotheses. The main null hypotheses generated were that there are no significant differences between the voluntary and the investor-owned multihospital sectors in their (1) hospital structures and (2) patient care and administrative performance levels. The Sample. A stratified random sample of 212 hospitals owned by multihospital systems was selected to represent the two study sectors equally. Of the sampled hospitals approached, 90.1% responded. The Analysis. Sixteen scales were constructed in conjunction with 16 structural variables developed from the major questions and sub-items of the questionnaire. This was followed by analysis of an additional 7 structural and 24 effectiveness (performance) measures, using frequency distributions. Finally, summary statistics and statistical testing for each variable and its sub-items were completed and recorded in 38 tables. Study Findings. While it has been argued that there are great differences between the two sectors, this study found that, with a few exceptions, the null hypotheses of no difference in the organizational and operational characteristics of non-profit and for-profit hospitals were accepted.
However, several significant differences were found in the structural variables: functional specialization and autonomy were significantly higher in the voluntary sector, while only centralization was significantly higher in the investor-owned sector. Among the effectiveness measures, occupancy rate, cost of data processing, total man-hours worked, F.T.E. ratios, and personnel per occupied bed were significantly higher in the voluntary sector. The findings indicated that both voluntary and for-profit systems were converging toward a common hierarchical corporate management approach. Factors of size and management style may be better descriptors for characterizing a specific multihospital group than its profit or non-profit status. (Abstract shortened with permission of author.)

Relevance: 10.00%

Abstract:

Tumor necrosis factor (TNF) receptor-associated factors (TRAFs) are a family of signal transducer proteins. TRAF6 is a unique member of this family in that it is involved not only in the TNF superfamily but also in the Toll-like receptor (TLR)/IL-1R (TIR) superfamily. The formation of the complex consisting of Receptor Activator of Nuclear Factor κB (RANK) and its ligand (RANKL) results in the recruitment of TRAF6, which activates the NF-κB, JNK, and MAP kinase pathways. TRAF6 is critical in signaling leading to the release of various growth factors in bone, and promotes osteoclastogenesis. TRAF6 has also been implicated as an oncogene in lung cancer and as a target in multiple myeloma. In the hope of developing small-molecule inhibitors of the TRAF6-RANK interaction, multiple steps were carried out. Computational predictions of hot-spot residues at the protein-protein interface of TRAF6 and RANK were examined. Three methods were used: Robetta, KFC2, and HotPoint, each of which uses a different methodology to determine whether a residue is a hot spot. These hot-spot predictions formed the basis for defining the binding site for in silico high-throughput screening using GOLD and the MyriaScreen database of drug/lead-like compounds. Computationally intensive molecular dynamics simulations highlighted the binding mechanism and the structural changes of TRAF6 upon hit binding. Compounds identified as hits were verified using a GST pull-down assay, comparing their inhibition to that of a RANK decoy peptide. Since many drugs fail due to lack of efficacy or toxicity, predictive models were built for evaluating the LD50 and bioavailability of our TRAF6 hits; these models can be applied to other drugs and small-molecule therapeutics as well.
Datasets of compounds and their corresponding bioavailability and LD50 values were curated, and QSAR models were built from molecular descriptors of these compounds using the k-nearest neighbor (k-NN) method; the quality of these models was cross-validated.
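The k-NN QSAR step can be sketched minimally: predict a property from molecular descriptors as the mean over the k nearest training compounds, and check quality with leave-one-out cross-validation. Function names and the q² measure below are generic QSAR conventions, not code from the thesis.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Predict a property (e.g. LD50 or bioavailability) for descriptor
    vector x as the mean over the k nearest training compounds
    (Euclidean distance in descriptor space)."""
    d = np.linalg.norm(np.asarray(X_train, float) - np.asarray(x, float), axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(y_train, float)[nearest]))

def loo_q2(X, y, k=3):
    """Leave-one-out cross-validated q^2, a standard QSAR quality
    measure: 1 - SS_res / SS_tot over the held-out predictions."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    preds = np.array([knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], k)
                      for i in range(len(y))])
    return 1.0 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2)
```

A q² close to 1 indicates that held-out compounds are predicted well; values near or below 0 indicate a model no better than predicting the training mean.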

Relevance: 10.00%

Abstract:

Garlic (Allium sativum L.) powder is an alternative for preserving the crop's sensory properties over time and extending its shelf life as a processed food. At present there is no clear definition of the sensory properties that characterize garlic, nor of the most suitable techniques for their analysis. The objectives of this work were to study different carriers and determine the most appropriate one for the sensory analysis of garlic powder, and to generate and define descriptors for the odour and flavour properties of different cultivars dehydrated by two methods: in an oven at 50°C and by freeze-drying at -50°C under vacuum. The aim is to contribute to the characterization of this product by providing a specific vocabulary with definitions, as well as a dedicated sensory methodology. Eight assessors, selected and trained according to international standards and experienced in sensory analysis, tested different carriers; once the most suitable one had been determined, they developed the descriptive language for the oven-dried and freeze-dried garlic, selecting by consensus the descriptors that best characterized the cultivars, and each term was defined. Thirty-one simple descriptors were generated. Although some of the descriptors coincided with those published in the ASTM DS 66 (1996) guide for fresh garlic, this research contributed a large number of new terms for describing the odour and flavour of oven-dried and freeze-dried garlic, which contribute to a better sensory characterization of this product.

Relevance: 10.00%

Abstract:

Label designs for wine containers are created from syntactic, semantic, and pragmatic characteristics. The objectives of this work are to compile, classify, analyse, and rank the components of wine-label design, and also to provide an instrument that allows these graphic pieces to be analysed individually, as a system, or in comparison. It can be applied by professionals who design labels and containers, by teachers in the field of design, and in guides for consumers. (…) "we designers become the symbolic link between the quality of the products made by our clients and the quality of life of those who enjoy them. We are the ones who prepare the symbolic logistics that aesthetically express what a good wine needs in order to attract attention. The point is to create an image that faithfully represents the wine's set of attributes, and to visually highlight the network of pleasures present in it. Only in this way is a relationship of trust with the brand achieved." (Santiago Zemma, 2005).

Relevance: 10.00%

Abstract:

This work arises in response to the need for the journal Olivar to have a cumulative index that groups the articles of its 16 issues by subject, author, and date. First, the file containing the descriptors assigned to each article by the technical processes area of the Facultad de Humanidades y Ciencias de la Educación (FaHCE) was analysed to assess its possible reuse for this purpose. However, the lack of consistency and normalization in the description led to its being discarded, and the 283 records that make up the collection of Olivar articles for the period 2001-2012 were re-indexed with a controlled vocabulary that also gave the description greater specificity. The terms obtained during this process were ordered alphabetically and can be reused both by the authors who assign keywords through the OJS platform recently acquired by the library and to build the aforementioned cumulative index. Finally, some of the assigned descriptors were tested in the search engines of the library of the Facultad de Humanidades y Ciencias de la Educación (BIBHUMA) and of the SciELO portal, which confirmed the importance of technical processes in giving visibility to publications and facilitating access to them for a wider community of readers.


Relevance: 10.00%

Abstract:

The reconstruction of ocean history employs a large variety of methods with origins in the biological, chemical, and physical sciences, and uses modern statistical techniques for the interpretation of extensive and complex data sets. Various sediment properties deliver useful information for reconstructing environmental parameters. Those properties that have a close relationship to environmental parameters are called "proxy variables" ("proxies" for short). Proxies are measurable descriptors for desired (but unobservable) variables. Surface water temperature is probably the most important parameter for describing the conditions of past oceans and is crucial for climate modelling. Proxies for temperature are: the abundance of microfossils dwelling in surface waters, the oxygen isotope composition of planktic foraminifers, the ratio of magnesium or strontium to calcium in calcareous shells, or the ratio of certain organic molecules (e.g. alkenones produced by coccolithophorids). Surface water salinity, which is important in the modelling of ocean circulation, is much more difficult to reconstruct; at present there is no established method for a direct determination of this parameter. Measurements associated with the paleochemistry of bottom waters, made to reconstruct bottom water age and flow, are performed on benthic foraminifers, ostracodes, and deep-sea corals. Important geochemical tracers are δ13C and Cd/Ca ratios. When using benthic foraminifers, knowledge of the sediment depth habitat of species is crucial. Reconstructions of productivity patterns are of great interest because of important links to current patterns, mixing of water masses, wind, the global carbon cycle, and biogeography. Productivity is reflected in the flux of carbon into the sediment. There are a number of fluxes other than those of organic carbon that can be useful in assessing productivity fluctuations. Among others, carbonate and opal flux have been used, as well as particulate barite.
Furthermore, microfossil assemblages contain clues to the intensity of production, as some species occur preferentially in high-productivity regions while others avoid these. One marker for the fertility of sub-surface waters (that is, nutrient availability) is the carbon isotope ratio within that water (13C/12C, expressed as δ13C). Carbon isotope ratios in today's ocean are negatively correlated with nitrate and phosphate contents. Another tracer of phosphate content in ocean waters is the Cd/Ca ratio. The correlation between this ratio and phosphate concentrations is quite well documented. A rather new development to obtain clues on ocean fertility (nitrate utilization) is the analysis of the 15N/14N ratio in organic matter. The fractionation dynamics are analogous to those of carbon isotopes. These various ratios are captured within the organisms growing within the tagged water. A number of reconstructions of the partial pressure of CO2 have been attempted using δ13C differences between planktic and benthic foraminifers and δ13C values of bulk organic material or individual organic components. To define the carbon system in sea water, two elements of the system have to be known in addition to temperature. These can be any combination of total CO2, alkalinity, or pH. To reconstruct pH, the boron isotope composition of carbonates has been used. Ba patterns have been used to infer the distribution of alkalinity in past oceans. Information relating to atmospheric circulation and climate is transported to the ocean by wind or rivers, in the form of minerals or as plant and animal remains. The most useful tracers in this respect are silt-sized particles and pollen.
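The delta notation used above can be made explicit as a small formula. A minimal sketch, assuming the commonly cited VPDB value 0.0112372 as the reference 13C/12C ratio:

```python
def delta13C(ratio_sample, ratio_standard=0.0112372):
    """Per-mil delta notation for carbon isotopes:
    d13C = (R_sample / R_standard - 1) * 1000,
    with R = 13C/12C; the default standard is the VPDB ratio
    (0.0112372, an assumed reference value for this sketch)."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0
```

A sample whose 13C/12C ratio is 0.1% above the standard thus has a δ13C of +1 per mil.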

Relevance: 10.00%

Abstract:

This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
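The MCMC machinery such trackers rely on can be illustrated with the simplest member of the family, a random-walk Metropolis sampler over a one-dimensional state. This is a generic sketch of the sampling principle only, not the article's multi-vehicle proposal scheme:

```python
import math
import random

def metropolis_hastings(log_likelihood, x0, n_steps=1000, step=0.5, seed=1):
    """Minimal random-walk Metropolis sampler (1-D state).

    At each step a Gaussian perturbation is proposed and accepted with
    probability min(1, L(prop)/L(current)); the chain's samples then
    approximate the target distribution."""
    rng = random.Random(seed)
    x, lx = x0, log_likelihood(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp = log_likelihood(prop)
        # accept with probability exp(lp - lx), capped at 1
        if math.log(rng.random()) < lp - lx:
            x, lx = prop, lp
        samples.append(x)
    return samples
```

Run against a standard-normal log-likelihood, the chain's long-run mean and variance approach 0 and 1; full trackers replace the scalar state with joint multi-vehicle configurations.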

Relevance: 10.00%

Abstract:

This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, the temporal, and the spatiotemporal components. We intend to define a set of methods, rules, and restrictions for the adequate integration of these components into the primary elements of the GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to the time and meaning relationships that could eventually be established.

Relevance: 10.00%

Abstract:

This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it therefore remains an open research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance.
Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model, and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is proven to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
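A rectified, perspective-free domain of the kind described is typically obtained by warping the image with a ground-plane homography. The following is a minimal sketch of the underlying point mapping; the matrices used in the example are toy values, not calibration data from the thesis:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D image points through a 3x3 homography H.

    Points are lifted to homogeneous coordinates, multiplied by H,
    and de-homogenized; this is the transform behind inverse
    perspective mapping onto the ground plane."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (x, y, 1)
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # de-homogenize
```

With the identity matrix points map to themselves; a calibrated H instead sends road pixels to metric ground-plane coordinates, where vehicle motion is smooth and search is cheap.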

Relevance: 10.00%

Abstract:

Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant under translations, rotations, or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moments analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets of about 200³ voxels in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to deal with the high volume of input data so that it does not become a bottleneck for the system.
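The building blocks of such volumetric descriptors are geometric moments of the voxel grid. Below is a CPU-only, illustrative sketch of a translation-normalized 3D moment; the work described above computes full Zernike moments on the GPU, which this does not attempt to reproduce:

```python
import numpy as np

def central_moment(vox, p, q, r):
    """Translation-normalized geometric moment mu_pqr of a 3-D voxel grid.

    The grid is first centred on its centroid, which makes the moment
    invariant to translations; such moments are among the raw
    ingredients from which 3-D Zernike descriptors are assembled."""
    vox = np.asarray(vox, dtype=float)
    zs, ys, xs = np.indices(vox.shape)
    m000 = vox.sum()                                   # total mass
    cz, cy, cx = (np.sum(a * vox) / m000 for a in (zs, ys, xs))
    return np.sum(((zs - cz) ** p) * ((ys - cy) ** q) * ((xs - cx) ** r) * vox)
```

For a uniform cube the zeroth-order moment equals the voxel count and all first-order central moments vanish, reflecting the centring step.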

Relevance: 10.00%

Abstract:

We present a methodology for reducing a straight-line fitting regression problem to a least-squares minimization problem. This is accomplished through the definition of a measure on the data space that takes into account directional dependences of errors, and through the use of polar descriptors for straight lines. This strategy improves robustness by avoiding singularities and non-describable lines. The methodology is powerful enough to deal with non-normal bivariate heteroscedastic data error models, but it can also supersede classical regression methods by making some particular assumptions. An implementation of the methodology for the normal bivariate case is developed and evaluated.
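With polar (Hesse normal form) descriptors, x·cos(θ) + y·sin(θ) = ρ, every line is describable and vertical lines cause no singularity, unlike slope/intercept form. A sketch for the isotropic normal case, where the orthogonal least-squares line follows from the smallest eigenvector of the data scatter matrix (the paper's general directional error measure is not reproduced here):

```python
import math
import numpy as np

def fit_line_polar(points):
    """Orthogonal least-squares line fit in Hesse/polar form.

    Returns (rho, theta) such that x*cos(theta) + y*sin(theta) = rho.
    The line's normal direction is the eigenvector of the scatter
    matrix with the smallest eigenvalue; rho is kept non-negative."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    normal = eigvecs[:, 0]                   # smallest-variance direction
    theta = math.atan2(normal[1], normal[0])
    rho = float(centroid @ normal)
    if rho < 0:                              # flip the normal if needed
        rho, theta = -rho, theta + math.pi
    return rho, theta
```

A vertical line such as x = 2, which breaks y = a·x + b regression, is handled without any special case.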

Relevance: 10.00%

Abstract:

This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, the temporal, and the spatiotemporal components. We intend to define a set of methods, rules, and restrictions for the adequate integration of these components into the primary elements of the GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to the time and meaning relationships that could eventually be established.