955 results for network modeling


Relevance: 30.00%

Abstract:

Learning the structure of a graphical model from data is a common task in a wide range of practical applications. In this paper, we focus on Gaussian Bayesian networks, i.e., on continuous data and directed acyclic graphs with a joint probability density of all variables given by a Gaussian. We propose to work in an equivalence class search space, specifically using the k-greedy equivalence search algorithm. This, combined with regularization techniques to guide the structure search, can learn sparse networks close to the one that generated the data. We provide results on some synthetic networks and on modeling the gene network of the two biological pathways regulating the biosynthesis of isoprenoids in the Arabidopsis thaliana plant.
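
A minimal sketch of the underlying idea, greedy selection of parent sets under a penalized (BIC-style) Gaussian likelihood, is given below. It is not the paper's k-greedy equivalence search: it assumes a known variable ordering so the learned graph is acyclic by construction, and all names and data are illustrative.

```python
# Minimal sketch: greedy, BIC-penalized parent selection for a linear-Gaussian
# Bayesian network. Not the paper's k-greedy equivalence search; a known
# variable ordering is assumed so the learned graph is acyclic by construction.
import numpy as np

def node_bic(data, child, parents):
    """BIC score of one node under a linear-Gaussian model given its parents."""
    n = data.shape[0]
    y = data[:, child]
    X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    n_params = len(parents) + 2        # weights + intercept + residual variance
    return loglik - 0.5 * np.log(n) * n_params

def greedy_gaussian_bn(data):
    """For each node, greedily add the earlier-ordered parent with the best BIC gain."""
    d = data.shape[1]
    parents = {v: [] for v in range(d)}
    for child in range(d):
        best = node_bic(data, child, parents[child])
        while True:
            candidates = [c for c in range(child) if c not in parents[child]]
            scored = [(node_bic(data, child, parents[child] + [c]), c) for c in candidates]
            if not scored:
                break
            top_score, top_cand = max(scored)
            if top_score <= best:      # the BIC penalty enforces sparsity
                break
            parents[child].append(top_cand)
            best = top_score
    return parents

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=1000)
    x1 = 2.0 * x0 + rng.normal(size=1000)      # x0 -> x1
    x2 = -1.5 * x1 + rng.normal(size=1000)     # x1 -> x2
    print(greedy_gaussian_bn(np.column_stack([x0, x1, x2])))  # typically {0: [], 1: [0], 2: [1]}
```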

Relevance: 30.00%

Abstract:

This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives, between variables and objectives, as well as the dependencies between variables learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and for different objective-space dimensions, and achieves results comparable to the other algorithms on some of them.
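
The generic EDA loop (sample from a probabilistic model, select the best solutions, refit the model) can be illustrated with a far simpler univariate-Gaussian model on a single objective; the paper's algorithm instead fits a multi-dimensional Bayesian network jointly over variables and objectives and handles several objectives at once. The sketch below is illustrative only.

```python
# Illustrative sketch of the generic EDA loop with a univariate Gaussian model
# on a single objective (sphere function). The paper's algorithm replaces this
# simple model with a multi-dimensional Bayesian network over variables and
# objectives and deals with several objectives simultaneously.
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def simple_eda(dim=10, pop_size=100, n_selected=30, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(generations):
        pop = rng.normal(mean, std, size=(pop_size, dim))   # sample from the model
        fitness = np.apply_along_axis(sphere, 1, pop)
        elite = pop[np.argsort(fitness)[:n_selected]]       # truncation selection
        mean = elite.mean(axis=0)                           # re-estimate the model
        std = elite.std(axis=0) + 1e-6
    return mean, sphere(mean)

if __name__ == "__main__":
    best, value = simple_eda()
    print("best point ~", np.round(best, 3), "objective ~", round(value, 6))
```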

Relevance: 30.00%

Abstract:

Developing a herd localization system capable of operating unattended in communication-challenged areas arises from the need to improve current systems in terms of cost, autonomy or any other features that a certain target group (or users in general) may demand. A network architecture for herd localization is proposed, with its corresponding hardware and a methodology to assess performance under different operating conditions. The system is designed with its eventual environmental impact in mind, so most nodes are simple, cheap and kinetically powered by animal movements: neither batteries nor sophisticated processor chips are needed. Other network elements integrating GPS and batteries operate with selectable duty cycles, thus reducing maintenance duties. The equipment has been tested on Scandinavian reindeer in Lapland, and its element models are integrated into a simulator to analyze the applicability of such a localization network in different use cases. Performance indicators (detection frequency, localization accuracy and delay) are fitted to assess the overall performance; relative system costs are also included for a range of deployments.

Relevance: 30.00%

Abstract:

Understanding the structure and dynamics of the intricate network of connections among people who consume products through the Internet is an extremely useful asset for studying emergent properties related to social behavior. This knowledge could be useful, for example, to improve the performance of personal recommendation algorithms. In this contribution, we analyzed five-year records of movie-rating transactions provided by Netflix, a movie rental platform where users rate movies from an online catalog. This dataset can be studied as a bipartite user-item network whose structure evolves in time. Even though several topological properties of subsets of this bipartite network have been reported with a model that combines random and preferential attachment mechanisms [Beguerisse Díaz et al., 2010], there are still many aspects worth exploring, as they are connected to relevant phenomena underlying the evolution of the network. In this work, we test the hypothesis that bursty human behavior is essential to describe how a bipartite user-item network evolves in time. To that end, we propose a novel model that combines, for user nodes, a network growth prescription based on a preferential attachment mechanism acting not only in the topological domain (i.e., based on node degrees) but also in the time domain. In the case of items, the model mixes degree preferential attachment and random selection. With these ingredients, the model is not only able to reproduce the asymptotic degree distribution, but also shows an excellent agreement with the Netflix data in several time-dependent topological properties.
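
A minimal sketch of such a growth rule is given below: at each step a user-item edge is added, the user endpoint is chosen with a probability mixing degree and recency (the time-domain component), and the item endpoint mixes degree preference with random selection. The parameter values are illustrative, not those fitted to the Netflix data.

```python
# Illustrative sketch of a bipartite user-item growth model in which user
# attachment mixes degree preference with a recency (time-domain) preference,
# and item attachment mixes degree preference with uniform random choice.
# Weights and rates below are illustrative, not the values fitted to Netflix.
import numpy as np

def grow_bipartite(n_edges=20000, p_new_user=0.05, p_new_item=0.02,
                   alpha_time=0.5, beta_random=0.3, seed=1):
    rng = np.random.default_rng(seed)
    user_deg, user_last = [1.0], [0.0]   # degree and time of last activity per user
    item_deg = [1.0]
    edges = []
    for t in range(1, n_edges + 1):
        # choose user: new with prob p_new_user, else degree x recency preferential
        if rng.random() < p_new_user:
            user_deg.append(1.0); user_last.append(float(t)); u = len(user_deg) - 1
        else:
            recency = np.exp(-alpha_time * (t - np.asarray(user_last)) / n_edges)
            w = np.asarray(user_deg) * recency
            u = rng.choice(len(user_deg), p=w / w.sum())
        # choose item: new with prob p_new_item, else mix preferential and random
        if rng.random() < p_new_item:
            item_deg.append(1.0); i = len(item_deg) - 1
        elif rng.random() < beta_random:
            i = rng.integers(len(item_deg))
        else:
            w = np.asarray(item_deg)
            i = rng.choice(len(item_deg), p=w / w.sum())
        user_deg[u] += 1; user_last[u] = float(t); item_deg[i] += 1
        edges.append((u, i))
    return np.asarray(user_deg), np.asarray(item_deg), edges

if __name__ == "__main__":
    ud, idg, _ = grow_bipartite()
    print("users:", len(ud), "items:", len(idg), "max item degree:", int(idg.max()))
```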

Relevance: 30.00%

Abstract:

The purpose of this research is to develop a methodological approach for assessing the potential economic and transportation impacts of transport policies. Transportation departments and other related government bodies are interested in such analyses because they are commonly misrepresented owing to insufficient data or the lack of suitable methodologies. This research aims to fill this gap with a comprehensive analysis of the available techniques that match that purpose, identifying the differences that arise when they are applied to the valuation of user benefits or to other impacts, such as social effects. As a result, this research presents an integrated approach that includes both a random utility-based multiregional Input-Output model (RUBMRIO) and a road transport network model. This model accounts for freight transport with more detail and realism because its commodity-based structure traces the linkages of inter-industry purchases and sales that use freight services within a given country. For this reason, the integrated model is applicable to various transport policies. In fact, the approach is applied to study the regional macroeconomic effects of implementing two different policies in the freight transport system of Spain: a distance-based charge per vehicle-kilometer (€/km) for Heavy Goods Vehicles (HGVs), and the introduction of Longer and Heavier Vehicles (LHVs) in the Spanish road network. The methodological approach has been evaluated on a case-by-case basis considering a selected network of highways linking the capitals of the Spanish regions. It also incorporates an economic dimension through a Multiregional Input-Output Table (MRIO) and the existing traffic count database used for model validation. The integrated approach replicates the observed conditions of trade among regions using the road freight transport system and, by comparison with the policy scenarios, determines the contributions to distributional and generative changes. The model thus estimates the economic impacts in any given region through changes in Gross Domestic Product (GDP) and employment, and identifies changes in the transportation system across all paths of the network through measures of effectiveness (MOEs). The results presented in this research provide substantive evidence that, in the assessment of transport policies, it is necessary to establish a link between the economic structure of regions and transportation services. The analysis shows that for most regions in the country, GDP and employment changes are noticeable as trade is encouraged or discouraged. The approach shows how traffic is diverted under both policies and also provides details of the pollutant emissions in both scenarios. Furthermore, pricing or regulation policies for road freight transportation systems directed at producers and consumers in the regions will promote different regional transformations across the country, leading to different conclusions. In addition, this integrated approach could be useful for assessing other policies and other countries worldwide.
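
The Input-Output core of such an assessment can be illustrated with the textbook Leontief relation x = (I - A)^(-1) f, where A holds interregional technical coefficients and f the final demand: a policy that shifts demand or costs changes f (or A), and the induced changes in regional output, GDP and employment follow. The sketch below shows only this textbook step with made-up numbers, not the RUBMRIO model with random-utility trade coefficients.

```python
# Textbook multiregional Input-Output impact sketch: regional output changes
# from a change in final demand, x = (I - A)^-1 f. The 2-region, 2-sector
# coefficients, value-added and employment ratios below are made up for
# illustration; the thesis uses a RUBMRIO model with random-utility trade flows.
import numpy as np

# interregional technical coefficient matrix A (rows/cols: region-sector pairs)
A = np.array([[0.10, 0.05, 0.03, 0.01],
              [0.04, 0.12, 0.02, 0.03],
              [0.02, 0.01, 0.11, 0.06],
              [0.01, 0.03, 0.05, 0.09]])
leontief_inverse = np.linalg.inv(np.eye(4) - A)

f_base   = np.array([100.0, 80.0, 120.0, 90.0])   # baseline final demand
f_policy = np.array([ 98.0, 80.0, 123.0, 90.0])   # demand shifted by a transport policy

x_base, x_policy = leontief_inverse @ f_base, leontief_inverse @ f_policy
delta_output = x_policy - x_base

value_added_ratio = np.array([0.45, 0.40, 0.50, 0.42])    # GDP per unit of output
jobs_per_output   = np.array([0.008, 0.006, 0.009, 0.007])

print("output change by region-sector:", np.round(delta_output, 2))
print("GDP change:", round(float(value_added_ratio @ delta_output), 2))
print("employment change:", round(float(jobs_per_output @ delta_output), 3))
```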

Relevance: 30.00%

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integration of inputs from other neurons and determines which neurons receive its output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge.

Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc.

In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated.

Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixtures of polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers.

Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics. Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers, and the proposed classifiers are empirically evaluated over real datasets.

We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features, and a web application is developed to retrieve the experts' classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology.

Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
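
One building block mentioned above, a naive Bayes classifier whose directional predictors follow von Mises distributions, can be sketched as follows. The estimates use the circular mean and a common approximation for the concentration parameter, and the data are artificial; this is an illustration of the idea, not the dissertation's implementation.

```python
# Sketch of a naive Bayes classifier with von Mises class-conditional densities
# for directional (angular) predictors. Concentration is estimated with a
# standard approximation; the data below are artificial, for illustration only.
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def fit_von_mises(theta):
    """Estimates of (mu, kappa) for angles in radians."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)
    kappa = R * (2.0 - R ** 2) / max(1.0 - R ** 2, 1e-9)   # common approximation
    return mu, kappa

def log_von_mises(theta, mu, kappa):
    return kappa * np.cos(theta - mu) - np.log(2.0 * np.pi * i0(kappa))

class DirectionalNaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        self.params_ = {c: [fit_von_mises(X[y == c, j]) for j in range(X.shape[1])]
                        for c in self.classes_}
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            s = np.log(self.priors_[c])
            s = s + sum(log_von_mises(X[:, j], *p) for j, p in enumerate(self.params_[c]))
            scores.append(s)
        return self.classes_[np.argmax(np.vstack(scores), axis=0)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.vonmises(mu=0.0, kappa=4.0, size=(200, 1))     # class 0 around 0 rad
    b = rng.vonmises(mu=np.pi, kappa=4.0, size=(200, 1))   # class 1 around pi rad
    X = np.vstack([a, b]); y = np.repeat([0, 1], 200)
    clf = DirectionalNaiveBayes().fit(X, y)
    print("training accuracy:", np.mean(clf.predict(X) == y))
```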

Relevance: 30.00%

Abstract:

Neutrophil gelatinase-associated lipocalin (NGAL) protein is attracting great interest because of its antibacterial properties, exerted by modulating iron content in competition with the iron acquisition processes of pathogenic bacteria, which rely on selective ferric iron chelators (siderophores). Besides its known high affinity for enterobactin, the most important siderophore, it has recently been shown that NGAL is able to bind Fe(III) coordinated by catechols. The selective binding of Fe(III)-catechol ligands to NGAL is studied here using iron coordination structures with one, two, and three catecholate ligands. By means of a computational approach that consists of B3LYP/6-311G(d,p) quantum calculations for geometries, electron properties and electrostatic potentials of the ligands, protein–ligand flexible docking calculations, analyses of protein–ligand interfaces, and Poisson–Boltzmann electrostatic potentials for the proteins, we study the binding of iron catecholate ligands to NGAL as a central member of the lipocalin family of proteins. This approach provides a modeling basis for exploring in silico the selective binding of iron catecholate ligands, giving a detailed picture of their interactions in terms of electrostatic effects and a network of hydrogen bonds in the protein binding pocket.

Relevance: 30.00%

Abstract:

By 2050 it is estimated that the number of worldwide Alzheimer's disease (AD) patients will quadruple from the current number of 36 million people. To date, no single test, prior to postmortem examination, can confirm that a person suffers from AD. Therefore, there is a strong need for accurate and sensitive tools for the early diagnosis of AD. The complex etiology and multiple pathogenesis of AD call for a system-level understanding of the currently available biomarkers and the study of new biomarkers via network-based modeling of heterogeneous data types. In this review, we summarize recent research on the study of AD as a connectivity syndrome. We argue that a network-based approach in biomarker discovery will provide key insights to fully understand the network degeneration hypothesis (disease starts in specific network areas and progressively spreads to connected areas of the initial loci-networks) with a potential impact for early diagnosis and disease-modifying treatments. We introduce a new framework for the quantitative study of biomarkers that can help shorten the transition between academic research and clinical diagnosis in AD.

Relevance: 30.00%

Abstract:

A great challenge for future information technologies is building reliable systems on top of unreliable components. Parameters of modern and future technology devices are affected by severe levels of process variability, and devices will degrade and even fail during the normal lifetime of the chip due to aging mechanisms. These extreme levels of variability are caused by the high degree of device miniaturization and the random placement of individual atoms. Variability is considered a "red brick" by the International Technology Roadmap for Semiconductors. The session is devoted to this topic, presenting research experiences from the Spanish Network on Variability, called VARIABLES. In this session a talk entitled "Modeling sub-threshold slope and DIBL mismatch of sub-22nm FinFet" was presented.

Relevance: 30.00%

Abstract:

The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where increased reliability is available through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of knowledge area is introduced as a building block where methods to perform a collection of tasks are included together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for method formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described, and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described, followed by a proposal to generalize the resulting architecture for reuse. Finally, some concluding comments are offered about the feasibility of using the knowledge modeling tools and methods for general application design.
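
An object-oriented reading of the knowledge-area idea can be sketched as a class that groups the tasks an area can perform with the bodies of knowledge those tasks use, and that composes sub-areas. The class, method and task names below are hypothetical illustrations, not the actual KSM, CONCEL or LINK interfaces.

```python
# Hypothetical illustration of a "knowledge area" as an object grouping the
# tasks it can perform with the knowledge bodies those tasks rely on, and
# composing sub-areas. All names are invented; this is not the real KSM API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class KnowledgeArea:
    name: str
    knowledge_bodies: Dict[str, dict] = field(default_factory=dict)  # e.g. vocabularies, rules
    tasks: Dict[str, Callable[..., object]] = field(default_factory=dict)
    sub_areas: List["KnowledgeArea"] = field(default_factory=list)

    def register_task(self, task_name, method):
        self.tasks[task_name] = method

    def perform(self, task_name, **inputs):
        """Run a task of this area, or delegate to the first sub-area that offers it."""
        if task_name in self.tasks:
            return self.tasks[task_name](self.knowledge_bodies, **inputs)
        for sub in self.sub_areas:
            try:
                return sub.perform(task_name, **inputs)
            except KeyError:
                continue
        raise KeyError(f"no method for task '{task_name}' in area '{self.name}'")


def diagnose_congestion(knowledge, flow, capacity):
    """Toy task: classify a road section using a threshold stored as domain knowledge."""
    return "congested" if flow / capacity > knowledge["thresholds"]["congestion"] else "free-flow"


if __name__ == "__main__":
    traffic = KnowledgeArea("traffic-management",
                            knowledge_bodies={"thresholds": {"congestion": 0.85}})
    traffic.register_task("diagnose", diagnose_congestion)
    print(traffic.perform("diagnose", flow=950, capacity=1000))   # -> "congested"
```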

Relevance: 30.00%

Abstract:

This paper presents the knowledge model of a distributed decision support system that has been designed for the management of a national network in Ukraine. It shows how advanced Artificial Intelligence techniques (multiagent systems and knowledge modelling) have been applied to solve this real-world decision support problem: on the one hand, its distributed nature, implied by the different loci of decision-making at the network nodes, suggested a multiagent solution; on the other, the complexity of problem-solving for local network administration made it useful to apply knowledge modelling techniques in order to structure the different knowledge types and reasoning processes involved. The paper sets out with a description of our particular management problem. Subsequently, our agent model is described, pointing out the local problem-solving and coordination knowledge models. Finally, the dynamics of the approach are illustrated by an example.

Relevance: 30.00%

Abstract:

The assessment of introducing Longer and Heavier Vehicles (LHVs) on road freight transport demand is performed in this paper by applying an integrated modeling approach composed of a Random Utility-Based Multiregional Input-Output model (RUBMRIO) and a road transport network model. The approach strongly supports the notion that changes in transport costs derived from allowing LHVs, as well as the economic structure of regions, have both direct and indirect effects on the road freight transport system. In addition, we estimate the magnitude and extent of demand changes in the road freight transportation system by using the commodity-based structure of the approach to identify the effect on traffic flows and on pollutant emissions over the whole Spanish network, considering a sensitivity analysis of the main parameters that determine the share of Heavy Goods Vehicles (HGVs) and LHVs. The results show that the introduction of LHVs will strengthen the competitiveness of the road haulage sector by reducing costs, emissions, and the total number of freight vehicles required.
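
The traffic-and-emissions side of such an assessment can be illustrated with a simple calculation: given freight demand per link and an assumed split between HGVs and LHVs (with different payloads and emission factors), vehicle-kilometres and emissions follow, and sweeping the LHV share gives a toy sensitivity analysis. All factors below are illustrative, not the values used in the paper.

```python
# Toy sensitivity sketch: network-wide vehicle-km and CO2 emissions as a
# function of the LHV share of road freight. Payloads, emission factors and
# link demands are illustrative numbers, not those used in the study.
import numpy as np

link_tonne_km = np.array([1.2e6, 3.4e6, 0.8e6, 2.1e6])   # freight demand per link (t-km)

PAYLOAD_HGV, PAYLOAD_LHV = 16.0, 24.0   # tonnes carried per vehicle (assumed)
EF_HGV, EF_LHV = 0.9, 1.1               # kg CO2 per vehicle-km (assumed)

def network_totals(lhv_share):
    """Vehicle-km and CO2 for a given share of tonne-km moved by LHVs."""
    vkm_hgv = (1.0 - lhv_share) * link_tonne_km / PAYLOAD_HGV
    vkm_lhv = lhv_share * link_tonne_km / PAYLOAD_LHV
    vkm = vkm_hgv.sum() + vkm_lhv.sum()
    co2 = (EF_HGV * vkm_hgv + EF_LHV * vkm_lhv).sum()
    return vkm, co2

for share in (0.0, 0.2, 0.4):
    vkm, co2 = network_totals(share)
    print(f"LHV share {share:.0%}: {vkm:,.0f} veh-km, {co2 / 1000:,.1f} t CO2")
```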

Relevance: 30.00%

Abstract:

This paper presents an innovative background modeling technique that is able to accurately segment foreground regions in RGB-D imagery (RGB plus depth). The technique is based on a Bayesian framework that efficiently fuses different sources of information to segment the foreground. In particular, the final segmentation is obtained by considering a prediction of the foreground regions, carried out by a novel Bayesian network with a depth-based dynamic model, together with two independent depth- and color-based mixture-of-Gaussians background models. The efficient Bayesian combination of all these data reduces the noise and uncertainties introduced by the color and depth features and the corresponding models. As a result, more compact segmentations and refined foreground object silhouettes are obtained. Experimental results with different databases suggest that the proposed technique outperforms existing state-of-the-art algorithms.
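
The fusion step can be illustrated per pixel: independent color and depth background models each yield a likelihood, and Bayes' rule combines them with a foreground prior standing in for the predicted foreground regions. The single Gaussians and flat foreground model below are deliberately much simpler than the paper's mixture models and dynamic Bayesian network.

```python
# Per-pixel sketch of Bayesian fusion of independent color- and depth-based
# foreground evidence. Single Gaussians stand in for the paper's mixture-of-
# Gaussians background models, and a constant prior stands in for the
# prediction of the foreground regions; all parameters are illustrative.
import numpy as np

def gaussian_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def foreground_posterior(color, depth, bg_color, bg_depth, prior_fg=0.3):
    """P(foreground | color, depth) assuming conditional independence of the cues."""
    # likelihood of the observation under the background models
    lik_bg = gaussian_pdf(color, *bg_color) * gaussian_pdf(depth, *bg_depth)
    # a flat foreground model over the observable ranges (0-255 gray, 0-10 m depth)
    lik_fg = (1.0 / 255.0) * (1.0 / 10.0)
    num = lik_fg * prior_fg
    return num / (num + lik_bg * (1.0 - prior_fg))

if __name__ == "__main__":
    bg_color = (120.0, 8.0)   # background gray level: mean, std
    bg_depth = (3.0, 0.05)    # background depth in metres: mean, std
    print("background-like pixel:", round(foreground_posterior(122, 3.01, bg_color, bg_depth), 3))
    print("foreground-like pixel:", round(foreground_posterior(60, 1.4, bg_color, bg_depth), 3))
```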

Relevance: 30.00%

Abstract:

At early stages in visual processing cells respond to local stimuli with specific features such as orientation and spatial frequency. Although the receptive fields of these cells have been thought to be local and independent, recent physiological and psychophysical evidence has accumulated, indicating that the cells participate in a rich network of local connections. Thus, these local processing units can integrate information over much larger parts of the visual field; the pattern of their response to a stimulus apparently depends on the context presented. To explore the pattern of lateral interactions in human visual cortex under different context conditions we used a novel chain lateral masking detection paradigm, in which human observers performed a detection task in the presence of different length chains of high-contrast-flanked Gabor signals. The results indicated a nonmonotonic relation of the detection threshold with the number of flankers. Remote flankers had a stronger effect on target detection when the space between them was filled with other flankers, indicating that the detection threshold is caused by dynamics of large neuronal populations in the neocortex, with a major interplay between excitation and inhibition. We considered a model of the primary visual cortex as a network consisting of excitatory and inhibitory cell populations, with both short- and long-range interactions. The model exhibited a behavior similar to the experimental results throughout a range of parameters. Experimental and modeling results indicated that long-range connections play an important role in visual perception, possibly mediating the effects of context.
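
A standard way to write down such an excitatory-inhibitory population model is a pair of Wilson-Cowan-style rate equations; the coupling constants and plain Euler integration below are illustrative and much reduced compared with the spatially extended network with short- and long-range connections studied in the paper.

```python
# Illustrative Wilson-Cowan-style rate model of coupled excitatory (E) and
# inhibitory (I) populations driven by an external stimulus. Parameters and
# the plain Euler integration are illustrative; the paper's model is a
# spatially extended network with short- and long-range interactions.
import numpy as np

def sigmoid(x, gain=1.5, threshold=3.0):
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def simulate(stimulus=2.0, w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0,
             tau_e=10.0, tau_i=20.0, dt=0.1, steps=3000):
    E, I = 0.1, 0.1
    trace = []
    for _ in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + stimulus)) / tau_e
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace.append((E, I))
    return np.asarray(trace)

if __name__ == "__main__":
    for s in (0.5, 2.0, 4.0):                       # increasing stimulus drive
        E_final, I_final = simulate(stimulus=s)[-1]
        print(f"drive {s}: steady E ~ {E_final:.3f}, I ~ {I_final:.3f}")
```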

Relevance: 30.00%

Abstract:

Although much of the brain’s functional organization is genetically predetermined, it appears that some noninnate functions can come to depend on dedicated and segregated neural tissue. In this paper, we describe a series of experiments that have investigated the neural development and organization of one such noninnate function: letter recognition. Functional neuroimaging demonstrates that letter and digit recognition depend on different neural substrates in some literate adults. How could the processing of two stimulus categories that are distinguished solely by cultural conventions become segregated in the brain? One possibility is that correlation-based learning in the brain leads to a spatial organization in cortex that reflects the temporal and spatial clustering of letters with letters in the environment. Simulations confirm that environmental co-occurrence does indeed lead to spatial localization in a neural network that uses correlation-based learning. Furthermore, behavioral studies confirm one critical prediction of this co-occurrence hypothesis, namely, that subjects exposed to a visual environment in which letters and digits occur together rather than separately (postal workers who process letters and digits together in Canadian postal codes) do indeed show less behavioral evidence for segregated letter and digit processing.
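
The correlation-based learning at the heart of these simulations can be illustrated with a single linear unit trained with Oja's Hebbian rule, whose weights converge toward the dominant correlation structure of its inputs (here, a pair of channels that co-occur). This toy is far smaller than the network simulations described, and the input statistics are invented.

```python
# Toy illustration of correlation-based (Hebbian) learning: a single linear
# unit trained with Oja's rule converges toward the leading eigenvector of the
# input correlation matrix, i.e. its weights come to mirror which inputs tend
# to occur together. Input statistics are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# two input channels that co-occur strongly (think "letters seen with letters"),
# plus two channels that are active independently of them
n_samples, lr = 20000, 0.01
co = rng.normal(size=n_samples)
X = np.column_stack([co + 0.1 * rng.normal(size=n_samples),
                     co + 0.1 * rng.normal(size=n_samples),
                     rng.normal(size=n_samples),
                     rng.normal(size=n_samples)])

w = rng.normal(scale=0.1, size=4)
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)          # Oja's rule: Hebbian term with weight decay

print("learned weights:", np.round(w, 2))   # largest in magnitude on the co-occurring pair
```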