988 results for semi-parametric estimation


Relevance: 30.00%

Abstract:

Background: For analyzing longitudinal familial data we adopted a log-linear form to incorporate heterogeneity in genetic variance components over time, together with a serial correlation term in the genetic effects at different ages. Because multiple measures are available on the same individual, we allowed environmental correlations that may change across time. Results: Systolic blood pressure from family members of the first and second cohorts was used in the current analysis. Measurements from subjects receiving hypertension treatment were treated as censored values and corrected. An initial check of the variance and covariance functions proposed for analyzing longitudinal familial data, using empirical semi-variogram plots, indicated that the observed trait dispersion pattern follows the adopted assumptions. Conclusion: Corrections for censored phenotypes based on ordinary linear models may provide an appropriately simple way to correct the data while retaining its original variability. In addition, empirical semi-variogram plots are useful for diagnosing the adopted (co)variance model.
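
As a rough illustration of the diagnostic mentioned above, the following sketch computes an empirical semi-variogram from repeated measures; it is a minimal, generic implementation in Python, and the variable names, synthetic values and binning scheme are assumptions, not the authors' code.

```python
import numpy as np

def empirical_semivariogram(times, values, bin_edges):
    """Empirical semi-variogram: 0.5 * mean[(z(t_i) - z(t_j))^2] per lag bin."""
    lags, sqdiffs = [], []
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            lags.append(abs(times[i] - times[j]))
            sqdiffs.append(0.5 * (values[i] - values[j]) ** 2)
    lags, sqdiffs = np.asarray(lags), np.asarray(sqdiffs)
    gamma = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (lags >= lo) & (lags < hi)
        gamma.append(sqdiffs[mask].mean() if mask.any() else np.nan)
    return np.asarray(gamma)

# Illustrative (synthetic) repeated blood-pressure measures on one subject
ages = np.array([30.0, 35.0, 40.0, 45.0, 50.0])
sbp = np.array([118.0, 121.0, 127.0, 131.0, 138.0])
print(empirical_semivariogram(ages, sbp, bin_edges=np.array([0, 5, 10, 15, 20, 25])))
```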

Relevance: 30.00%

Abstract:

The objective of this thesis work is the refined estimation of earthquake source parameters. To this end we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, i.e. corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green's function approach is a very useful tool for studying seismic source properties, since Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without resorting to approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to: (1) a large event, the Mw 6.3 2009 L’Aquila mainshock (Central Italy); (2) moderate events, a cluster of earthquakes of the 2009 L’Aquila sequence with moment magnitudes between 3 and 5.6; and (3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
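
A common way to obtain the corner frequency and low-frequency spectral level from a displacement spectrum is a non-linear fit of an omega-square (Brune-type) model. The sketch below shows such a generic fit in Python and deliberately omits the attenuation and site corrections described above; the model form, synthetic data and parameter values are assumptions, not the thesis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc):
    """Omega-square source model: flat level omega0 below the corner frequency fc."""
    return omega0 / (1.0 + (f / fc) ** 2)

# Illustrative synthetic displacement spectrum (log-spaced frequencies, multiplicative noise)
f = np.logspace(-1, 2, 200)                                  # Hz
rng = np.random.default_rng(0)
observed = brune_spectrum(f, 1e-6, 5.0) * rng.lognormal(0.0, 0.1, f.size)

# Fit in log space so both parameters stay positive and low/high frequencies carry similar weight
def log_model(f, log_omega0, log_fc):
    return log_omega0 + np.log(brune_spectrum(f, 1.0, np.exp(log_fc)))

popt, _ = curve_fit(log_model, f, np.log(observed), p0=[np.log(observed[0]), 0.0])
omega0_hat, fc_hat = np.exp(popt)
print(f"omega0 ~ {omega0_hat:.3e}, fc ~ {fc_hat:.2f} Hz")
```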

Relevance: 30.00%

Abstract:

The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types thanks to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast-agent concentration curves versus time is very simple yet operator-dependent, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissue. Model-based investigations, on the other hand, typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, nonlinear modelling is computationally demanding and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for segmentation and classification of breast lesions.
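
For context, a classic bi-compartmental description of this kind is the standard Tofts model, C_t(t) = K^trans ∫ C_p(τ) exp(-(K^trans/v_e)(t-τ)) dτ. The sketch below fits it pixel-wise with non-linear least squares, assuming the arterial input function C_p is known; it is a generic illustration, not the thesis pipeline, and the input-function shape and parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0.0, 5.0, 1.0 / 60.0)          # minutes, 1 s sampling
cp = 5.0 * (t ** 2) * np.exp(-t / 0.4)       # placeholder arterial input function (mM)
dt = t[1] - t[0]

def tofts(t, ktrans, ve):
    """Standard Tofts model via discrete convolution of Cp with an exponential kernel."""
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, kernel)[: t.size] * dt

# Synthetic tissue concentration curve for one pixel, plus noise
rng = np.random.default_rng(1)
ct = tofts(t, 0.25, 0.35) + rng.normal(0.0, 0.01, t.size)

(ktrans_hat, ve_hat), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.2],
                                    bounds=([1e-4, 1e-3], [2.0, 1.0]))
print(f"Ktrans ~ {ktrans_hat:.3f} /min, ve ~ {ve_hat:.3f}")
```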

Relevance: 30.00%

Abstract:

The biogenic production of NO in the soil accounts for between 10% and 40% of the global total. A large part of the uncertainty in the estimation of biogenic emissions stems from a shortage of measurements in arid regions, which comprise 40% of the Earth's land surface area. This study examined the emission of NO from three ecosystems in southern Africa covering an aridity gradient from semi-arid savannas in South Africa to the hyper-arid Namib Desert in Namibia. A laboratory method was used to determine the release of NO as a function of soil moisture and soil temperature, and various methods were used to up-scale the net potential NO emissions determined in the laboratory to the vegetation patch, landscape or regional level. The importance of landscape, vegetation and climatic characteristics is emphasized. The first study took place in a semi-arid savanna region in South Africa, where soils were sampled from four landscape positions in the Kruger National Park. The maximum NO emission occurred at soil moisture contents of 10%-20% water-filled pore space (WFPS). The highest net potential NO emissions came from the low-lying landscape positions, which have the largest nitrogen (N) stocks and the largest N inputs. Net potential NO fluxes obtained in the laboratory were converted into field fluxes for the period 2003-2005 for the four landscape positions, using soil moisture and temperature data obtained in situ at the Kruger National Park flux tower site. The NO emissions ranged from 1.5 to 8.5 kg ha^-1 a^-1. The field fluxes were up-scaled to a regional basis using geographic information system (GIS) based techniques; this indicated that the highest NO emissions came from the Midslope positions because of their large geographical extent in the research area. Total emissions ranged from 20×10^3 kg in 2004 to 34×10^3 kg in 2003 for the 56,000 ha Skukuza land type. The second study took place in an arid savanna ecosystem in the Kalahari, Botswana, where soils were collected from four differing vegetation patch types: Pan, Annual Grassland, Perennial Grassland and Bush Encroached patches. The maximum net potential NO fluxes ranged from 0.27 ng m^-2 s^-1 in the Pan patches to 2.95 ng m^-2 s^-1 in the Perennial Grassland patches. The net potential NO emissions were up-scaled for the year December 2005-November 2006 using (1) the net potential NO emissions determined in the laboratory, (2) the vegetation patch distribution obtained from LANDSAT NDVI measurements, (3) soil moisture contents estimated from ENVISAT ASAR measurements, and (4) soil surface temperatures from MODIS 8-day land surface temperature measurements. This up-scaling procedure gave NO fluxes ranging from 1.8 g ha^-1 month^-1 in the winter months (June and July) to 323 g ha^-1 month^-1 in the summer months (January-March). Differences occurred between the vegetation patches, with the highest NO fluxes in the Perennial Grassland patches and the lowest in the Pan patches. Over the course of the year the mean up-scaled NO emission for the studied region was 0.54 kg ha^-1 a^-1, which accounts for a loss of approximately 7.4% of the estimated N input to the region. The third study took place in the hyper-arid Namib Desert in Namibia, where soils were sampled from three ecosystems: Dunes, Gravel Plains and the Riparian zone of the Kuiseb River.
The net potential NO flux measured in the laboratory was used to estimate the NO flux for the Namib Desert for 2006 using modelled soil moisture and temperature data from the European Centre for Medium-Range Weather Forecasts (ECMWF) operational model at a 36 km x 35 km spatial resolution. The maximum net potential NO production occurred at low soil moisture contents (<10% WFPS), and the optimal temperature was 25°C in the Dune and Riparian ecosystems and 35°C in the Gravel Plain ecosystems. The maximum net potential NO fluxes ranged from 3.0 ng m^-2 s^-1 in the Riparian ecosystem to 6.2 ng m^-2 s^-1 in the Gravel Plains ecosystem. Up-scaling the net potential NO flux gave NO fluxes of up to 0.062 kg ha^-1 a^-1 in the Dune ecosystem and 0.544 kg ha^-1 a^-1 in the Gravel Plain ecosystem. These studies show that NO is emitted ubiquitously from terrestrial ecosystems; as such, the NO emission potential of deserts and scrublands should be taken into account in global NO models. The emission of NO is influenced by various factors such as landscape, vegetation and climate. This study looks at the potential emissions from certain arid and semi-arid environments in southern Africa and other parts of the world and discusses some of the important factors controlling the emission of NO from the soil.
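
To make the up-scaling arithmetic concrete, the short sketch below converts a laboratory net potential NO flux (in ng m^-2 s^-1) to kg ha^-1 a^-1 and applies a simple moisture/temperature scaling; the response-curve shapes and parameter values are illustrative assumptions, not those derived in the study.

```python
NG_PER_KG = 1e12
M2_PER_HA = 1e4
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def to_kg_per_ha_per_year(flux_ng_m2_s):
    """Convert a flux from ng m^-2 s^-1 to kg ha^-1 a^-1."""
    return flux_ng_m2_s / NG_PER_KG * M2_PER_HA * SECONDS_PER_YEAR

def scaled_flux(potential_ng_m2_s, wfps, temp_c, wfps_opt=15.0, q10=2.0, temp_ref=25.0):
    """Illustrative scaling: parabolic moisture response around an optimum WFPS
    and an exponential (Q10-type) temperature response."""
    moisture = max(0.0, 1.0 - ((wfps - wfps_opt) / wfps_opt) ** 2)
    temperature = q10 ** ((temp_c - temp_ref) / 10.0)
    return potential_ng_m2_s * moisture * temperature

# Example: a 6.2 ng m^-2 s^-1 potential flux at 12% WFPS and 30 degrees C
flux = scaled_flux(6.2, wfps=12.0, temp_c=30.0)
print(f"{flux:.2f} ng m^-2 s^-1  ->  {to_kg_per_ha_per_year(flux):.2f} kg ha^-1 a^-1")
```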

Relevance: 30.00%

Abstract:

We consider a time-inhomogeneous diffusion process given by a stochastic differential equation whose drift term contains a deterministic T-periodic signal of known periodicity. The signal is assumed to lie in a Besov space, and we estimate it using a nonparametric wavelet estimator. Our estimator is inspired by the thresholded wavelet density estimator constructed in 1996 for the classical i.i.d. model by Donoho, Johnstone, Kerkyacharian and Picard. Under certain ergodicity assumptions on the process we obtain nonparametric convergence rates which, up to a logarithmic term, match the rates in the classical i.i.d. case. These rates are established by means of oracle inequalities that rest on results for discrete-time Markov chains by Clémençon (2001). We also consider a technically simpler special case and present some computer simulations of the estimator.
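
The thresholding idea underlying such estimators is easy to sketch: expand a noisy observation in a wavelet basis, shrink small detail coefficients towards zero, and reconstruct. The Python example below does this for a noisy periodic signal with PyWavelets; it is a generic denoising sketch under a simple i.i.d.-noise assumption, not the drift estimator for diffusions developed in the thesis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
T = 1.0
t = np.linspace(0.0, T, 1024, endpoint=False)
signal = np.sin(2 * np.pi * t / T) + 0.5 * np.sin(6 * np.pi * t / T)   # T-periodic signal
noisy = signal + 0.3 * rng.standard_normal(t.size)

# Wavelet decomposition, soft thresholding of detail coefficients, reconstruction
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise level from the finest scale
thresh = sigma * np.sqrt(2 * np.log(t.size))            # universal threshold
denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(denoised, "db4")

print("RMSE noisy   :", np.sqrt(np.mean((noisy - signal) ** 2)))
print("RMSE estimate:", np.sqrt(np.mean((estimate - signal) ** 2)))
```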

Relevance: 30.00%

Abstract:

In many applications the observed data can be viewed as a censored version of a high-dimensional full-data random variable X. Because of the curse of dimensionality, it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored-data model for such a high-dimensional censored data structure. We provide a general method for constructing one-step estimators that are efficient at a chosen submodel of the full-data model, remain well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
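
To illustrate how an estimator can depend on a model for the censoring mechanism, the sketch below computes a simple inverse-probability-of-censoring-weighted (IPCW) estimate of a marginal survival probability from right-censored data. It is a much simpler relative of the locally efficient one-step estimators discussed above; the data and names are purely illustrative.

```python
import numpy as np

def km_survival(times, events, grid):
    """Kaplan-Meier survival curve evaluated on a grid (events=1 means observed)."""
    surv = np.ones_like(grid, dtype=float)
    for i, g in enumerate(grid):
        s = 1.0
        for u in np.unique(times[(times <= g) & (events == 1)]):
            at_risk = np.sum(times >= u)
            d = np.sum((times == u) & (events == 1))
            s *= 1.0 - d / at_risk
        surv[i] = s
    return surv

rng = np.random.default_rng(2)
n = 500
t_true = rng.exponential(2.0, n)            # event times
c = rng.exponential(3.0, n)                 # censoring times
obs = np.minimum(t_true, c)
delta = (t_true <= c).astype(int)           # 1 = event observed

# IPCW estimate of P(T > t0): weight uncensored subjects by 1 / G(T), G = censoring survival
t0 = 1.5
g_at_obs = km_survival(obs, 1 - delta, obs)  # Kaplan-Meier for the censoring distribution
weights = delta / np.clip(g_at_obs, 1e-8, None)
p_hat = np.mean(weights * (obs > t0))
print(f"IPCW estimate of P(T > {t0}): {p_hat:.3f}  (truth: {np.exp(-t0 / 2.0):.3f})")
```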

Relevance: 30.00%

Abstract:

This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases of our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although this is not true in general, with our methods profiling and backfitting are asymptotically equivalent. We also consider pseudolikelihood methods in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
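
Backfitting in its simplest form alternates between updating the parametric part on partial residuals and re-smoothing the nonparametric part. The sketch below does this for a partially linear model y = x*beta + f(t) + error with a Nadaraya-Watson smoother; it is a bare-bones illustration of the alternation on synthetic data, not the estimators analysed in the paper.

```python
import numpy as np

def nw_smooth(t_grid, t, r, bandwidth):
    """Nadaraya-Watson kernel smoother of residuals r, evaluated at t_grid."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
t = rng.uniform(0.0, 1.0, size=n)
f = np.sin(2 * np.pi * t)
y = 1.5 * x + f + 0.3 * rng.normal(size=n)

beta, f_hat = 0.0, np.zeros(n)
for _ in range(50):                                   # backfitting iterations
    beta = np.sum(x * (y - f_hat)) / np.sum(x ** 2)   # OLS step on partial residuals
    f_hat = nw_smooth(t, t, y - beta * x, bandwidth=0.05)
    f_hat -= f_hat.mean()                             # centering (the true f has mean zero here)

print(f"estimated beta: {beta:.3f} (truth 1.5)")
```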

Relevance: 30.00%

Abstract:

In recent years, researchers in the health and social sciences have become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of an exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs subsequent to the exposure and prior to the outcome. Natural direct and indirect effects are of particular interest as they generally combine to produce the total effect of the exposure and therefore provide insight into the mechanism by which it operates to produce the outcome. A semiparametric theory has recently been proposed for making inferences about marginal mean natural direct and indirect effects in observational studies (Tchetgen Tchetgen and Shpitser, 2011); it delivers multiply robust, locally efficient estimators of the marginal direct and indirect effects and thus generalizes previous results for total effects to the mediation setting. In this paper we extend the new theory to handle a setting in which a parametric model for the natural direct (indirect) effect within levels of pre-exposure variables is specified and the model for the observed data likelihood is otherwise unrestricted. We show that estimation is generally not feasible in this model because of the curse of dimensionality associated with the required estimation of auxiliary conditional densities or expectations given high-dimensional covariates. We therefore consider multiply robust estimation and propose a more general model which assumes that a subset, but not necessarily all, of several working models holds.
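
As a point of reference, when linear models without exposure-mediator interaction are assumed for the mediator and the outcome, the natural direct and indirect effects reduce to the familiar product-of-coefficients decomposition. The sketch below computes that simple plug-in (non-robust) version on simulated data; it is not the multiply robust estimator proposed in the paper, and all variable names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
c = rng.normal(size=n)                     # pre-exposure covariate
a = rng.binomial(1, 0.5, size=n)           # exposure
m = 0.8 * a + 0.5 * c + rng.normal(size=n)             # mediator model
y = 1.2 * a + 0.9 * m + 0.3 * c + rng.normal(size=n)   # outcome model

def ols(design, response):
    return np.linalg.lstsq(design, response, rcond=None)[0]

ones = np.ones(n)
beta = ols(np.column_stack([ones, a, c]), m)        # mediator regression
theta = ols(np.column_stack([ones, a, m, c]), y)    # outcome regression

nde = theta[1]               # natural direct effect (no A-M interaction assumed)
nie = theta[2] * beta[1]     # natural indirect effect: product of coefficients
print(f"NDE ~ {nde:.3f} (truth 1.2), NIE ~ {nie:.3f} (truth {0.9 * 0.8:.2f})")
```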

Relevance: 30.00%

Abstract:

It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
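
The divide-and-conquer case can be made concrete with a cost recurrence: if type and mode information guarantees that head unification cannot fail, a lower bound can be obtained by recursively summing the costs of the non-failing clause bodies. The Python sketch below evaluates such a recurrence for a mergesort-like predicate; the cost constants and recurrence shape are illustrative assumptions, not output of the analysis described in the paper.

```python
from functools import lru_cache

# Assumed per-clause costs (in abstract "resolution step" units)
SPLIT_COST = 1      # cost per element when splitting the input list
MERGE_COST = 1      # cost per element merged (lower bound: the shorter half)

@lru_cache(maxsize=None)
def lower_bound_cost(n: int) -> int:
    """Lower bound on the cost of a mergesort-like divide-and-conquer predicate
    on an input of size n, assuming head unification cannot fail."""
    if n <= 1:
        return 1                               # base clause: one resolution step
    half, rest = n // 2, n - n // 2
    split = SPLIT_COST * n                     # the whole list is traversed to split it
    merge = MERGE_COST * half                  # merging consumes at least the shorter half
    return 1 + split + lower_bound_cost(half) + lower_bound_cost(rest) + merge

for n in (1, 8, 64, 1024):
    print(n, lower_bound_cost(n))
```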

Relevance: 30.00%

Abstract:

Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the integration of inputs from other neurons and determines which neurons receive its output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge.

Uncertainty is a key feature in many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty, and probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data, and the continuous data can be linear (e.g., the length or width of a dendrite) or directional (e.g., the direction of the axon). These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc.

In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. A simulation algorithm is then used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model’s ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated.

We then address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixtures-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers.

Next, we address the problem of including directional data in Bayesian networks. These data have special properties that rule out the use of classical statistics, so specific distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length, artificial examples are used to illustrate the behavior of the classifiers, and the proposed classifiers are empirically evaluated on real datasets.

We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features, and a web application is developed to retrieve the experts’ classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. A methodology for building a model which captures the opinions of all the experts is then presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. A thorough analysis of the consensus model identifies different behaviors among the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology.

Finally, we study a classification problem where the true class label of the training instances is not known; instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
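
To give a flavour of the directional naive Bayes extension described above, the sketch below fits per-class von Mises distributions to a single angular feature and classifies by maximum posterior. It is a minimal illustration using scipy, with synthetic angles, and is not the thesis implementation.

```python
import numpy as np
from scipy.stats import vonmises

rng = np.random.default_rng(5)

# Synthetic angular feature (e.g., axon direction in radians) for two neuron classes
angles_a = vonmises.rvs(kappa=4.0, loc=0.5, size=200, random_state=rng)
angles_b = vonmises.rvs(kappa=4.0, loc=2.5, size=200, random_state=rng)
x = np.concatenate([angles_a, angles_b])
y = np.array([0] * 200 + [1] * 200)

# Fit one von Mises distribution per class (scale fixed to 1, as usual for circular data)
params, priors = {}, {}
for label in (0, 1):
    kappa, loc, _ = vonmises.fit(x[y == label], fscale=1)
    params[label] = (kappa, loc)
    priors[label] = np.mean(y == label)

def predict(angle):
    """Naive Bayes decision: maximize log prior + log von Mises likelihood."""
    scores = {label: np.log(priors[label])
              + vonmises.logpdf(angle, params[label][0], loc=params[label][1])
              for label in (0, 1)}
    return max(scores, key=scores.get)

accuracy = np.mean([predict(a) == lab for a, lab in zip(x, y)])
print(f"training accuracy: {accuracy:.2f}")
```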

Relevance: 30.00%

Abstract:

The extreme runup is a key parameter for shore risk analyses, in which an accurate and quantitative estimation of the upper limit reached by waves is essential. Runup can be better approximated by splitting it into the setup and swash semi-amplitude contributions. In experimental studies, recording the setup is difficult owing to infragravity motions within the surf zone; hence it would be desirable to measure the setup with available methodologies and devices. In this research, the convenience of estimating the setup directly as the mean level in the swash zone for experimental runup analysis is evaluated through a physical model. A physical mobile-bed model was set up in a wave flume at the Laboratory for Maritime Experimentation of CEDEX. The wave flume is 36 metres long, 6.5 metres wide and 1.3 metres high. The physical model was designed to cover a reasonable range of parameters: three different slopes (1/50, 1/30 and 1/20), two sand grain sizes (D50 = 0.12 mm and 0.70 mm) and a range of the deep-water Iribarren number (ξ0) from 0.1 to 0.6. The best formulations were chosen for estimating a theoretical setup in the physical model application. Once the theoretical setup had been obtained, it was compared with the setup estimated directly as the mean level of the swash oscillation, as usually considered in extreme runup analyses. A good correlation was found between the theoretical and the time-averaged setup, and a relation between them is proposed. Extreme runup is analysed as the sum of the setup and the swash semi-amplitude, and an equation is proposed that could be applied to reflective beaches with a strong dependence on the foreshore slope.
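
For reference, the deep-water Iribarren number used to characterise the tests is ξ0 = tan β / sqrt(H0/L0) with L0 = g T^2 / (2π), and the runup decomposition discussed above is simply runup = setup + swash semi-amplitude. The short sketch below evaluates both with illustrative values, not measurements from the flume experiments.

```python
import numpy as np

def iribarren(slope, h0, period):
    """Deep-water Iribarren number: xi0 = tan(beta) / sqrt(H0 / L0), L0 = g T^2 / (2 pi)."""
    l0 = 9.81 * period ** 2 / (2.0 * np.pi)
    return slope / np.sqrt(h0 / l0)

def extreme_runup(setup, swash_excursion):
    """Runup as the sum of the setup and the swash semi-amplitude (half the swash excursion)."""
    return setup + 0.5 * swash_excursion

# Illustrative model-scale values only (metres, seconds)
xi0 = iribarren(slope=1 / 30, h0=0.12, period=1.8)
r = extreme_runup(setup=0.015, swash_excursion=0.06)
print(f"xi0 = {xi0:.2f}, runup = {r * 100:.1f} cm")
```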

Relevance: 30.00%

Abstract:

A comprehensive assessment of nitrogen (N) flows at the landscape scale is fundamental to understanding spatial interactions in the N cascade and to informing the development of locally optimised N management strategies. To explore these interactions, complete N budgets were estimated for two contrasting hydrological catchments (dominated by agricultural grassland vs. semi-natural peat-dominated moorland), forming part of an intensively studied landscape in southern Scotland. Local-scale atmospheric dispersion modelling and detailed farm and field inventories provided high-resolution estimates of input fluxes. Direct agricultural inputs (i.e. grazing excreta, N2 fixation, organic and synthetic fertiliser) accounted for most of the catchment N inputs, representing 82% in the grassland and 62% in the moorland catchment, while atmospheric deposition made a significant contribution, particularly in the moorland catchment, where it contributed 38% of the N inputs. The estimated catchment N budgets highlighted areas of key uncertainty, particularly N2 exchange and stream N export. The resulting N balances suggest that the study catchments have a limited capacity to store N within soils, vegetation and groundwater. The "catchment N retention", i.e. the amount of N which is either stored within the catchment or lost through atmospheric emissions, was estimated to be 13% of the net anthropogenic input in the moorland and 61% in the grassland catchment. These values contrast with regional-scale estimates: catchment retentions of net anthropogenic input estimated within Europe at the regional scale range from 50% to 90%, with an average of 82% (Billen et al., 2011). This study emphasises the need for detailed budget analyses to identify the N status of European landscapes.
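
The retention figure quoted above follows from simple budget arithmetic: retention is the share of net anthropogenic N input that is not exported by the stream (i.e., stored in the catchment or lost to the atmosphere). A tiny sketch with placeholder numbers, not the catchment budgets from the study:

```python
def catchment_n_retention(net_anthropogenic_input, stream_export):
    """Fraction of net anthropogenic N input stored in the catchment or lost
    through atmospheric emissions (i.e., not exported by the stream)."""
    return (net_anthropogenic_input - stream_export) / net_anthropogenic_input

# Placeholder fluxes in kg N ha^-1 a^-1 (illustrative only)
print(f"{catchment_n_retention(net_anthropogenic_input=40.0, stream_export=15.0):.0%}")
```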

Relevance: 30.00%

Abstract:

Objectives: A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification.
Materials and methods: We used a set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons).
Results: Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type). We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [pi, 2pi) angle interval being particularly useful.
Conclusions: The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
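
The semi-supervised clustering idea above can be sketched as an EM algorithm for a Gaussian mixture in which labeled cells keep fixed (one-hot) cluster responsibilities while unlabeled cells receive soft responsibilities. The Python code below is a minimal diagonal-covariance version of that idea on synthetic data; it does not include the per-cluster feature-relevance estimation of the paper, and all data and parameters are illustrative.

```python
import numpy as np

def semi_supervised_gmm(x, labels, k, n_iter=100):
    """EM for a diagonal-covariance Gaussian mixture; labels >= 0 fix a point's
    responsibility to that component, labels == -1 mark unlabeled points."""
    n, d = x.shape
    resp = np.full((n, k), 1.0 / k)
    fixed = labels >= 0
    resp[fixed] = np.eye(k)[labels[fixed]]            # one-hot for labeled points
    for _ in range(n_iter):
        # M step: mixing weights, means, per-feature variances
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ x) / nk[:, None]
        var = (resp.T @ x ** 2) / nk[:, None] - means ** 2 + 1e-6
        # E step (unlabeled points only): posterior responsibilities
        log_like = -0.5 * (((x[:, None, :] - means[None]) ** 2 / var[None]).sum(-1)
                           + np.log(var[None]).sum(-1) + d * np.log(2 * np.pi))
        log_post = np.log(weights)[None] + log_like
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        resp[~fixed] = post[~fixed]
    return resp, means

# Synthetic morphometric features: 3 clusters, only clusters 0 and 1 partially labeled
rng = np.random.default_rng(6)
x = np.vstack([rng.normal(c, 0.5, size=(60, 4)) for c in (0.0, 3.0, 6.0)])
labels = np.full(len(x), -1)
labels[:30] = 0          # half of cluster 0 labeled
labels[60:90] = 1        # half of cluster 1 labeled
resp, means = semi_supervised_gmm(x, labels, k=3)
print("cluster sizes:", resp.sum(axis=0).round(1))
```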

Relevance: 30.00%

Abstract:

The present research work concerns the study of vertical vortex-induced vibrations (VIV) in bridges which show a certain sensitivity to this type of aeroelastic phenomenon. It focuses on the analysis of the wind-structure interaction mechanism on bluff sections, with the objective of properly characterising the problem and subsequently addressing the analysis of sections with a complex geometry, representative of the main bridge structural elements such as arches, decks, towers and piers. This issue is of particular importance during the bridge design phase, since minor details of the aforementioned elements can significantly influence the bridge's sensitivity to aerodynamic problems. The shape and main dimensions of the deck cross section, the addition of safety barriers and windshields, the presence of braces to enhance the structure's mechanical properties, the use of cross sections in tandem arrangement, or the erection of a new bridge in the vicinity of an existing one are some of the aspects to be considered regarding the sensitivity to aeroelastic effects.
The study has been carried out mainly through numerical simulations that reproduce the interaction between the airflow and representative cross sections of structural bridge models, using a CFD code based on the vortex particle method (VPM) and thus following a Lagrangian scheme. The results have been validated against existing experimental data, values from wind tunnel tests and full-scale records from different case studies: Alconétar (2006), Niterói (1980), Trans-Tokyo Bay (1995) and Volgograd (2010). Finally, a semi-empirical model is proposed for the estimation of the critical wind velocity ranges and oscillation amplitudes, based on the use of Scanlan’s flutter derivatives and the power spectral density of the aerodynamic force time history in the frequency domain.
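
One ingredient of such a semi-empirical model, the power spectral density of an aerodynamic force record, can be estimated with Welch's method. The sketch below does this for a synthetic lift-force signal and picks out the dominant (shedding) frequency; it is a generic signal-processing illustration with made-up numbers, not the model proposed in the thesis.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                                   # sampling frequency, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)
f_shed = 2.3                                 # assumed vortex-shedding frequency, Hz
rng = np.random.default_rng(7)
lift = 0.8 * np.sin(2 * np.pi * f_shed * t) + 0.2 * rng.standard_normal(t.size)

freqs, psd = welch(lift, fs=fs, nperseg=2048)
print(f"dominant frequency: {freqs[np.argmax(psd)]:.2f} Hz")
```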

Relevance: 30.00%

Abstract:

Rock mass characterization requires a deep geometric understanding of the discontinuity sets affecting rock exposures. Recent advances in Light Detection and Ranging (LiDAR) instrumentation currently allow quick and accurate 3D data acquisition, leading to the development of new methodologies for the automatic characterization of rock mass discontinuities. This paper presents a methodology for the identification and analysis of flat surfaces outcropping in a rocky slope using the 3D data obtained with LiDAR. The method identifies and defines the algebraic equations of the different planes of the rock slope surface by applying an analysis based on a neighbouring-points coplanarity test, finding principal orientations by kernel density estimation, and identifying clusters with the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. Different sources of information (synthetic and 3D scanned data) were employed, and a complete sensitivity analysis of the parameters was performed in order to identify the optimal values of the variables of the proposed method. In addition, the raw source files and the obtained results are freely provided in order to allow a more straightforward comparison of methods and more reproducible research.
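
The pipeline described can be sketched in two steps: estimate a normal vector for each point from its nearest neighbours (a local coplanarity/PCA test), then cluster the normals to recover the principal discontinuity orientations. The Python code below, using scikit-learn, is a simplified illustration of those two steps on a synthetic point cloud; it is not the authors' implementation and omits the kernel-density stage and the per-plane equation fitting.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def estimate_normals(points, k=12):
    """Normal of each point = eigenvector of the smallest eigenvalue of the
    covariance of its k nearest neighbours (local planarity assumption)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    normals = np.empty_like(points)
    ref = np.array([1.0, 1.0, 1.0])
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)
        eigval, eigvec = np.linalg.eigh(cov)
        n = eigvec[:, 0]                         # smallest-eigenvalue direction
        normals[i] = n if n @ ref >= 0 else -n   # resolve the sign ambiguity consistently
    return normals

# Synthetic cloud: two noisy planes with different orientations
rng = np.random.default_rng(8)
u, v = rng.uniform(0, 1, (2, 400))
plane1 = np.column_stack([u, v, 0.02 * rng.standard_normal(400)])   # roughly horizontal
plane2 = np.column_stack([u, 0.02 * rng.standard_normal(400), v])   # roughly vertical
points = np.vstack([plane1, plane2])

normals = estimate_normals(points)
labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(normals)
for lab in np.unique(labels[labels >= 0]):
    mean_n = normals[labels == lab].mean(axis=0)
    print(f"set {lab}: {np.sum(labels == lab)} points, mean normal {np.round(mean_n, 2)}")
```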