837 results for ROBUST ESTIMATES


Relevance:

30.00%

Publisher:

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If such extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
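As a rough illustration of the robust-likelihood machinery this abstract describes, the sketch below shows how a Huber-type influence function caps the contribution of extreme sensor readings before covariance estimation. This is our own minimal example, not the authors' algorithm; the threshold c and the MAD-based standardization are conventional but assumed choices.

```python
import numpy as np

def huber_weights(residuals, c=1.345):
    """Huber influence weights: 1 inside [-c, c], c/|r| outside,
    so gross outliers contribute little to covariance estimation."""
    r = np.abs(residuals)
    return np.where(r <= c, 1.0, c / r)

rng = np.random.default_rng(0)
obs = rng.normal(size=200)
obs[:5] += 25.0                                  # sporadic sensor malfunctions
mad = np.median(np.abs(obs - np.median(obs)))
z = (obs - np.median(obs)) / (1.4826 * mad)      # robust standardization
w = huber_weights(z)
print(w[:5].mean(), w[5:].mean())                # outliers near 0, clean near 1
```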

Relevance:

30.00%

Publisher:

Abstract:

The sensitivity of the tropics to climate change, particularly the amplitude of glacial-to-interglacial changes in sea surface temperature (SST), is one of the great controversies in paleoclimatology. Here we reassess faunal estimates of ice age SSTs, focusing on the problem of no-analog planktonic foraminiferal assemblages in the equatorial oceans that confounds both classical transfer function and modern analog methods. A new calibration strategy developed here, which uses past variability of species to define robust faunal assemblages, solves the no-analog problem and reveals ice age cooling of 5° to 6°C in the equatorial current systems of the Atlantic and eastern Pacific Oceans. Classical transfer functions underestimated temperature changes in some areas of the tropical oceans because core-top assemblages misrepresented the ice age faunal assemblages. Our finding is consistent with some geochemical estimates and model predictions of greater ice age cooling in the tropics than was inferred by Climate: Long-Range Investigation, Mapping, and Prediction (CLIMAP) [1981] and thus may help to resolve a long-standing controversy. Our new foraminiferal transfer function suggests that such cooling was limited to the equatorial current systems, however, and supports CLIMAP's inference of stability of the subtropical gyre centers.
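For context, the sketch below shows the generic modern analog idea these faunal SST methods build on: estimate SST at a fossil sample as the dissimilarity-weighted mean of the SSTs of its closest modern core-top analogs. This is a hedged illustration of the standard technique, not the new variability-based calibration introduced in the paper; the squared chord metric, k, and all data are conventional assumed choices.

```python
import numpy as np

def squared_chord(a, b):
    # dissimilarity between assemblages given as relative abundances
    return np.sum((np.sqrt(a) - np.sqrt(b)) ** 2, axis=-1)

def mat_sst(fossil, modern, modern_sst, k=10):
    d = squared_chord(modern, fossil)        # distance to each core top
    idx = np.argsort(d)[:k]                  # k best modern analogs
    w = 1.0 / np.maximum(d[idx], 1e-9)       # inverse-dissimilarity weights
    return np.sum(w * modern_sst[idx]) / np.sum(w)

rng = np.random.default_rng(1)
modern = rng.dirichlet(np.ones(20), size=500)   # 500 core tops, 20 species
sst = rng.uniform(0.0, 30.0, 500)               # core-top SSTs (deg C)
fossil = rng.dirichlet(np.ones(20))             # one fossil assemblage
print(f"estimated SST: {mat_sst(fossil, modern, sst):.1f} C")
```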

Relevance:

30.00%

Publisher:

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation, and offshore energy have historically been post hoc; i.e., the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit through synchronized variable and map views in online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights reflecting OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
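A minimal sketch of this weighted-overlay and tradeoff idea, with all arrays as illustrative placeholders rather than the dissertation's actual inputs: per-site conservation risk is the sensitivity-weighted sum of species densities, profitability is a toy index of wind speed and grid distance, and the Pareto set holds the sites no alternative beats on both axes.

```python
import numpy as np

rng = np.random.default_rng(2)
density = rng.random((100, 5))                     # 100 sites x 5 bird species
sensitivity = np.array([0.9, 0.4, 0.7, 0.2, 0.5])  # collision/displacement weights
impact = density @ sensitivity                     # conservation risk per site

wind = 7.0 + 2.0 * rng.random(100)                 # mean wind speed (m/s)
grid_km = 5.0 + 45.0 * rng.random(100)             # distance to transmission (km)
profit = wind ** 3 - 0.1 * grid_km                 # toy profitability index

pareto = [i for i in range(100)
          if not np.any((impact < impact[i]) & (profit > profit[i]))]
print(f"{len(pareto)} non-dominated candidate sites")
```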

Routing ships to avoid whale strikes (chapter 5) can similarly be viewed as a tradeoff, but it is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance points to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes that trade off cost to cetacean conservation against cost to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
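The least-cost routing step can be sketched with scikit-image's route_through_array on a toy grid; the random cost surface and the multiplier value below stand in for the cetacean risk surface and conservation weighting described in the chapter.

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(3)
whale_risk = rng.random((200, 300))        # toy density x status surface
multiplier = 5.0                           # conservation weight vs. distance
cost = 1.0 + multiplier * whale_risk       # 1.0 = per-cell distance baseline

start, end = (10, 10), (190, 290)          # e.g. port to study-area entrance
path, total = route_through_array(cost, start, end,
                                  fully_connected=True, geometric=True)
print(f"route of {len(path)} cells, total cost {total:.1f}")
```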

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models for the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., the parameters needed to estimate it, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to the continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes the false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing easy navigation of models by taxon, region, season, and data provider.
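The thresholding step can be illustrated with a short hedged sketch: given simulated occurrence labels and model probabilities, pick the ROC point maximizing Youden's J (true positive rate minus false positive rate), one common way to jointly minimize the two error rates. The data and the specific criterion are assumptions, not the chapter's exact recipe.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 1000)                         # presence/absence labels
p = np.clip(0.3 * y + 0.7 * rng.random(1000), 0, 1)  # toy GAM probabilities

fpr, tpr, thr = roc_curve(y, p)
best = np.argmax(tpr - fpr)                          # Youden's J statistic
presence_map = p >= thr[best]                        # presence vs. absence
print(f"optimal threshold: {thr[best]:.2f}")
```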

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic line-transect marine mammal surveys conducted by Raincoast Conservation Foundation over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007). Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potentially greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven useful in cases with fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, reporting for spring and autumn seasons for the first time (rather than summer alone), and providing new abundance estimates for the Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and the increased oil spill and ocean noise risks associated with growth in container ship and oil tanker traffic in British Columbia’s continental shelf waters.
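A hedged sketch of the CDS estimator at the core of such analyses: fit a half-normal detection function to perpendicular sighting distances by maximum likelihood, then divide the encounters by the effectively surveyed strip area. All numbers are simulated, and a production analysis would use dedicated software (e.g. the Distance packages) rather than this toy.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = np.abs(rng.normal(0.0, 0.4, 150))   # perpendicular distances (km)
w, L = 1.5, 500.0                       # truncation distance, total effort (km)
x = x[x <= w]

def esw(sigma):
    # effective strip width for half-normal g(d) = exp(-d^2 / (2 sigma^2))
    return sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)

def negloglik(sigma):
    return np.sum(x ** 2) / (2 * sigma ** 2) + x.size * np.log(esw(sigma))

sigma = minimize_scalar(negloglik, bounds=(0.01, 5.0), method="bounded").x
density = x.size / (2 * L * esw(sigma))   # animals per km^2
print(f"sigma {sigma:.2f} km, density {density:.2f} / km^2")
```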

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions that generalize in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, letting conservation, industry, and other stakeholders game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management, and dynamic ocean management.

Relevance:

30.00%

Publisher:

Abstract:

Robust joint modelling is an emerging field of research. With the advancement of electronic patient healthcare records, the popularity of joint modelling approaches, which provide simultaneous analysis of longitudinal and survival data, has grown rapidly in recent years. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint model: that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact down-weighted, thus providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the detrimental relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patients' survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with increasing contrast between the robust and standard joint models when a greater proportion of more extreme outliers is present. By illustrating the gains in efficiency and reliability of parameter estimates when outliers exist, the potential of robust joint modelling is evident. The research presented in this thesis highlights the benefits of, and stresses the need to utilise, a more robust approach to joint modelling in the presence of longitudinal outliers.
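The down-weighting mechanism can be seen in a few lines: in the EM fit of a t-distributed model, each observation receives a weight (nu + 1) / (nu + r²) from its standardized residual r, so gross longitudinal outliers contribute almost nothing. This is a generic illustration of the t-weighting idea, not the thesis's full joint model; nu and the scale are assumed values.

```python
import numpy as np

def t_weights(residuals, scale, nu=4.0):
    """EM weights under a t(nu) error model; small for extreme residuals."""
    r2 = (residuals / scale) ** 2
    return (nu + 1.0) / (nu + r2)

res = np.array([0.2, -0.5, 0.1, 8.0])   # last residual is a gross outlier
print(t_weights(res, scale=0.5))        # outlier weight is near zero
```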

Relevance:

20.00%

Publisher:

Abstract:

Super-elastic nitinol (NiTi) wires were exploited as highly robust supports for three distinct crosslinked polymeric ionic liquid (PIL)-based coatings in solid-phase microextraction (SPME). The oxidation of NiTi wires in a boiling H2O2 solution (30% w/w) and subsequent derivatization in vinyltrimethoxysilane (VTMS) allowed vinyl moieties to be appended to the surface of the support. UV-initiated on-fiber copolymerization of the vinyl-substituted NiTi support with monocationic ionic liquid (IL) monomers and dicationic IL crosslinkers produced a crosslinked PIL-based network covalently attached to the NiTi wire. This modification alleviated receding of the coating from the support, which was observed for an analogous crosslinked PIL applied to unmodified NiTi wires. A series of demanding extraction conditions, including extreme pH, pre-exposure to pure organic solvents, and high temperatures, were applied to investigate the versatility and robustness of the fibers. Acceptable precision for the model analytes was obtained for all fibers under these conditions. Method validation by examining the relative recovery of a homologous group of phthalate esters (PAEs) was performed in drip-brewed coffee (maintained at 60 °C) by direct-immersion SPME. Acceptable recoveries were obtained for most PAEs at the part-per-billion level, even in this exceedingly harsh and complex matrix.

Relevance:

20.00%

Publisher:

Abstract:

In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assay. A complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization-type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus/AIDS (HIV-AIDS) study and several simulation studies.
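The damped exponential correlation (DEC) structure mentioned above has a compact form, corr(y_i, y_j) = phi1 ** (|t_i - t_j| ** phi2), which recovers AR(1)-type decay at phi2 = 1 and compound symmetry at phi2 = 0. The sketch below just evaluates it on irregular visit times, with illustrative parameter values.

```python
import numpy as np

def dec_corr(times, phi1=0.8, phi2=0.5):
    """Damped exponential correlation matrix for irregular times."""
    lags = np.abs(np.subtract.outer(times, times))
    return phi1 ** (lags ** phi2)

t = np.array([0.0, 0.3, 1.1, 2.4, 5.0])   # irregular visit times (years)
print(np.round(dec_corr(t), 3))
```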

Relevance:

20.00%

Publisher:

Abstract:

This study aims to estimate an adult-equivalent scale for calorie requirements and to determine the differences between adult-equivalent and per capita measurements of calorie availability in the Brazilian population. The study used data from the 2002-2003 Brazilian Household Budget Survey. The calorie requirement for a reference adult was based on the mean requirements for adult males and females (2,550 kcal/day). The conversion factors were defined as the ratios between the calorie requirements for each age group and gender and that of the reference adult. The adult-equivalent calorie availability levels were higher than the per capita levels, with the largest differences in rural and low-income households. Differences in household calorie availability varied from 22 kcal/day (households with adults and an adolescent) to 428 kcal/day (households with elderly individuals), showing that per capita measurements can underestimate the real calorie availability, since they overlook differences in household composition.
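A small worked example of the adult-equivalent arithmetic, with hypothetical member requirements (the survey uses age- and sex-specific values): each member's conversion factor is their requirement divided by the reference adult's 2,550 kcal/day, and household availability is divided by the sum of factors rather than by the head count.

```python
reference = 2550.0                                          # kcal/day, reference adult
requirements = {"man": 2550, "woman": 2200, "child": 1700}  # hypothetical values

factors = {k: v / reference for k, v in requirements.items()}
household_kcal = 6000.0                               # daily availability
per_capita = household_kcal / len(requirements)       # 2000 kcal/day
adult_equiv = household_kcal / sum(factors.values())  # ~2372 kcal/day
print(factors, per_capita, adult_equiv)               # equivalent > per capita
```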

Relevance:

20.00%

Publisher:

Abstract:

The objective was to evaluate the accuracy, precision, and robustness of estimates of apparent dry matter (DM) digestibility obtained using indigestible acid detergent fiber (iADF), indigestible neutral detergent fiber (iNDF), acid detergent lignin (ADL), indigestible ADL (iADL), and chromic oxide as markers, in comparison with the total fecal collection method. Eighteen sheep (56.5 ± 4.6 kg BW) were randomly assigned to diets composed of 25, 50, or 75% concentrate plus Coastcross hay for 25 days. Feces were collected for five days to determine apparent DM digestibility. Feed and fecal samples were incubated in the rumen of three cattle for 144 hours to obtain the indigestible fractions. Chromic oxide was administered at 4.0 g/animal/day. Accuracy was evaluated by comparing the mean bias (predicted DM digestibility minus observed DM digestibility) among markers; precision, by the root mean square error of prediction and the residual error; and robustness, by regressing the bias on dry matter intake, concentrate level, and body weight. Fecal recovery and the accuracy of the apparent DM digestibility estimates were highest for iADF, followed by iNDF, iADL, chromic oxide, and then ADL. The linear bias was significant only for iADF, iNDF, and iADL. Chromic oxide provided the most precise estimates of apparent DM digestibility. All markers were robust to variation in dry matter intake, and only iADL and chromic oxide were robust to dietary concentrate levels. Chromic oxide was not robust to variation in animal body weight. Thus, iADF is the most recommended marker for estimating apparent DM digestibility in sheep when the objective is to compare against literature data, whereas chromic oxide is more recommended when the objective is to compare treatments within a single experiment.
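A minimal sketch of the accuracy and precision criteria described above, on simulated values: the mean bias (predicted minus observed DM digestibility) measures accuracy, and the root mean square error of prediction measures precision; robustness would then be assessed by regressing the bias on intake, concentrate level, and body weight.

```python
import numpy as np

rng = np.random.default_rng(6)
observed = rng.uniform(0.60, 0.80, 18)             # total-collection DMD, 18 sheep
predicted = observed + rng.normal(0.01, 0.02, 18)  # marker-based estimates

bias = np.mean(predicted - observed)                   # accuracy
rmsep = np.sqrt(np.mean((predicted - observed) ** 2))  # precision
print(f"mean bias {bias:.3f}, RMSEP {rmsep:.3f}")
```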

Relevance:

20.00%

Publisher:

Abstract:

A combination of the trajectory sensitivity method and master-slave synchronization was proposed for parameter estimation of nonlinear systems. It was shown that master-slave coupling increases the robustness of the trajectory sensitivity algorithm with respect to the initial parameter guess. Since synchronization does not guarantee that the estimation process converges to the correct parameters, a conditional test was proposed that guarantees that the combined methodology estimates the true parameter values. This conditional test was successfully applied to Lorenz's and Chua's systems, and the proposed parameter estimation algorithm proved very robust with respect to initial parameter guesses and measurement noise in these examples.
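A rough sketch of the master-slave idea, assuming a Lorenz master with unknown sigma: the slave is driven by the master's x-signal, and a simple adaptive law nudges the slave's parameter so the coupling error shrinks. The coupling gain, adaptation gain, and Euler integration are illustrative choices, not the paper's trajectory sensitivity algorithm.

```python
import numpy as np

def lorenz(state, sigma, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, k, gamma = 0.002, 20.0, 0.5      # step, coupling gain, adaptation gain
sigma_true, sigma_hat = 10.0, 6.0    # true parameter vs. initial guess
m = np.array([1.0, 1.0, 1.0])        # master state
s = m.copy()                         # slave state

for _ in range(200_000):
    m = m + dt * lorenz(m, sigma_true)
    ds = lorenz(s, sigma_hat)
    ds[0] += k * (m[0] - s[0])       # master drives the slave via x
    s = s + dt * ds
    # adaptive law: parameter update driven by sync error times (y - x)
    sigma_hat += dt * gamma * (m[0] - s[0]) * (s[1] - s[0])

print(f"estimated sigma: {sigma_hat:.2f}")  # should drift toward 10
```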

Relevance:

20.00%

Publisher:

Abstract:

The present research was conducted to estimate the genetic trends for meat quality traits in a male broiler line. The traits analyzed were initial pH, pH at 6 h after slaughter, final pH, initial range of falling pH, final range of falling pH, lightness, redness, yellowness, weep loss, drip loss, shrink loss, and shear force. The number of observations varied between 618 and 2,125 for each trait. Genetic values were obtained by restricted maximum likelihood, and the numerator relationship matrix comprised 107,154 animals. The genetic trends were estimated by regressing the broilers' average genetic values on time (generations), with the regression coefficients giving the average genetic trend. Generally, small genetic trends were obtained for the traits analyzed, except for drip loss and shear force, which were higher. The small magnitude of the trends could be a consequence of the absence of selection for meat quality traits in the line analyzed. The estimated genetic trends indicated an improvement in the meat quality traits in the line analyzed, except for drip loss.
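The trend calculation itself is simple enough to sketch with simulated breeding values: average the predicted genetic values within each generation and regress those means on generation number; the slope is the genetic trend per generation.

```python
import numpy as np

rng = np.random.default_rng(7)
generation = np.repeat(np.arange(10), 200)                     # 10 generations
ebv = 0.8 * generation + rng.normal(0, 2.0, generation.size)   # toy genetic values

means = [ebv[generation == g].mean() for g in range(10)]
slope, _ = np.polyfit(np.arange(10), means, 1)
print(f"genetic trend: {slope:.3f} units/generation")
```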

Relevance:

20.00%

Publisher:

Abstract:

Mature weight breeding values were estimated using a multi-trait animal model (MM) and a random regression animal model (RRM). Data consisted of 82,064 weight records from 8,145 animals, recorded from birth to eight years of age. Weights at standard ages were considered in the MM. All models included contemporary groups as fixed effects, and age of dam (linear and quadratic effects) and animal age as covariates. In the RRM, mean trends were modelled through a cubic regression on orthogonal polynomials of animal age, and direct and maternal genetic and direct and maternal permanent environmental effects were also included as random. Legendre polynomials of orders 4, 3, 6, and 3 were used for the animal and maternal genetic and the animal and maternal permanent environmental effects, respectively, considering five classes of residual variances. Mature weight (five years) direct heritability estimates were 0.35 (MM) and 0.38 (RRM). The rank correlation between sires' breeding values estimated by MM and RRM was 0.82. However, selecting the top 2% (12) or 10% (62) of the young sires based on the MM-predicted breeding values, respectively 71% and 80% of the same sires would be selected if RRM estimates were used instead. The RRM modelled the changes in the (co)variances with age adequately, and larger breeding value accuracies can be expected using this model.
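The random regression covariates can be sketched briefly: ages are mapped to [-1, 1], and Legendre polynomials evaluated at each standardized age form the basis multiplying an animal's random regression coefficients. The order-4 basis below is illustrative, as are the ages.

```python
import numpy as np
from numpy.polynomial import legendre

age = np.array([1.0, 2.0, 3.5, 5.0, 8.0])                # years
a = 2 * (age - age.min()) / (age.max() - age.min()) - 1  # standardize to [-1, 1]

order = 4
# column j holds the Legendre polynomial P_j evaluated at each age
Phi = np.column_stack([legendre.legval(a, np.eye(order + 1)[j])
                       for j in range(order + 1)])
print(Phi.shape)   # (5 ages, 5 basis functions)
```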

Relevance:

20.00%

Publisher:

Abstract:

Data from the slaughter of 24,001 chickens that were part of a selection program for the production of commercial broilers were used to estimate genetic trends for absolute carcass (CW), breast meat (BRW), and leg (LW) weights, and for relative carcass (CY), breast meat (BRY), and leg (LY) weights. The (co)variance components and breeding values of individuals were obtained by the restricted maximum likelihood method applied to animal models. The relationship matrix comprised 132,442 birds. The models included maternal additive genetic and permanent environmental random effects for CW, BRW, LW, CY, and BRY, and only a maternal permanent environmental effect for LY, in addition to the direct additive genetic and residual effects; hatch week, parents' mating group, and sex were included as fixed effects. The estimates of genetic trend were obtained by regressing average breeding values on generation, with the regression coefficients giving the average genetic trend. The genetic trends for CW (+6.0336 g/generation), BRW (+3.6723 g/generation), LW (+1.5846 g/generation), CY (+0.1195%/generation), and BRY (+0.1388%/generation) were positive and in accordance with the objectives of the selection program for these traits. The genetic trend for LY (-0.0019%/generation) was negative, possibly due to the strong emphasis on selection for BRY and the negative correlation between these two traits.

Relevance:

20.00%

Publisher:

Abstract:

We report oxygen abundances determined from medium-resolution near-infrared spectroscopy for a sample of 57 carbon-enhanced metal-poor (CEMP) stars selected from the Hamburg/ESO Survey. The majority of our program stars exhibit oxygen-to-iron ratios in the range +0.5 < [O/Fe] < +2.0. The [O/Fe] values for this sample are statistically compared to available high-resolution estimates for known CEMP stars, as well as to high-resolution estimates for a set of carbon-normal metal-poor stars. Carbon, nitrogen, and oxygen abundance patterns for a sub-sample of these stars are compared with recent literature yield predictions for very metal-poor asymptotic giant branch (AGB) stars. We find that the majority of our sample exhibit patterns consistent with previously studied CEMP stars having s-process-element enhancements, and thus have very likely been polluted by carbon- and oxygen-enhanced material transferred from a metal-poor AGB companion.

Relevance:

20.00%

Publisher:

Abstract:

In-situ measurements in convective clouds (up to the freezing level) over the Amazon basin show that smoke from deforestation fires prevents clouds from precipitating until they acquire a vertical development of at least 4 km, compared to only 1-2 km in clean clouds. The average cloud depth required for the onset of warm rain increased by ~350 m for each additional 100 cloud condensation nuclei per cm³ at a supersaturation of 0.5% (CCN_0.5%). In polluted clouds, the diameter of modal liquid water content grows much more slowly with cloud depth (by at least a factor of ~2), due to the large number of droplets that compete for the available water and to suppressed coalescence processes. Contrary to what other studies have suggested, we did not observe this effect to saturate at 3000 or more accumulation-mode particles per cm³. The CCN_0.5% concentration was found to be a very good predictor of the cloud depth required for the onset of warm precipitation and of other microphysical factors, leaving only a secondary role for updraft velocities in determining the cloud drop size distributions. The effective radius of the cloud droplets (r_e) was found to be a quite robust parameter for a given environment and cloud depth, showing only a small effect of partial droplet evaporation from the cloud's mixing with its drier environment. This supports one of the basic assumptions of satellite analysis of cloud microphysical processes: the ability to look at different cloud top heights in the same region and regard their r_e as if they had been measured inside one well-developed cloud. The dependence of r_e on the adiabatic fraction decreased higher in the clouds, especially in cleaner conditions, and disappeared at r_e ≥ ~10 μm. We propose that droplet coalescence, which is at its peak when warm rain forms in the cloud at r_e ≈ 10 μm, continues to be significant during the cloud's mixing with entrained air, cancelling out the decrease in r_e due to evaporation.
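Read literally, the first quantitative result above is a linear rule of thumb; the sketch below encodes it under stated assumptions (a clean-cloud baseline depth and linearity over the observed CCN range, both our simplifications).

```python
def onset_depth_m(ccn_per_cm3, clean_depth_m=1500.0):
    """Cloud depth for warm-rain onset: ~350 m per extra 100 CCN at 0.5%."""
    return clean_depth_m + 3.5 * ccn_per_cm3

print(onset_depth_m(100), onset_depth_m(1000))   # ~1.85 km vs ~5.0 km clouds
```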

Relevance:

20.00%

Publisher:

Abstract:

We show a function that fits well the probability density of return times between two consecutive visits of a chaotic trajectory to finite-size regions in phase space. It deviates from exponential statistics by a small power-law term, a term that represents the deterministic manifestation of the dynamics. We also show how one can quickly and easily estimate the Kolmogorov-Sinai entropy and the short-term correlation function by observing highly probable returns. Our analyses are performed numerically in the Henon map and experimentally in a Chua's circuit. Finally, we discuss how our approach can be used to treat data from experimental complex systems and for technological applications.
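A minimal numerical sketch of the return-time statistics in the Henon map: iterate the map, record the gaps between consecutive visits to a small box on the attractor, and inspect the distribution of those gaps. The box size and iteration counts are illustrative.

```python
import numpy as np

a, b = 1.4, 0.3
x, y = 0.1, 0.1
for _ in range(1_000):                       # transient onto the attractor
    x, y = 1 - a * x * x + y, b * x
cx, cy, eps = x, y, 0.05                     # finite-size region on the attractor

returns, last = [], None
for n in range(1_000_000):
    x, y = 1 - a * x * x + y, b * x          # Henon map step
    if abs(x - cx) < eps and abs(y - cy) < eps:
        if last is not None:
            returns.append(n - last)
        last = n

tau = np.array(returns)
print(f"{tau.size} returns, mean return time {tau.mean():.0f}")
# the short-return tail of tau's histogram carries the power-law
# deviation from exponential statistics described above
```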