936 results for Natural Catastrophe, Property Insurance, Loss Distribution, Truncated Data, Ruin Probability
Use of NeuroEyeCoach™ to Improve Eye Movement Efficacy in Patients with Homonymous Visual Field Loss
Abstract:
Acknowledgements: We would like to thank Sigrid Kenkel, Susanne Muller, Valentina Varalta, Cristina Fonte, Venecia Alb and Cristina Racasan who have contributed to data collection. Declaration of Interest: AS is Chief Science Officer of NovaVision Inc. NS has no conflict of interest. JZ is a member of the Scientific Advisory Board of NovaVision Inc. This study was supported by a NovaVision Inc. research grant to AS.
Abstract:
This paper examines the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from lost packets, greatly reducing the work TCP must do to recover from losses. Using FEC as a countermeasure to packet loss has a side-effect, however: it requires additional bandwidth. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are highly undesirable, so a balance must be struck between additional bandwidth and delay due to retransmissions. Our results show that when packet loss occurs, the throughput of data can be significantly improved by combining FEC with TCP, compared with relying solely on TCP for retransmissions. Furthermore, a case study applies the result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
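A minimal sketch of the underlying idea, not the paper's implementation: a single XOR parity packet computed over a block of k packets lets the receiver repair any one lost packet in that block without a TCP retransmission, at a bandwidth overhead of 1/k.

    # Single-parity XOR FEC over a block of k packets (illustrative only).
    import os

    def make_parity(packets):
        """XOR all equal-length packets into one parity packet."""
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                parity[i] ^= b
        return bytes(parity)

    def recover(received, parity):
        """Rebuild the single missing packet (None entry) from survivors and parity."""
        missing = bytearray(parity)
        for pkt in received:
            if pkt is not None:
                for i, b in enumerate(pkt):
                    missing[i] ^= b
        return bytes(missing)

    k, size = 4, 8
    block = [os.urandom(size) for _ in range(k)]
    parity = make_parity(block)        # bandwidth overhead: one extra packet per k
    received = block.copy()
    received[2] = None                 # one packet lost in transit
    assert recover(received, parity) == block[2]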
Abstract:
In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. The scheme combines erasure coding, H.264/AVC error resilience techniques and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data partitioning levels. The method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate and average packet burst loss length. A near-optimal algorithm is developed to solve the FEC assignment problem. Simulation results show that our scheme can effectively utilize network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
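Illustrative sketch only, not the paper's near-optimal assignment: distribute a fixed parity budget across video packets in proportion to an assumed per-packet importance score (e.g., higher for the I-frame or data partition A), so that more important packets receive more FEC protection.

    # Proportional parity allocation under a fixed budget (hypothetical importance scores).
    def allocate_fec(importance, parity_budget):
        total = sum(importance)
        # ideal fractional shares, rounded down, then hand out the remainder greedily
        alloc = [int(parity_budget * w / total) for w in importance]
        remainders = sorted(range(len(importance)),
                            key=lambda i: parity_budget * importance[i] / total - alloc[i],
                            reverse=True)
        for i in remainders[:parity_budget - sum(alloc)]:
            alloc[i] += 1
        return alloc

    # hypothetical GOP: importance falls from the I-frame towards later P/B frames
    importance = [10, 6, 4, 2, 1]
    print(allocate_fec(importance, parity_budget=12))   # -> [5, 3, 2, 1, 1]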
Abstract:
Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting offshore wind energy development, and routing ships to minimize the risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.
For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to identify the months that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
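A toy sketch of the site-scoring step, with invented values and a crude profitability proxy rather than the dissertation's actual layers: species density maps are collapsed into a single bird-sensitivity score per site via sensitivity weights, to be plotted against a profitability score in the tradeoff view.

    import numpy as np

    rng = np.random.default_rng(6)
    n_sites, n_species = 200, 5
    density = rng.gamma(2.0, 1.0, size=(n_sites, n_species))   # birds per site per species
    sensitivity = np.array([0.9, 0.4, 0.7, 0.2, 0.5])          # per-species OWED sensitivity weight

    bird_score = density @ sensitivity                          # conservation risk per site
    wind_speed = rng.normal(8.5, 0.8, n_sites)                  # mean wind speed at 90 m hub height
    grid_dist = rng.uniform(5, 80, n_sites)                     # km to transmission grid
    profit_score = wind_speed - 0.02 * grid_dist                # crude profitability proxy

    low_risk = np.argsort(bird_score)[:10]                      # low-risk candidates to inspect
    print(profit_score[low_risk].round(2))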
Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance points to the study areas. Varying a multiplier on the cost surface yields multiple routes with different costs to cetacean conservation versus costs to the transportation industry, measured as distance. As in the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
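Hedged sketch of the routing idea, not the dissertation's code: build a per-cell cost that blends plain distance with a cetacean-density penalty, run Dijkstra over the grid to find the least-cost route, and sweep the penalty multiplier to trace the conservation-versus-distance tradeoff. The density surface below is random stand-in data.

    import heapq
    import numpy as np

    def least_cost_route(cost, start, end):
        """Dijkstra over an 8-connected grid of per-cell traversal costs."""
        rows, cols = cost.shape
        dist = np.full(cost.shape, np.inf)
        dist[start] = cost[start]
        prev, pq = {}, [(cost[start], start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == end:
                break
            if d > dist[r, c]:
                continue
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + cost[nr, nc] * np.hypot(dr, dc)
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(pq, (nd, (nr, nc)))
        node, path = end, [end]
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1], dist[end]

    rng = np.random.default_rng(0)
    whale_density = rng.random((40, 60))          # stand-in for a cetacean density surface
    for multiplier in (0.0, 1.0, 5.0):            # higher multiplier = stronger avoidance
        cost = 1.0 + multiplier * whale_density   # 1.0 = plain distance cost per cell
        path, total = least_cost_route(cost, (0, 0), (39, 59))
        print(multiplier, len(path), round(float(total), 1))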
Essential inputs to these decision frameworks are the species distributions themselves. The two preceding chapters comprise species distribution models from the two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per U.S. Marine Mammal Protection Act requirements, the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.
In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
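A sketch of the thresholding step only, with simulated probabilities standing in for GAM output: pick the cut-off on the ROC curve that maximises Youden's J = TPR - FPR, which is equivalent to jointly minimising the false-positive and false-negative error rates.

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    y_true = rng.binomial(1, 0.3, size=1000)                          # presence/absence
    p_hat = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, 1000), 0, 1)  # stand-in predictions

    fpr, tpr, thresholds = roc_curve(y_true, p_hat)
    best = np.argmax(tpr - fpr)                                       # Youden's J statistic
    threshold = thresholds[best]
    presence_map = (p_hat >= threshold).astype(int)                   # binary presence/absence
    print(f"optimal threshold = {threshold:.3f}, TPR = {tpr[best]:.2f}, FPR = {fpr[best]:.2f}")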
For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise risks associated with growth in container ship and oil tanker traffic in British Columbia’s continental shelf waters.
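A hedged, textbook-style sketch of conventional distance sampling, not the Raincoast analysis itself: fit a half-normal detection function to perpendicular distances (truncation ignored in the simple MLE used here), estimate average detection probability within the truncation distance w, and scale counts up to a stratum abundance. All numbers are hypothetical.

    import numpy as np
    from scipy import integrate

    # hypothetical line-transect data for one stratum
    x = np.array([12, 35, 60, 5, 48, 90, 22, 71, 15, 40.])   # perpendicular distances (m)
    L = 120_000.0        # total transect length surveyed (m)
    w = 150.0            # truncation distance (m)
    A = 2.0e9            # stratum area (m^2)

    sigma = np.sqrt(np.mean(x**2))                            # half-normal MLE (no truncation)
    g = lambda d: np.exp(-d**2 / (2 * sigma**2))              # detection function, g(0) = 1
    p_hat, _ = integrate.quad(g, 0, w)
    p_hat /= w                                                # mean detection prob. in the strip
    D_hat = len(x) / (2 * w * L * p_hat)                      # animals per m^2
    N_hat = D_hat * A                                         # stratum abundance
    print(f"p_hat = {p_hat:.2f}, D_hat = {D_hat:.2e} per m^2, N_hat = {N_hat:.0f}")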
Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite-derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance, which can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
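An illustration of the latent structure representation this chapter builds on (with made-up parameter values): the joint pmf of p categorical variables is written as a sum over k latent classes of products of within-class marginals, i.e. a nonnegative rank-k (PARAFAC-type) factorization of the probability tensor.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)
    p, d, k = 3, 4, 2                     # 3 variables, 4 levels each, 2 latent classes
    nu = rng.dirichlet(np.ones(k))        # class weights
    lam = rng.dirichlet(np.ones(d), size=(p, k))   # lam[j, h] = pmf of variable j in class h

    # assemble the full d^p probability tensor from the rank-k factorization
    P = np.zeros((d,) * p)
    for cell in product(range(d), repeat=p):
        P[cell] = sum(nu[h] * np.prod([lam[j, h, cell[j]] for j in range(p)]) for h in range(k))

    assert np.isclose(P.sum(), 1.0)       # a valid joint pmf
    print(P.shape, P.sum())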
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and give a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
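A minimal sketch of the paradigm's basic statistic: record the waiting times between exceedances of a high threshold in a time series and summarise their distribution. The series here is a simulated AR(1) process, not the max-stable velocity processes constructed in the chapter.

    import numpy as np

    rng = np.random.default_rng(3)
    n, phi = 50_000, 0.7
    x = np.zeros(n)
    for t in range(1, n):                         # AR(1) with temporal dependence
        x[t] = phi * x[t - 1] + rng.standard_normal()

    u = np.quantile(x, 0.98)                      # high threshold
    exceed_times = np.flatnonzero(x > u)          # indices of threshold exceedances
    waits = np.diff(exceed_times)                 # waiting times between exceedances
    print(f"threshold = {u:.2f}, exceedances = {exceed_times.size}, "
          f"mean wait = {waits.mean():.1f}, median wait = {np.median(waits)}")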
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
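A hedged sketch of the truncated-normal (Albert-Chib) data augmentation sampler for a probit model, to make concrete the kind of chain whose slow mixing the chapter analyses; this is a textbook intercept-only version with a flat prior and simulated rare-event data, not the chapter's code or dataset.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n, n_success = 10_000, 20                    # large n, few successes: the rare-event regime
    y = np.zeros(n)
    y[:n_success] = 1.0

    iters = 2_000
    beta = 0.0
    draws = np.empty(iters)
    for it in range(iters):
        # z_i | beta, y_i ~ N(beta, 1) truncated to (0, inf) if y_i = 1, (-inf, 0] if y_i = 0
        a = np.where(y == 1, -beta, -np.inf)
        b = np.where(y == 1, np.inf, -beta)
        z = stats.truncnorm.rvs(a, b, loc=beta, scale=1.0, random_state=rng)
        # beta | z ~ N(mean(z), 1/n) under a flat prior on the intercept
        beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
        draws[it] = beta

    lag1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]
    print(f"posterior mean intercept = {draws[500:].mean():.3f}, lag-1 autocorrelation = {lag1:.3f}")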
Abstract:
Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, requiring data collectors to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate values into cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.
The first method limits the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. An illustration using a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
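A hedged illustration of the key device behind fixed-margin synthesis: independent Poisson counts conditioned on their total are multinomial with probabilities proportional to the Poisson rates, so synthetic cells can be drawn to sum exactly to the published total. The counts and rates below are made up; the thesis estimates the rates from a mixture of Poissons.

    import numpy as np

    rng = np.random.default_rng(5)
    original = np.array([120, 45, 30, 5])             # confidential counts in 4 cells
    rates = np.array([110.0, 50.0, 32.0, 8.0])        # fitted Poisson means for those cells

    total = original.sum()                            # published marginal total, must be preserved
    synthetic = rng.multinomial(total, rates / rates.sum())
    assert synthetic.sum() == total
    print(original, synthetic, sep="\n")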
The second method releases synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals, whose basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach in limiting the posterior disclosure risk.
The third method imputes missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the focused side can help improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
Abstract:
Eolian dust is a significant source of iron and other nutrients that are essential for the health of marine ecosystems and potentially a controlling factor of the high nutrient-low chlorophyll status of the Subarctic North Pacific. We map the spatial distribution of dust input using three different geochemical tracers of eolian dust, 4He, 232Th and rare earth elements, in combination with grain size distribution data, from a set of core-top sediments covering the entire Subarctic North Pacific. Using the suite of geochemical proxies to fingerprint different lithogenic components, we deconvolve eolian dust input from other lithogenic inputs such as volcanic ash, ice-rafted debris, riverine and hemipelagic input. While the open ocean sites far away from the volcanic arcs are dominantly composed of pure eolian dust, lithogenic components other than eolian dust play a more crucial role along the arcs. In sites dominated by dust, eolian dust input appears to be characterized by a nearly uniform grain size mode at ~4 µm. Applying the 230Th-normalization technique, our proxies yield a consistent pattern of uniform dust fluxes of 1-2 g/m²/yr across the Subarctic North Pacific. Elevated eolian dust fluxes of 2-4 g/m²/yr characterize the westernmost region off Japan and the southern Kurile Islands south of 45° N and west of 165° E along the main pathway of the westerly winds. The core-top based dust flux reconstruction is consistent with recent estimates based on dissolved thorium isotope concentrations in seawater from the Subarctic North Pacific. The dust flux pattern compares well with state-of-the-art dust model predictions in the western and central Subarctic North Pacific, but we find that dust fluxes are higher than modeled fluxes by 0.5-1 g/m²/yr in the northwest, northeast and eastern Subarctic North Pacific. Our results provide an important benchmark for biogeochemical models and a robust approach for downcore studies testing dust-induced iron fertilization of past changes in biological productivity in the Subarctic North Pacific.
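A hedged worked example of the 230Th-normalization step described above, using illustrative numbers rather than the study's data and a commonly cited 230Th production constant: the preserved vertical flux is beta times water depth divided by the initial excess-230Th activity, and the eolian fraction is scaled by the sediment 232Th content relative to an assumed upper-crust dust end-member.

    beta = 0.0267        # 230Th production in the water column, dpm m^-3 yr^-1 (standard value)
    depth = 5000.0       # water depth, m (hypothetical site)
    xs_th230 = 10.0      # initial excess 230Th activity of the sediment, dpm g^-1 (assumed)
    th232_sed = 1.2      # 232Th concentration of the bulk sediment, ppm (assumed)
    th232_dust = 11.0    # assumed 232Th concentration of the eolian dust end-member, ppm

    preserved_flux = beta * depth / xs_th230              # total vertical flux, g m^-2 yr^-1
    dust_flux = preserved_flux * th232_sed / th232_dust   # eolian dust flux, g m^-2 yr^-1
    print(f"total flux = {preserved_flux:.1f} g/m2/yr, dust flux = {dust_flux:.1f} g/m2/yr")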
Abstract:
Over 150 million cubic meters of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.
Abstract:
1. Desmoscolecida from the continental slope and the deep-sea bottom (59-4354 m) off the Portuguese and Moroccan coasts are described. 18 species were identified: Desmoscolex bathyalis sp. nov., D. chaetalatus sp. nov., D. eftus sp. nov., D. galeatus sp. nov., D. lapilliferus sp. nov., D. longisetosus Timm, 1970, D. lorenzeni sp. nov., D. perspicuus sp. nov., D. pustulatus sp. nov., Quadricoma angulocephala sp. nov., Q. brevichaeta sp. nov., Q. iberica sp. nov., Q. loricatoides sp. nov., Tricoma atlantica sp. nov., T. bathycola sp. nov., T. beata sp. nov., T. incomposita sp. nov., T. meteora sp. nov., T. mauretania sp. nov. 2. The following new terms are proposed: "Desmos" (ring-shaped concretions consisting of secretion and concretion particles), "desmoscolecoid" and "tricomoid" arrangement of the somatic setae, and "regelmäßige" (regular), "unregelmäßige" (irregular), "vollständige" (complete) and "unvollständige" (incomplete) arrangement of the somatic setae (variations in the desmoscolecoid arrangement of the somatic setae). The length of the somatic setae is given in the setal pattern. 3. Desmoscolecida identical in genus and species exhibit no morphological differences even when they come from different bathymetric zones (deep sea, sublittoral, littoral) or different environments (marine, freshwater, coastal subsoil water, terrestrial). 4. Lorenzen's (1969) contention that the arrangement of the somatic setae is more significant for the natural relationships between the different genera of Desmoscolecida than other characteristics is further confirmed. Species with tricomoid arrangement of the somatic setae are regarded as primitive; species with desmoscolecoid arrangement of the somatic setae are regarded as more advanced. 5. Three new genera are established: Desmogerlachia gen. nov., Desmolorenzenia gen. nov. and Desmotimmia gen. nov. Protricoma Timm, 1970 is synonymized with Paratricoma Gerlach, 1964 and Protodesmoscolex Timm, 1970 is synonymized with Desmoscolex Claparede, 1863. 6. Checklists of all species of the order Desmoscolecida and keys to species of the subfamilies Tricominae and Desmoscolecinae are provided. 7. The following nomenclatural changes are suggested: Desmogerlachia papillifer (Gerlach, 1956) comb. nov., D. pratensis (Lorenzen, 1969) comb. nov., Desmotimmia mirabilis (Timm, 1970) comb. nov., Paratricoma squamosa (Timm, 1970) comb. nov., Desmolorenzenia crassicauda (Timm, 1970) comb. nov., D. desmoscolecoides (Timm, 1970) comb. nov., D. eurycricus (Filipjev, 1922) comb. nov., D. frontalis (Gerlach, 1952) comb. nov., D. hupferi (Steiner, 1916) comb. nov., D. longicauda (Timm, 1970) comb. nov., D. parva (Timm, 1970) comb. nov., D. platycricus (Steiner, 1916) comb. nov., D. vittata (Lorenzen, 1969) comb. nov., Desmoscolex antarcticus (Timm, 1970) comb. nov.
Abstract:
With the accumulation of anthropogenic carbon dioxide (CO2), a progressive decline in seawater pH has been induced that is referred to as ocean acidification. The ocean's capacity for CO2 storage is strongly affected by biological processes whose feedback potential is difficult to evaluate. The main source of CO2 in the ocean is the decomposition and subsequent respiration of organic molecules by heterotrophic bacteria. However, very little is known about the potential effects of ocean acidification on bacterial degradation activity. This study reveals that the degradation of polysaccharides, a major component of marine organic matter, by bacterial extracellular enzymes was significantly accelerated during experimental simulation of ocean acidification. Results were obtained from pH perturbation experiments in which rates of extracellular alpha- and beta-glucosidase were measured and the loss of neutral and acidic sugars from phytoplankton-derived polysaccharides was determined. Our study suggests that a faster bacterial turnover of polysaccharides at lowered ocean pH has the potential to reduce carbon export and to enhance respiratory CO2 production in the future ocean.
Abstract:
This chapter discusses the formation and distribution of some metals in ocean-floor manganese nodules in the light of the observed data in the literature and thermodynamic and kinetic considerations of the oxidation of metal ions in the oceanic environment. There are, in general, two major schools of thought on the mechanism of incorporation of the minor elements such as nickel, copper, and cobalt with the major elements such as manganese and iron. One is the lattice substitution mechanism and the other the adsorption mechanism. If the mechanism is lattice substitution, extraction of the metal ions is not possible unless the lattice of the major elements is first broken and exchanged with other ions from the bulk solution. Consequently, the leaching behavior of minor elements should display a very close relationship with that of major elements.
Abstract:
A preliminary set of heavy metal analyses from surface sediment samples covering the whole Adriatic Basin is presented, and their significance in terms of pollution is discussed. The core samples were analysed for Fe, Mn, Cr, Cu, Ni, Pb, Zn, P, organic carbon, Ca- and Mg-carbonate, and their mineralogical composition and grain size distribution were determined. All heavy metal concentrations found can be attributed to natural sedimentological processes and are not necessarily to be interpreted as indications of pollution.
Abstract:
The Arctic is responding more rapidly to global warming than most other areas on our planet. Northward flowing Atlantic Water is the major means of heat advection towards the Arctic and strongly affects the sea ice distribution. Records of its natural variability are critical for the understanding of feedback mechanisms and the future of the Arctic climate system, but continuous historical records reach back only ~150 years. Here, we present a multidecadal scale record of ocean temperature variations during the last 2000 years, derived from marine sediments off Western Svalbard (79°N). We find that early-21st-century temperatures of Atlantic Water entering the Arctic Ocean are unprecedented over the past 2000 years and are presumably linked to the Arctic Amplification of global warming.