941 results for Chebyshev And Binomial Distributions
Abstract:
In this thesis, a numerical program has been developed to simulate the wave-induced ship motions in the time domain. Wave-body interactions have been studied for various ships and floating bodies through forced motion and free motion simulations in a wide range of wave frequencies. A three-dimensional Rankine panel method is applied to solve the boundary value problem for the wave-body interactions. The velocity potentials and normal velocities on the boundaries are obtained in the time domain by solving the mixed boundary integral equations in relation to the source and dipole distributions. The hydrodynamic forces are calculated by the integration of the instantaneous hydrodynamic pressures over the body surface. The equations of ship motion are solved simultaneously with the boundary value problem for each time step. The wave elevation is computed by applying the linear free surface conditions. A numerical damping zone is adopted to absorb the outgoing waves in order to satisfy the radiation condition for the truncated free surface. A numerical filter is applied on the free surface for the smoothing of the wave elevation. Good convergence has been reached for both forced motion simulations and free motion simulations. The computed added-mass and damping coefficients, wave exciting forces, and motion responses for ships and floating bodies are in good agreement with the numerical results from other programs and experimental data.
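As a rough illustration of the time stepping described above, the sketch below integrates a single heave degree of freedom, (M + A)·x'' + B·x' + C·x = F_exc(t), with a harmonic exciting force. All coefficients are illustrative placeholders, not values from the thesis; in the actual method the added mass, damping and exciting force come from the Rankine panel solution at each time step.

```python
import numpy as np

# Minimal 1-DOF heave sketch of time-domain ship-motion stepping:
# (M + A) * x'' + B * x' + C * x = F_exc(t).
# All coefficients are illustrative placeholders, not thesis values.
M, A, B, C = 1.0e6, 4.0e5, 2.0e5, 3.0e6   # mass, added mass, damping, restoring
omega, F0 = 0.8, 5.0e5                     # wave frequency and force amplitude

def accel(t, x, v):
    F_exc = F0 * np.cos(omega * t)         # harmonic wave exciting force
    return (F_exc - B * v - C * x) / (M + A)

dt, T = 0.05, 200.0
x, v = 0.0, 0.0
for step in range(int(T / dt)):            # explicit 4th-order Runge-Kutta
    t = step * dt
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6

print(f"heave after {T:.0f} s: {x:.4f} m")
```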
Abstract:
Although the Standard Cosmological Model is generally accepted by the scientific community, a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids (CV) are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity, but a gap between CV observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (SvdW); however, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations, which undermines the possibility of using voids as cosmological probes. The goal of our thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations consistently with theory, and we inspect the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model, Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We have then searched for a size function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for the unbiased tracers to a halo mock distribution does not provide successful results, suggesting that the Vdn model has to be reconsidered when dealing with biased samples. Thus, we test two alternative extensions of the model and find that two scaling relations exist: both the Dark Matter void radii and the underlying Dark Matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
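For reference, the two-barrier excursion-set multiplicity function at the core of both the SvdW and Vdn size functions (Sheth & van de Weygaert 2004; Jennings et al. 2013) is compact enough to sketch. The barrier values below are the conventional choices and the snippet is illustrative, not the thesis code.

```python
import numpy as np

# Two-barrier excursion-set multiplicity function behind the SvdW / Vdn
# void size functions.  delta_v and delta_c are the linear void and
# collapse barriers; the values below are the conventional choices.
delta_v, delta_c = -2.71, 1.686

def f_lnsigma(sigma, n_terms=500):
    """Fraction of mass in voids per unit ln(1/sigma)."""
    D = abs(delta_v) / (delta_c + abs(delta_v))
    x = D * sigma / abs(delta_v)
    j = np.arange(1, n_terms + 1)
    return np.sum(2.0 * j * np.pi * x**2 * np.sin(j * np.pi * D)
                  * np.exp(-(j * np.pi * x)**2 / 2.0))

# The Vdn model then converts f_lnsigma into a size function dn/dln(r) by
# dividing by the void volume *after* nonlinear expansion (r -> 1.7 r for
# delta_v = -2.71), which is what conserves the total volume in voids.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma = {sigma:.1f}:  f = {f_lnsigma(sigma):.4f}")
```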
Abstract:
This work is an investigation into collimator designs for a deuterium-deuterium (DD) neutron generator, intended for an inexpensive and compact neutron imaging system that can be implemented in a hospital. The envisioned application is a spectroscopic imaging technique called neutron stimulated emission computed tomography (NSECT).
Previous NSECT studies have been performed using a Van de Graaff accelerator at the Triangle Universities Nuclear Laboratory (TUNL) at Duke University. This facility has provided invaluable research into the development of NSECT. To transition the current imaging method into a clinically feasible system, there is a need for a high-intensity fast neutron source that can produce collimated beams. The DD neutron generator from Adelphi Technologies Inc. is being explored as a possible candidate to provide the uncollimated neutrons. This DD generator is a compact source that produces 2.5 MeV fast neutrons with intensities of 10^12 n/s (4π). The neutron energy is sufficient to excite most isotopes of interest in the body, with the exception of carbon and oxygen. However, a special collimator is needed to collimate the 4π neutron emission into a narrow beam. This work describes the development and evaluation of a series of collimator designs to collimate the DD generator for narrow beams suitable for NSECT imaging.
A neutron collimator made of high-density polyethylene (HDPE) and lead was modeled and simulated using the GEANT4 toolkit. The collimator was designed as a 52 x 52 x 52 cm^3 HDPE block coupled with 1 cm lead shielding. Non-tapering (cylindrical) and tapering (conical) opening designs were modeled into the collimator to permit passage of neutrons. The shape, size, and geometry of the aperture were varied to assess the effects on the collimated neutron beam. The parameters varied were: inlet diameter (1-5 cm), outlet diameter (1-5 cm), aperture diameter (0.5-1.5 cm), and aperture placement (13-39 cm). For each combination of collimator parameters, the spatial and energy distributions of neutrons and gammas were tracked and analyzed to determine three performance parameters: neutron beam-width, primary neutron flux, and output quality. To evaluate these parameters, the simulated neutron beams were then regenerated for an NSECT breast scan involving a realistic breast lesion implanted in an anthropomorphic female phantom.
This work indicates potential for collimating and shielding a DD neutron generator for use in a clinical NSECT system. The proposed collimator designs produced a well-collimated neutron beam that can be used for NSECT breast imaging. The aperture diameter correlated strongly with the beam-width, with the collimated neutron beam-width being about 10% larger than the physical aperture diameter. In addition, a collimator opening consisting of a tapering inlet and a cylindrical outlet allowed greater neutron throughput than a simple cylindrical opening. The tapering inlet design can allow additional neutron throughput when the neck is placed farther from the source. On the other hand, the tapering designs also decrease output quality (i.e., more stray neutrons outside the primary collimated beam). All collimators are cataloged in terms of beam-width, neutron flux, and output quality. For a particular NSECT application, an optimal choice should be based on the collimator specifications listed in this work.
Abstract:
Computational fluid dynamic (CFD) studies of blood flow in cerebrovascular aneurysms have the potential to improve patient treatment planning by enabling clinicians and engineers to model patient-specific geometries and compute predictors and risks prior to neurovascular intervention. However, the use of patient-specific computational models in clinical settings is currently infeasible due to their complexity and their computationally intensive, time-consuming nature. An important factor contributing to this challenge is the choice of outlet boundary conditions, which often involves a trade-off between physiological accuracy, patient-specificity, simplicity and speed. In this study, we analyze how resistance and impedance outlet boundary conditions affect blood flow velocities, wall shear stresses and pressure distributions in a patient-specific model of a cerebrovascular aneurysm. We also use geometrical manipulation techniques to obtain a model of the patient’s vasculature prior to aneurysm development, and study how forces and stresses may have been involved in the initiation of aneurysm growth. Our CFD results show that the nature of the prescribed outlet boundary conditions is not as important as the relative distributions of blood flow through each outlet branch. As long as the appropriate parameters are chosen to keep these flow distributions consistent with physiology, resistance boundary conditions, which are simpler, easier to use and more practical than their impedance counterparts, are sufficient to study aneurysm pathophysiology, since they predict very similar wall shear stresses, time-averaged wall shear stresses, time-averaged pressures, and blood flow patterns and velocities. The only situations in which impedance boundary conditions should be prioritized are when pressure waveforms are being analyzed, or when local pressure distributions are being evaluated at specific time points, especially at peak systole, where the use of resistance boundary conditions leads to unnaturally large pressure pulses. In addition, we show that in this specific patient, the region of the blood vessel where the neck of the aneurysm developed was subject to abnormally high wall shear stresses, and that regions surrounding blebs on the aneurysmal surface were subject to low, oscillatory wall shear stresses. Computational models using resistance outlet boundary conditions may be suitable to study patient-specific aneurysm progression in a clinical setting, although several other challenges must be addressed before these tools can be applied clinically.
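As a minimal sketch of the resistance outlet boundary condition discussed above: each outlet imposes P = P_distal + R·Q, with R chosen so the time-averaged flow split across branches matches physiology. The waveform, split fractions and pressures below are illustrative assumptions, not patient data.

```python
import numpy as np

# Resistance outlet boundary condition sketch: each outlet imposes
# P = P_distal + R * Q, with R set from a target mean pressure and the
# physiological flow split.  All numbers below are illustrative.
t = np.linspace(0.0, 1.0, 200)                       # one cardiac cycle [s]
Q_in = 5.0e-6 * (1.0 + 0.6 * np.sin(2 * np.pi * t))  # inlet flow [m^3/s]

splits = {"branch_1": 0.6, "branch_2": 0.4}          # target flow fractions
P_mean, P_distal = 12000.0, 8000.0                   # mean/distal pressure [Pa]

for name, frac in splits.items():
    Q = frac * Q_in                                   # branch flow waveform
    R = (P_mean - P_distal) / np.mean(Q)              # resistance from targets
    P = P_distal + R * Q                              # outlet pressure waveform
    print(f"{name}: R = {R:.3e} Pa*s/m^3, "
          f"P range = {P.min():.0f}-{P.max():.0f} Pa")
```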
Abstract:
I explore and analyze the problem of finding the socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire-sale externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults amongst the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.
In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.
I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the results of the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.
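For context, the equilibrium interbank payments in direct-exposure models of this kind are commonly formalized as an Eisenberg-Noe clearing vector; a minimal sketch on a toy three-bank network (numbers illustrative) follows.

```python
import numpy as np

# Eisenberg-Noe clearing-vector fixed point underlying direct-exposure
# contagion models.  L[i, j] is the nominal liability of bank i to bank j;
# e holds outside assets.  The toy numbers are illustrative only.
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.0, 0.5, 2.0])

p_bar = L.sum(axis=1)                       # total nominal obligations
with np.errstate(invalid="ignore"):
    Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # payout shares

p = p_bar.copy()                            # start from full payment
for _ in range(1000):                       # Picard iteration to fixed point
    p_new = np.minimum(p_bar, np.maximum(0.0, e + Pi.T @ p))
    if np.allclose(p_new, p, atol=1e-12):
        break
    p = p_new

print("clearing payments:", np.round(p, 4))
print("defaulting banks: ", np.where(p < p_bar - 1e-9)[0])
```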
Abstract:
The Amazon Basin plays a key role in atmospheric chemistry, biodiversity and climate change. In this study we applied nanoelectrospray (nanoESI) ultra-high-resolution mass spectrometry (UHRMS) to the analysis of the organic fraction of PM2.5 aerosol samples collected during dry and wet seasons at a site in central Amazonia receiving background air masses, biomass burning and urban pollution. Comprehensive mass spectral data evaluation methods (e.g. Kendrick mass defect, Van Krevelen diagrams, carbon oxidation state and aromaticity equivalent) were used to identify compound classes and mass distributions of the detected species. Nitrogen- and/or sulfur-containing organic species contributed up to 60 % of the total identified number of formulae. A large number of molecular formulae in organic aerosol (OA) were attributed to later-generation nitrogen- and sulfur-containing oxidation products, suggesting that OA composition is affected by biomass burning and other, potentially anthropogenic, sources. The isoprene-derived organosulfate IEPOX-OS was found to be the dominant ion in most of the analysed samples and closely followed the concentration trends of the gas-phase anthropogenic tracers, confirming its mixed anthropogenic–biogenic origin. The presence of oxidised aromatic and nitro-aromatic compounds in the samples suggested a strong influence from biomass burning, especially during the dry period. Aerosol samples from the dry period and under enhanced biomass burning conditions contained a larger number of molecules with high carbon oxidation state and an increased number of aromatic compounds compared to those from the wet period. The results of this work demonstrate that the studied site is influenced not only by biogenic emissions from the forest but also by biomass burning and potentially other anthropogenic emissions from the neighbouring urban environments.
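Of the data evaluation methods listed, the Kendrick mass defect is compact enough to illustrate: masses are rescaled so that the CH2 unit weighs exactly 14, after which members of a homologous series share the same defect. A minimal sketch with made-up masses:

```python
# Kendrick mass defect (CH2 base): rescale IUPAC masses so CH2 = 14.00000
# exactly; members of a CH2 homologous series then share the same defect.
CH2 = 14.01565  # exact mass of the CH2 unit

def kendrick_mass_defect(m):
    km = m * 14.0 / CH2          # Kendrick mass
    return round(km) - km        # defect: nominal minus Kendrick mass

# Illustrative homologous series (each member one CH2 heavier):
for m in (200.1776, 214.1933, 228.2089):
    print(f"m = {m:.4f}  KMD = {kendrick_mass_defect(m):+.4f}")
```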
Abstract:
The formation of industrial clusters is critical for sustained economic growth. We identify the manufacturing clusters in Vietnam, using the Mori and Smith (2013) method, which indicates the spatial pattern of industrial agglomerations using the global extent (GE) and local density (LD) indices. Spatial pattern identification is extremely helpful because industrial clusters are often spread over a wide geographical area and the GE and LD indices—along with cluster mapping—display how the respective clusters fit into specific spatial patterns.
Abstract:
Seasonal depth-stratified plankton tows, sediment traps and core tops taken from the same stations along a transect at 29°N off NW Africa are used to describe the seasonal succession, the depth habitats and the oxygen isotope ratios (δ18O(shell)) of five planktic foraminiferal species. Both the δ18O(shell) and shell concentration profiles show variations in seasonal depth habitats of individual species. None of the species maintains a specific habitat depth exclusively within the surface mixed layer (SML), within the thermocline, or beneath the thermocline. Globigerinoides ruber (white) and (pink) occur with moderate abundance throughout the year along the transect, with highest abundances in the winter and summer/fall seasons, respectively. The average δ18O(shell) of G. ruber (w) from surface sediments is similar to the δ18O(shell) values measured from the sediment-trap samples during winter. However, the δ18O(shell) of G. ruber (w) underestimates sea surface temperature (SST) by 2 °C in winter and by 4 °C during summer/fall, indicating an extension of the calcification/depth habitat into colder thermocline waters. Globigerinoides ruber (p) continues to calcify below the SML as well, particularly in summer/fall when the chlorophyll maximum is found within the thermocline. Its vertical distribution results in δ18O(shell) values that underestimate SST by 2 °C. Shell fluxes of Globigerina bulloides are highest in summer/fall, when it lives and calcifies in association with the deep chlorophyll maximum found within the thermocline. Pulleniatina obliquiloculata and Globorotalia truncatulinoides, dwelling and calcifying a part of their lives in the winter SML, record winter thermocline (~180 m) and deep surface water (~350 m) temperatures, respectively. Our observations define the seasonal and vertical distribution of multiple species of foraminifera and the acquisition of their δ18O(shell).
Abstract:
We present measurements of pCO2, O2 concentration, biological oxygen saturation (ΔO2/Ar) and N2 saturation (ΔN2) in Southern Ocean surface waters during austral summer, 2010-2011. Phytoplankton biomass varied strongly across distinct hydrographic zones, with high chlorophyll a (Chla) concentrations in regions of frontal mixing and sea-ice melt. pCO2 and ΔO2/Ar exhibited large spatial gradients (range 90 to 450 µatm and -10 to 60%, respectively) and co-varied strongly with Chla. However, the ratio of biological O2 accumulation to dissolved inorganic carbon (DIC) drawdown was significantly lower than expected from photosynthetic stoichiometry, reflecting the differential time-scales of O2 and CO2 air-sea equilibration. We measured significant oceanic CO2 uptake, with a mean air-sea flux (~ -20 mmol m^-2 d^-1) that significantly exceeded regional climatological values. N2 was mostly supersaturated in surface waters (mean ΔN2 of +2.5 %), while physical processes resulted in both supersaturation and undersaturation of mixed layer O2 (mean ΔO2phys = 2.1 %). Box model calculations were able to reproduce much of the spatial variability of ΔN2 and ΔO2phys along the cruise track, demonstrating significant effects of air-sea exchange processes (e.g. atmospheric pressure changes and bubble injection) and mixed layer entrainment on surface gas disequilibria. Net community production (NCP) derived from entrainment-corrected surface ΔO2/Ar data ranged from ~ -40 to > 300 mmol O2 m^-2 d^-1 and showed good coherence with independent NCP estimates based on seasonal mixed layer DIC deficits. Elevated NCP was observed in hydrographic frontal zones and regions of sea-ice melt with shallow mixed layer depths, reflecting the importance of mixing in controlling surface water light and nutrient availability.
Abstract:
Palmer Deep is a series of three glacially overdeepened basins on the Antarctic Peninsula shelf, ~20 km southwest of Anvers Island. Site 1098 (64°51.72'S, 64°12.48'W) was drilled in the shallowest basin, Basin I, at 1012 m water depth. The sediment recovered was primarily laminated, siliceous, biogenic, pelagic muds alternating with siliciclastic hemipelagic sediments (Barker, Camerlenghi, Acton, et al., 1999). Sedimentation rates of 0.1725 cm/yr in the upper 25 m and 0.7-0.80 cm/yr in the lower 25 m of the core have been estimated from 14C (Domack et al., 2001). The oldest datable sediments have an age of ~13 ka and were underlain by diamicton sediments of the last glacial maximum (Domack et al., 2001). The large-scale water-mass distribution and circulation in the vicinity of Palmer Deep is dominated by Circumpolar Deep Water (CDW) below 200 m (Hofmann et al., 1996). Palmer Deep is too far from the coast to be influenced by glacial meltwater and cold-tongue generation associated with it (Domack and Williams, 1990; Dixon and Domack, 1991). Circulation patterns in the Palmer Deep area are not well understood, but evidence suggests southward flow across Palmer Deep from Anvers Island to Renaud Island (Kock and Stein, 1978). The water south of Anvers Island is nearly open with loose pack ice from February through May. The area is covered with sea ice beginning in June (Gloersen et al., 1992; Leventer et al., 1996). Micropaleontologic data from the work of Leventer et al. (1996) on a 9-m piston core has revealed circulation and climate patterns for the past 3700 yr in the Palmer Deep. The benthic foraminifer assemblage is dominated by two taxa, Bulimina aculeata and Bolivina pseudopunctata, which are inversely related. High relative abundances of B. aculeata occur cyclically over a period of ~230 yr. The assemblage associated with high abundance of B. aculeata in Palmer Deep resembles that from the Bellingshausen shelf, which is associated with CDW. In addition to the faunal evidence, hydrographic data indicate incursions of CDW into Palmer Deep (Leventer et al., 1996). A distinctive diatom assemblage dominated by a single genus was associated with peaks in B. aculeata, whereas a few different assemblages were associated with lows in B. aculeata. Leventer et al. (1996) interpreted the variability in diatom assemblages as an indication of changes in productivity associated with changes in water column stability. Abelmann and Gowing (1997) studied the horizontal and vertical distributions of radiolarians in the Atlantic sector of the Southern Ocean. They show that the spatial distribution of radiolarian assemblages reflects hydrographic boundaries. In a transect from the subtropical Atlantic to polar Antarctic zones, radiolarians in the upper 1000 m of the water column occurred in distinct surface and deep-living assemblages related to water depth, temperature, salinity, and nutrient content. Living assemblages resembled those preserved in underlying surface sediments (Abelmann and Gowing, 1997). Circumantarctic coastal sediments from neritic environments contained a distinctive assemblage dominated by the Phormacantha hystrix/Plectacantha oikiskos group and Rhizoplegma boreale (Nishimura et al., 1997). Low diversity and species compositions distinguished the coastal sediments from the typical pelagic Antarctic assemblages. Factors that controlled the assemblages were water depth, proximity to the coast, occurrence of sea ice, and steepness of topography, rather than temperature and salinity. 
Nishimura et al. (1997) found a gradient of sorts from deep-water sites containing diverse assemblages typical of pelagic environments to coastal sites with low diversity assemblages dominated by P. hystrix/P. oikiskos group and R. boreale. In general, sites between these two extremes had increased proportions of the coastal assemblage with decreasing water depth (Nishimura et al., 1997). At a site near Hole 1098 (GC905), they showed that the relative abundance of the coastal assemblage increased downcore (Nishimura et al., 1997). The purpose of the research presented here was to make a cursory investigation into the radiolarian assemblages as possible paleoenvironmental indicators.
Abstract:
The Dirichlet distribution is a multivariate generalization of the Beta distribution and an important multivariate continuous distribution in probability and statistics. In this report, we review the Dirichlet distribution and study its properties, including statistical and information-theoretic quantities involving this distribution. We also discuss relationships between the Dirichlet distribution and other distributions. There are several ways to generate random variables with a Dirichlet distribution; the stick-breaking approach and the Pólya urn method are discussed. In Bayesian statistics, the Dirichlet distribution and the generalized Dirichlet distribution can both serve as a conjugate prior for the Multinomial distribution. The Dirichlet distribution has many applications in different fields. We focus on the unsupervised learning of a finite mixture model based on the Dirichlet distribution. The Initialization Algorithm and the Dirichlet Mixture Estimation Algorithm are both reviewed for estimating the parameters of a Dirichlet mixture. Experimental results are shown for three applications: the estimation of artificial histograms, the summarization of image databases, and human skin detection.
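Two of the generative constructions mentioned above can be sketched directly; the parameter vector is arbitrary and the snippet is illustrative rather than the report's code:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 5.0])   # Dirichlet parameters (illustrative)

# View 1: normalized independent Gamma draws.
def dirichlet_gamma(alpha):
    g = rng.gamma(shape=alpha, scale=1.0)
    return g / g.sum()

# View 2: stick-breaking.  Break off a Beta(alpha_i, sum of the remaining
# alphas) fraction of the remaining stick at each step.
def dirichlet_stick(alpha):
    p, stick = np.empty(len(alpha)), 1.0
    for i in range(len(alpha) - 1):
        v = rng.beta(alpha[i], alpha[i + 1:].sum())
        p[i], stick = stick * v, stick * (1.0 - v)
    p[-1] = stick
    return p

print("gamma construction :", np.round(dirichlet_gamma(alpha), 3))
print("stick-breaking     :", np.round(dirichlet_stick(alpha), 3))
```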
Abstract:
BACKGROUND: A pretrial clinical improvement project for the BOOST-II UK trial of oxygen saturation targeting revealed an artefact affecting saturation profiles obtained from the Masimo SET Radical pulse oximeter.
METHODS: Saturation was recorded every 10 s for up to 2 weeks in 176 oxygen-dependent preterm infants in 35 UK and Irish neonatal units between August 2006 and April 2009 using Masimo SET Radical pulse oximeters. Frequency distributions of the percentage of time at each saturation were plotted. An artefact affecting the saturation distribution was found to be attributable to the oximeter's internal calibration algorithm. Revised software was installed, and the resulting saturation distributions were compared with those of four other current oximeters in paired studies.
RESULTS: There was a reduction in the frequency of saturation values of 87-90%. Values above 87% were elevated by up to 2%, giving a relative excess of higher values. The software revision eliminated this artefact, improving the distribution of saturation values. In paired comparisons with four current commercially available oximeters, Masimo oximeters with the revised software returned similar saturation distributions.
CONCLUSIONS: A characteristic of the software algorithm reduces the frequency of saturations of 87-90% and increases the frequency of higher values returned by the Masimo SET Radical pulse oximeter. This effect, which remains within the recommended standards for accuracy, is removed by installing revised software (board firmware V4.8 or higher). Because this observation is likely to influence oxygen targeting, it should be considered in the analysis of the oxygen trial results to maximise their generalisability.
Abstract:
In urban areas, interchange spacing and the adequacy of design for weaving, merge, and diverge areas can significantly influence available capacity. Traffic microsimulation tools allow detailed analyses of these critical areas in complex locations that often yield results that differ from the generalized approach of the Highway Capacity Manual. In order to obtain valid results, various inputs should be calibrated to local conditions. This project investigated basic calibration factors for the simulation of traffic conditions within an urban freeway merge/diverge environment. By collecting and analyzing urban freeway traffic data from multiple sources, specific Iowa-based calibration factors for use in VISSIM were developed. In particular, a repeatable methodology for collecting standstill distance and headway/time gap data on urban freeways was applied to locations throughout the state of Iowa. This collection process relies on the manual processing of video for standstill distances and individual vehicle data from radar detectors to measure the headways/time gaps. By comparing the data collected from different locations, it was found that standstill distances vary by location and lead-follow vehicle types. Headways and time gaps were found to be consistent within the same driver population and across different driver populations when the conditions were similar. Both standstill distance and headway/time gap were found to follow fairly dispersed and skewed distributions. Therefore, it is recommended that microsimulation models be modified to include the option for standstill distance and headway/time gap to follow distributions as well as be set separately for different vehicle classes. In addition, for the driving behavior parameters that cannot be easily collected, a sensitivity analysis was conducted to examine the impact of these parameters on the capacity of the facility. The sensitivity analysis results can be used as a reference to manually adjust parameters to match the simulation results to the observed traffic conditions. A well-calibrated microsimulation model can enable a higher level of fidelity in modeling traffic behavior and serve to improve decision making in balancing need with investment.
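As an illustration of the recommended distribution-based inputs, the sketch below fits a lognormal (one common choice for skewed, strictly positive traffic data) to a synthetic headway sample and reads off percentiles; both the data and the choice of distribution family are assumptions, not the project's measurements.

```python
import numpy as np
from scipy import stats

# Sketch of the recommended practice: represent headway/time-gap (and
# standstill-distance) inputs as fitted distributions rather than single
# values.  The sample below is synthetic stand-in data.
rng = np.random.default_rng(1)
headways = rng.lognormal(mean=0.3, sigma=0.4, size=500)   # fake field data [s]

shape, loc, scale = stats.lognorm.fit(headways, floc=0.0)  # fit the model
print(f"lognormal fit: sigma = {shape:.3f}, median = {scale:.3f} s")

# Percentiles of the fitted distribution, e.g. for car-following parameters:
for q in (0.15, 0.50, 0.85):
    print(f"{q:.0%} headway: {stats.lognorm.ppf(q, shape, loc, scale):.2f} s")
```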
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Valveless pulsejets are extremely simple aircraft engines; essentially cleverly designed tubes with no moving parts. These engines utilize pressure waves, instead of machinery, for thrust generation, and have demonstrated thrust-to-weight ratios over 8 and thrust specific fuel consumption levels below 1 lbm/lbf-hr – performance levels that can rival many gas turbines. Despite their simplicity and competitive performance, they have not seen widespread application due to extremely high noise and vibration levels, which have persisted as an unresolved challenge primarily due to a lack of fundamental insight into the operation of these engines. This thesis develops two theories for pulsejet operation (both based on electro-acoustic analogies) that predict measurements better than any previous theory reported in the literature, and then uses them to devise and experimentally validate effective noise reduction strategies. The first theory analyzes valveless pulsejets as acoustic ducts with axially varying area and temperature. An electro-acoustic analogy is used to calculate longitudinal mode frequencies and shapes for prescribed area and temperature distributions inside an engine. Predicted operating frequencies match experimental values to within 6% with the use of appropriate end corrections. Mode shapes are predicted and used to develop strategies for suppressing higher modes that are responsible for much of the perceived noise. These strategies are verified experimentally and via comparison to existing models/data for valveless pulsejets in the literature. The second theory analyzes valveless pulsejets as acoustic systems/circuits in which each engine component is represented by an acoustic impedance. These are assembled to form an equivalent circuit for the engine that is solved to find the frequency response. The theory is used to predict the behavior of two interacting pulsejet engines. It is validated via comparison to experiment and data in the literature. The technique is then used to develop and experimentally verify a method for operating two engines in anti-phase without interfering with thrust production. Finally, Helmholtz resonators are used to suppress higher order modes that inhibit noise suppression via anti-phasing. Experiments show that the acoustic output of two resonator-equipped pulsejets operating in anti-phase is 9 dBA less than the acoustic output of a single pulsejet.
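The first theory's core computation can be sketched compactly: split the duct into segments of prescribed area and temperature, chain the per-segment acoustic transfer matrices, and locate the longitudinal resonances from the end conditions. The geometry and temperature profile below are illustrative assumptions, not those of an actual engine.

```python
import numpy as np

# Duct-acoustics sketch: chain per-segment transfer matrices for a duct
# with axially varying area and temperature, then find the longitudinal
# resonances.  Geometry and temperatures below are illustrative only.
gamma, R, p0 = 1.4, 287.0, 101325.0

# Segments: (length [m], area [m^2], gas temperature [K])
segments = [(0.10, 8e-3, 1600.0),   # combustion zone: hot, wide
            (0.30, 3e-3, 1200.0),   # tailpipe, cooling along its length
            (0.30, 3e-3,  800.0)]

def T01(f):
    """(1,2) entry of the chained transfer matrix; zero at open-open resonance."""
    T = np.eye(2, dtype=complex)
    for l, S, temp in segments:
        c = np.sqrt(gamma * R * temp)        # local sound speed
        Zc = (p0 / (R * temp)) * c / S       # characteristic impedance rho*c/S
        k = 2 * np.pi * f / c
        Tseg = np.array([[np.cos(k * l), 1j * Zc * np.sin(k * l)],
                         [1j * np.sin(k * l) / Zc, np.cos(k * l)]])
        T = T @ Tseg
    return T[0, 1]

# Scan for sign changes of Im(T01): these bracket the resonance frequencies.
freqs = np.linspace(10.0, 2000.0, 20000)
vals = np.imag([T01(f) for f in freqs])
modes = freqs[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
print("predicted longitudinal modes [Hz]:", np.round(modes[:4], 1))
```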