960 results for Maximum Degree Proximity algorithm (MAX-DPA)


Relevance:

30.00%

Publisher:

Abstract:

Using bathymetric transects of surface sediments underlying similar sea surface temperatures but exposed to increasing dissolution, we examined the processes which affect the relationship between foraminiferal Mg/Ca and d18O. We found that Globigerinoides sacculifer calcifies over a relatively large range of water depths and that this is apparent in its Mg content. On the seafloor, foraminiferal Mg/Ca is substantially altered by dissolution, with the degree of alteration increasing with water depth. Selective dissolution of the chamber calcite, formed in surface waters, shifts the shell's bulk Mg/Ca and d18O toward the chemistries of the secondary crust acquired in colder thermocline waters. The magnitude of this shift depends on both the range of temperatures over which the shell calcified and the degree to which it is subsequently dissolved. In spite of this shift, the initial relationship between Mg/Ca and d18O, determined by their temperature dependence, is maintained. We conclude that paired measurements of d18O and Mg/Ca can be used for reconstructing d18Owater, though care must be taken to determine where in the water column the reconstruction applies.
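As an illustration of how paired Mg/Ca and d18O measurements yield d18Owater, the minimal Python sketch below applies one commonly used exponential Mg/Ca-temperature calibration and one common paleotemperature equation. The constants and the example values are assumptions chosen only for illustration, not calibrations or data from this study; as the abstract stresses, a real application must account for calcification depth and dissolution.

```python
import numpy as np

# Illustrative sketch (not from this study): reconstructing d18O_water from
# paired Mg/Ca and d18O_calcite. The calibration constants below are common
# published-style placeholder values, assumed here for demonstration only.
A, B = 0.09, 0.38          # assumed exponential Mg/Ca = B * exp(A * T) calibration

def temperature_from_mgca(mgca_mmol_mol):
    """Calcification temperature (deg C) from Mg/Ca (mmol/mol)."""
    return np.log(np.asarray(mgca_mmol_mol) / B) / A

def d18o_water(d18o_calcite_vpdb, temp_c):
    """d18O of seawater (SMOW) from calcite d18O (VPDB) and temperature.

    Rearranged from an assumed paleotemperature equation of the form
    T = 16.5 - 4.80 * (d18Oc - (d18Ow - 0.27)).
    """
    return d18o_calcite_vpdb + (temp_c - 16.5) / 4.80 + 0.27

# Hypothetical shell: Mg/Ca = 3.2 mmol/mol, d18O_calcite = -1.5 per mil
T = temperature_from_mgca(3.2)
print(round(T, 1), round(d18o_water(-1.5, T), 2))
```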

Relevance:

30.00%

Publisher:

Abstract:

The distribution of iron and manganese species in ocean sediments is examined along a section from the coast of Japan to the open Pacific Ocean. Total iron and reactive iron contents, as well as total manganese and Mn4+ contents, were determined. Total Fe content increases significantly in sediments from the coast to the pelagic zone without a noticeable increase in reactive Fe content. The presence of layers of volcanic and terrigenous coarse clastic material in the clayey sediments results in sharp changes in iron content. Manganese content increases more than tenfold from near-coastal to pelagic sediments, and the oxidation degree of the sediments also increases. Three types of bottom sediments can be distinguished by their contents of iron and manganese forms: reduced, oxidized (red clay), and transitional. Total Fe content remains almost unchanged with depth in the sediments, whereas reactive Fe content increases with depth in reduced sediments and decreases in oxidized ones. Manganese content in the red clay mass increases several times.

Relevance:

30.00%

Publisher:

Abstract:

The Antarctic Pack Ice Seal (APIS) Program was initiated in 1994 to estimate the abundance of four species of Antarctic phocids: the crabeater seal Lobodon carcinophaga, Weddell seal Leptonychotes weddellii, Ross seal Ommatophoca rossii and leopard seal Hydrurga leptonyx and to identify ecological relationships and habitat use patterns. The Atlantic sector of the Southern Ocean (the eastern sector of the Weddell Sea) was surveyed by research teams from Germany, Norway and South Africa using a range of aerial methods over five austral summers between 1996-1997 and 2000-2001. We used these observations to model densities of seals in the area, taking into account haul-out probabilities, survey-specific sighting probabilities and covariates derived from satellite-based ice concentrations and bathymetry. These models predicted the total abundance over the area bounded by the surveys (30°W and 10°E). In this sector of the coast, we estimated seal abundances of: 514 (95 % CI 337-886) x 10**3 crabeater seals, 60.0 (43.2-94.4) x 10**3 Weddell seals and 13.2 (5.50-39.7) x 10**3 leopard seals. The crabeater seal densities, approximately 14,000 seals per degree longitude, are similar to estimates obtained by surveys in the Pacific and Indian sectors by other APIS researchers. Very few Ross seals were observed (24 total), leading to a conservative estimate of 830 (119-2894) individuals over the study area. These results provide an important baseline against which to compare future changes in seal distribution and abundance.

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n grows enormously in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
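To make the latent structure (PARAFAC-type) representation concrete, the short Python sketch below builds the joint probability mass function of a few categorical variables from a rank-k latent class factorization. The dimensions and parameter values are arbitrary illustration choices; this is not the collapsed Tucker decomposition proposed in Chapter 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent class / PARAFAC-type factorization of a joint PMF:
#   P(x1,...,xp) = sum_h nu_h * prod_j lambda_j[h, x_j]
# with k latent classes and p categorical variables with d_j levels each.
p, k = 4, 3                      # 4 variables, 3 latent classes (illustrative)
d = [2, 3, 2, 4]                 # numbers of levels per variable

nu = rng.dirichlet(np.ones(k))                            # class weights
lam = [rng.dirichlet(np.ones(dj), size=k) for dj in d]    # lam[j][h, level]

# Build the full probability tensor (feasible only for small p and d_j).
P = np.zeros(d)
for idx in np.ndindex(*d):
    P[idx] = sum(nu[h] * np.prod([lam[j][h, idx[j]] for j in range(p)])
                 for h in range(k))

print(P.shape, P.sum())          # all 2*3*2*4 cell probabilities sum to 1

# The nonnegative rank of P is at most k; the factorization stores
# (k - 1) + k * sum_j (d_j - 1) free parameters instead of prod_j d_j - 1.
n_params = (k - 1) + k * sum(dj - 1 for dj in d)
print(n_params, np.prod(d) - 1)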

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
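For intuition only, the sketch below shows a generic Laplace (mode-plus-curvature) Gaussian approximation to the posterior of a Poisson log-linear model with a Gaussian prior. This is a different, simpler construction than the optimal Gaussian approximation under Diaconis-Ylvisaker priors derived in Chapter 4; it merely illustrates the idea of replacing an intractable posterior with a closed-form Gaussian. The model, prior variance, and data are assumptions for the example.

```python
import numpy as np

# Generic Laplace-style Gaussian approximation for a Poisson log-linear model
# with a N(0, tau^2 I) prior on the coefficients. NOT the optimal approximation
# of Chapter 4; shown only to illustrate a Gaussian posterior surrogate.
rng = np.random.default_rng(1)
n, q, tau2 = 200, 3, 10.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])   # design matrix
beta_true = np.array([0.5, 0.3, -0.2])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(q)
for _ in range(50):                       # Newton iterations to the posterior mode
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu) - beta / tau2
    hess = -X.T @ (mu[:, None] * X) - np.eye(q) / tau2
    step = np.linalg.solve(hess, grad)
    beta -= step
    if np.max(np.abs(step)) < 1e-10:
        break

cov = np.linalg.inv(-hess)                # Gaussian approximation: N(beta_mode, cov)
print(np.round(beta, 3))
print(np.round(np.sqrt(np.diag(cov)), 3))
```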

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
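The inferential framework just described is built on waiting times between exceedances of a high threshold. The small Python sketch below extracts those waiting times from a time series for a chosen threshold quantile; it is only the bookkeeping step of that paradigm (the estimators of extremal dependence in Chapter 5 are not reproduced), and the synthetic series is purely illustrative.

```python
import numpy as np

def exceedance_waiting_times(x, q=0.95):
    """Waiting times (in index units) between exceedances of the q-quantile."""
    x = np.asarray(x)
    u = np.quantile(x, q)                 # high threshold
    idx = np.flatnonzero(x > u)           # times of threshold exceedances
    return np.diff(idx), u

# Example on a synthetic AR(1) series (illustrative data only).
rng = np.random.default_rng(2)
eps = rng.normal(size=5000)
x = np.empty_like(eps)
x[0] = eps[0]
for t in range(1, len(eps)):
    x[t] = 0.7 * x[t - 1] + eps[t]

waits, u = exceedance_waiting_times(x, q=0.95)
print(len(waits), round(u, 2), round(waits.mean(), 1))
```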

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
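As a concrete example of an approximating kernel built from random subsets of data, the sketch below runs a naive Metropolis-Hastings step whose log-likelihood is estimated from a random subsample scaled by n/m. The model (Gaussian mean with flat prior), tuning values, and the n/m scaling are illustrative assumptions, not the specific approximations analyzed in Chapter 6.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative data: n observations from N(theta, 1), flat prior on theta.
n, theta_true = 100_000, 0.3
data = rng.normal(theta_true, 1.0, size=n)

def subsampled_loglik(theta, m=1_000):
    """Unadjusted subsample estimate of the log-likelihood (scaled by n/m).

    Plugging this into Metropolis-Hastings yields an *approximate* transition
    kernel of the general kind discussed above, not an exact one.
    """
    sub = data[rng.integers(0, n, size=m)]
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

def approx_mh(n_iter=2_000, step=0.01, m=1_000):
    theta = 0.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()
        # Accept/reject using the noisy subsampled log-likelihood.
        if np.log(rng.uniform()) < subsampled_loglik(prop, m) - subsampled_loglik(theta, m):
            theta = prop
        chain[t] = theta
    return chain

chain = approx_mh()
print(round(chain[500:].mean(), 3), round(chain[500:].std(), 4))
```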

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
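The truncated-normal data augmentation sampler named above can be sketched in a few lines; the Python code below runs it for probit regression on a synthetic rare-event dataset and reports a crude lag-1 autocorrelation of the intercept draws to illustrate the mixing issue. The data sizes, prior variance, and diagnostics are illustrative assumptions, not the simulations from Chapter 7.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Rare-event probit data: large n, very few successes (illustrative sizes).
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-3.0, 0.5])                  # intercept pushed far negative
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
print("successes:", int(y.sum()))

# Truncated-normal data augmentation: z_i | beta ~ N(x_i'beta, 1) truncated by y_i,
# beta | z ~ N(V X'z, V) with prior beta ~ N(0, b0 * I).
b0 = 100.0
V = np.linalg.inv(X.T @ X + np.eye(2) / b0)
L = np.linalg.cholesky(V)

beta = np.zeros(2)
draws = np.empty((1_000, 2))
for t in range(draws.shape[0]):
    mu = X @ beta
    # Truncation bounds in standardized units: z > 0 if y = 1, z < 0 if y = 0.
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = truncnorm.rvs(lo, hi, loc=mu, scale=1.0, random_state=rng)
    m = V @ (X.T @ z)
    beta = m + L @ rng.normal(size=2)
    draws[t] = beta

b = draws[200:, 0] - draws[200:, 0].mean()
lag1 = (b[:-1] @ b[1:]) / (b @ b)                  # crude lag-1 autocorrelation
print("lag-1 autocorrelation of intercept draws:", round(lag1, 3))
```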

Relevance:

30.00%

Publisher:

Abstract:

My dissertation has three chapters which develop and apply microeconometric techniques to empirically relevant problems. All the chapters examine the robustness issues (e.g., measurement error and model misspecification) in the econometric analysis. The first chapter studies the identifying power of an instrumental variable in the nonparametric heterogeneous treatment effect framework when a binary treatment variable is mismeasured and endogenous. I characterize the sharp identified set for the local average treatment effect under the following two assumptions: (1) the exclusion restriction of an instrument and (2) deterministic monotonicity of the true treatment variable in the instrument. The identification strategy allows for general measurement error. Notably, (i) the measurement error is nonclassical, (ii) it can be endogenous, and (iii) no assumptions are imposed on the marginal distribution of the measurement error, so that I do not need to assume the accuracy of the measurement. Based on the partial identification result, I provide a consistent confidence interval for the local average treatment effect with uniformly valid size control. I also show that the identification strategy can incorporate repeated measurements to narrow the identified set, even if the repeated measurements themselves are endogenous. Using the National Longitudinal Study of the High School Class of 1972, I demonstrate that my new methodology can produce nontrivial bounds for the return to college attendance when attendance is mismeasured and endogenous.

The second chapter, which is a part of a coauthored project with Federico Bugni, considers the problem of inference in dynamic discrete choice problems when the structural model is locally misspecified. We consider two popular classes of estimators for dynamic discrete choice models: K-step maximum likelihood estimators (K-ML) and K-step minimum distance estimators (K-MD), where K denotes the number of policy iterations employed in the estimation problem. These estimator classes include popular estimators such as Rust (1987)'s nested fixed point estimator, Hotz and Miller (1993)'s conditional choice probability estimator, Aguirregabiria and Mira (2002)'s nested algorithm estimator, and Pesendorfer and Schmidt-Dengler (2008)'s least squares estimator. We derive and compare the asymptotic distributions of K-ML and K-MD estimators when the model is arbitrarily locally misspecified and we obtain three main results. In the absence of misspecification, Aguirregabiria and Mira (2002) show that all K-ML estimators are asymptotically equivalent regardless of the choice of K. Our first result shows that this finding extends to a locally misspecified model, regardless of the degree of local misspecification. As a second result, we show that an analogous result holds for all K-MD estimators, i.e., all K-MD estimators are asymptotically equivalent regardless of the choice of K. Our third and final result is to compare K-MD and K-ML estimators in terms of asymptotic mean squared error. Under local misspecification, the optimally weighted K-MD estimator depends on the unknown asymptotic bias and is no longer feasible. In turn, feasible K-MD estimators could have an asymptotic mean squared error that is higher or lower than that of the K-ML estimators. To demonstrate the relevance of our asymptotic analysis, we illustrate our findings in a simulation exercise based on a misspecified version of Rust (1987)'s bus engine problem.

The last chapter investigates the causal effect of the Omnibus Budget Reconciliation Act of 1993, which caused the biggest change to the EITC in its history, on unemployment and labor force participation among single mothers. Unemployment and labor force participation are difficult to define for a few reasons, for example, because of marginally attached workers. Instead of searching for the unique definition for each of these two concepts, this chapter bounds unemployment and labor force participation by observable variables and, as a result, considers various competing definitions of these two concepts simultaneously. This bounding strategy leads to partial identification of the treatment effect. The inference results depend on the construction of the bounds, but they imply a positive effect on labor force participation and a negligible effect on unemployment. The results imply that the difference-in-difference result based on the BLS definition of unemployment can be misleading due to misclassification of unemployment.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast acquisition imaging device with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU-valued materials represent higher density. High density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, using the Volumetric Modulated Arc Therapy (VMAT) technique were designed to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated.
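For reference, the short sketch below shows how a per-fraction percent dose error and one common homogeneity-index definition, HI = (Dmax - Dmin) / Dprescribed, can be computed. The abstract does not state which HI formula was used, so this definition and the example numbers are assumptions for illustration only.

```python
def percent_dose_error(d_calculated, d_reference):
    """Percent error per fraction between a calculated and a reference dose."""
    return abs(d_calculated - d_reference) / d_reference * 100.0

def homogeneity_index(d_max, d_min, d_prescribed):
    """One common homogeneity-index definition: (Dmax - Dmin) / Dprescribed.

    The study does not specify its HI formula; treat this as an assumed,
    illustrative definition.
    """
    return (d_max - d_min) / d_prescribed

# Hypothetical numbers (not taken from the study) for a 2 Gy/fraction plan:
print(round(percent_dose_error(2.07, 2.00), 1), "%")
print(round(homogeneity_index(2.10, 1.96, 2.00), 3))
```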

Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4-5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09-2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference ranging from 1.7-4.2% per fraction between plans with and without the GSI-based MAR algorithm. (2) A range of 0.1-3.2% difference was observed for the maximum dose values, 1.5-10.4% for the minimum dose difference, and 1.4-1.7% difference in the mean doses. Homogeneity indexes (HI) ranging from 0.068-0.065 for the dual-energy method and 0.063-0.141 with the projection-based MAR algorithm were also calculated.

Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%. This error undermines the goal of radiation therapy to deliver precise treatment; thus, the GSI-based MAR algorithm is desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than the GE MAR algorithm or no correction.

Relevance:

30.00%

Publisher:

Abstract:

The modern Atlantic Ocean, dominated by the interactions of North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW), plays a key role in redistributing heat from the Southern to the Northern Hemisphere. In order to reconstruct the evolution of the relative importance of these two water masses, the NADW/AABW transition, reflected by the calcite lysocline, was investigated by the Globigerina bulloides dissolution index (BDX?). The depth level of the Late Glacial Maximum (LGM) calcite lysocline was elevated by several hundred metres, indicating a more corrosive water mass present at modern NADW level. Overall, the small range of BDX? data and the gradual decrease in preservation below the calcite lysocline point to a less stratified Atlantic Ocean during the LGM. Similar preservation patterns in the West and East Atlantic demonstrate that the modern west-east asymmetry did not exist due to an expansion of southern deep waters compensating for the decrease in NADW formation.

Relevance:

30.00%

Publisher:

Abstract:

Respiration and ammonium excretion rates at different oxygen partial pressures were measured for calanoid copepods and euphausiids from the Eastern Tropical South Pacific and the Eastern Tropical North Atlantic. All specimens used for experiments were caught in the upper 400 m of the water column and only animals appearing unharmed and fit were used for experiments. Specimens were sorted, identified and transferred into aquaria with filtered, well-oxygenated seawater immediately after the catch and maintained for 1 to 13 hours prior to physiological experiments at the respective experimental temperature. Maintenance and physiological experiments were conducted in darkness in temperature-controlled incubators at 11, 13 or 23 °C (±1). Before and during experiments, animals were not fed. Respiration and ammonium excretion rate measurements (both in µmol h⁻¹ gDW⁻¹) at varying oxygen concentrations were conducted in 12 to 60 mL gas-tight glass bottles. These were equipped with oxygen microsensors (ø 3 mm, PreSens Precision Sensing GmbH, Regensburg, Germany) attached to the inner wall of the bottles to monitor oxygen concentrations non-invasively. Read-out of oxygen concentrations was conducted using multi-channel fiber optic oxygen transmitters (Oxy-4 and Oxy-10 mini, PreSens Precision Sensing GmbH, Regensburg, Germany) that were connected via optical fibers to the outside of the bottles directly above the oxygen microsensor spots. Measurements were started at pre-adjusted oxygen and carbon dioxide levels. For this, seawater stocks with adjusted pO2 and pCO2 were prepared by equilibrating 3 to 4 L of filtered (0.2 µm Whatman GFF filter) and UV-sterilized (Aqua Cristal UV C 5 Watt, JBL GmbH & Co. KG, Neuhofen, Germany) water with premixed gases (certified gas mixtures from Air Liquide) for 4 hours at the respective experimental temperature. pCO2 levels were chosen to mimic the environmental pCO2 in the ETSP OMZ or the ETNA OMZ. Experimental runs were conducted with 11 to 15 trial incubations (1 or 2 animals per incubation bottle and three different treatment levels) and three animal-free control incubations (one per experimental treatment). During each run, experimental treatments comprised 100% air saturation as well as one reduced air saturation level with and without CO2. Oxygen concentrations in the incubation bottles were recorded every 5 min using the fiber-optic microsensor system and data recording for respiration rate determination was started immediately after all animals were transferred. Respiration rates were calculated from the slope of oxygen decrease over selected time intervals. Chosen time intervals were 20 to 105 min long. No respiration rate was calculated for the first 20 to 60 min after animal transfer to avoid the impact of enhanced activity of the animal or changes in the bottle water temperature during initial handling on the respiration rates and oxygen readings. Respiration rates were obtained over a maximum of 16 hours incubation time and slopes were linear at normoxia to mild hypoxia. Respiration rates in animal-free control bottles were used to correct for microbial activity. These rates were < 2% of animal respiration rates at normoxia. Samples for the measurement of ammonium concentrations were taken after 2 to 10 hours incubation time. Ammonium concentration was determined fluorimetrically (Holmes et al., 1999). Ammonium excretion was calculated as the concentration difference between incubation and animal-free control bottles.
Some specimens died during the respiration and excretion rate measurements, as indicated by a cessation of respiration. No excretion rate measurements were conducted in this case, but the oxygen level at which the animal died was noted.
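As described above, rates were derived from the slope of the oxygen decrease over a selected time window, corrected using the animal-free controls and normalized to dry weight. The Python sketch below reproduces that calculation on hypothetical numbers; the record, window, bottle volume, and dry weight are assumptions for illustration, not data from these experiments.

```python
import numpy as np

def respiration_rate(t_h, o2_umol_l, bottle_vol_l, dry_weight_g,
                     window=(1.0, 2.5), control_slope_umol_l_h=0.0):
    """Respiration rate (umol O2 h-1 gDW-1) from an incubation O2 record.

    t_h            : times of the optode readings in hours
    o2_umol_l      : oxygen concentrations in umol/L
    window         : (start, end) of the evaluated interval in hours, chosen to
                     exclude the initial handling period
    control_slope_umol_l_h : slope of the animal-free control bottle, used to
                     correct for microbial activity
    """
    t_h, o2 = np.asarray(t_h), np.asarray(o2_umol_l)
    sel = (t_h >= window[0]) & (t_h <= window[1])
    slope, _ = np.polyfit(t_h[sel], o2[sel], 1)        # umol L-1 h-1 (negative)
    corrected = slope - control_slope_umol_l_h
    return -corrected * bottle_vol_l / dry_weight_g

# Hypothetical half-hourly readings from a 60 mL bottle, 10 mg dry weight:
t = np.arange(0, 4.01, 0.5)
o2 = 210 - 2.0 * t + np.random.default_rng(5).normal(0, 0.3, t.size)
print(round(respiration_rate(t, o2, 0.060, 0.010, control_slope_umol_l_h=-0.1), 1))
```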

Relevance:

30.00%

Publisher:

Abstract:

Aragonitic clathrites are methane-derived precipitates that are found at sites of massive near-seafloor gas hydrate (clathrate) accumulations at the summit of southern Hydrate Ridge, Cascadia margin. These platy carbonate precipitates form inside or in proximity to gas hydrate, which in our study site currently coexists with a fluid that is highly enriched in dissolved ions as salts are excluded during gas hydrate formation. The clathrites record the preferential incorporation of 18O into the hydrate structure and hence the enrichment of 16O in the surrounding brine. We measured d18O values as high as 2.27 per mil relative to Peedee belemnite that correspond to a fluid composition of -1.18 per mil relative to standard mean ocean water. The same trend can be observed in Ca isotopes. Ongoing clathrite precipitation causes enrichment of the 44Ca in the fluid and hence in the carbonates. Carbon isotopes confirm a methane source for the carbonates. Our triple stable isotope approach that uses the three main components of carbonates (Ca, C, O) provides insight into multiple parameters influencing the isotopic composition of the pore water and hence the isotopic composition of the clathrites. This approach provides a tool to monitor the geochemical processes during clathrate and clathrite formation, thus recording the evolution of the geochemical environment of gas hydrate systems.

Relevance:

30.00%

Publisher:

Abstract:

The lamination and burrowing patterns in 17 box cores were analyzed with the aid of X-ray photographs and thin sections. A standardized method of log plotting made statistical analysis of the data possible. Several 'structure types' were established, although it was realized that the boundaries are purely arbitrary divisions in what can sometimes be a continuous sequence. In the transition zone between the marginal sand facies and the fine-grained basin facies, muddy sediment is found which contains particularly well differentiated, alternating laminae. This zone is also characterized by layers rich in plant remains. The alternation of laminae shows a high degree of statistical scattering. Even though a small degree of cyclic periodicity could be defined, it was impossible to correlate individual layers from core to core across the bay. However, through statistical handling of the plots, zones could be separated on the basis of the number of sand layers they contained. These more or less sandy zones clarified the bottom reflections seen in the echograph records from the area. The manner of facies change across the bay suggests that no strong bottom currents are effective in Eckernförde Bay. The marked asymmetry between the north and south flanks of the profile can be attributed to the stronger action of waves on the more exposed areas. Grain size analyses were made on the more homogeneous units found in a core from the transition-facies zone. The results indicate that the most pronounced differences between layers appear in the silt range, and although the differences are slight, they are statistically significant. Layers rich in plant remains were wet-sieved in order to separate the plant detritus. This was then analyzed in a sediment settling balance and found to be hydrodynamically equivalent to a well-sorted, fine-grained sand. A special, rhythmic cross-bedding type with dimensions in the millimeter range has been named 'Crypto-cross-lamination' and is thought to represent rapid sedimentation in an area where only very weak bottom currents are present. It is found only in the deepest part of the basin. Relatively large sand grains, scattered within layers of clayey-silty matrix, seem to have been transported by flotation. Thin-section examination showed that in the inner part of Eckernförde Bay carbonate grains (e.g. foraminifera shells) were preserved throughout the cores, while in the outer part of the bay they were not present. Well defined tracks and burrows are relatively rare in all of the facies in comparison to the generally strongly developed deformation burrowing. The application of special measures of deformation burrowing made it possible to plot its intensity in profile for each core. A degree of regularity could be found in these burrowing intensity plots, with higher values appearing in the sandy facies, but with no clear differences between sand and silt layers in the transition facies. Small sections in the profiles of the deepest part of the bay show no bioturbation at all.

Relevance:

30.00%

Publisher:

Abstract:

Contents of free lipids in the upper layers of slightly siliceous diatomaceous oozes from the South Atlantic and of calcareous foraminiferal oozes, coral sediments, and red clays from the western tropical Pacific vary from 0.014 to 0.057% of dry sediment. Lipid content is inversely proportional to the total content of organic matter. The relative content of low-polar compounds in the total amount of lipids, together with the content of hydrocarbons, fatty acids, and sterols within these compounds, can serve as an index of the degree of transformation of organic matter in the sediment, because these compounds are resistant to microbial and hydrolytic decomposition to varying degrees and, consequently, are selectively preserved under conditions of biodegradation of organic compounds during oxidation-reduction processes.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

We describe the measurement of the depth of maximum, Xmax, of the longitudinal development of air showers induced by cosmic rays. Almost 4000 events above 10^18 eV observed by the fluorescence detector of the Pierre Auger Observatory in coincidence with at least one surface detector station are selected for the analysis. The average shower maximum was found to evolve with energy at a rate of (106 +35/−21) g/cm²/decade below 10^(18.24 ± 0.05) eV, and (24 ± 3) g/cm²/decade above this energy. The measured shower-to-shower fluctuations decrease from about 55 to 26 g/cm². The interpretation of these results in terms of the cosmic ray mass composition is briefly discussed.
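The quoted rates correspond to fitting the mean Xmax as a piecewise-linear ("broken-line") function of log10(E) with a break energy. The Python sketch below fits such a two-slope model to synthetic points with scipy; it is illustrative only and is not the Auger Collaboration's analysis, and the data points and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(lg_e, xmax_break, lg_e_break, d1, d2):
    """Piecewise-linear <Xmax> vs log10(E/eV); d1, d2 are the elongation rates
    (g/cm^2 per decade) below and above the break energy."""
    lg_e = np.asarray(lg_e)
    return np.where(lg_e < lg_e_break,
                    xmax_break + d1 * (lg_e - lg_e_break),
                    xmax_break + d2 * (lg_e - lg_e_break))

# Synthetic, purely illustrative data points (NOT Auger measurements).
rng = np.random.default_rng(6)
lg_e = np.linspace(18.0, 19.5, 16)
truth = broken_line(lg_e, 750.0, 18.25, 100.0, 25.0)
xmax = truth + rng.normal(0, 5.0, lg_e.size)

popt, pcov = curve_fit(broken_line, lg_e, xmax, p0=[750, 18.3, 80, 30])
xmax_b, lg_b, d1, d2 = popt
print(f"break at 10^{lg_b:.2f} eV, {d1:.0f} and {d2:.0f} g/cm^2/decade")
```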

Relevance:

30.00%

Publisher:

Abstract:

Objective Structured Clinical Examinations (OSCEs) improved the communication skills of Pharmacology students in the Medicine and Podiatry degrees. Bellido I, Blanco E, Gomez-Luque A. D. Pharmacology and Clinical Therapeutic, Medicine School, University of Malaga, IBIMA, Malaga, Spain. OSCEs are versatile, multipurpose evaluative tools that can be used to assess health care professionals in a clinical setting, including communication skills and the ability to handle unpredictable patient behavior, which are usually not included in the traditional clinical exam. Having students design and perform OSCEs is a novelty that students really like and that may improve their argumentation and planning capacities and their communication skills. Aim: To evaluate the impact of the design, development and presentation of OSCEs by students on the development of communication skills and on the learning of medicines in Medicine and Podiatry undergraduate students. Methods: A one-year study in which students were invited to voluntarily form groups (4 students maximum). Each group had to design and perform an OSCE (10 min maximum) showing a clinical situation/problem in which the use of medicines was needed. A clinical history, a camera, a mobile phone's video editor, photos, actors, dolls, simulators or whatever else they wished to use were allowed. The work of each group was supervised and supported by a teacher. The students were invited to present their work to the rest of the class. After each OSCE performance the students were encouraged to ask questions if they wished. After all the OSCE performances the students voluntarily answered a satisfaction survey. Results: Students of Pharmacology of the Medicine and Podiatry degrees (N=80, 53.75% female, 21±2.3 years old) were enrolled. 26 OSCEs showing a clinical situation or clinical problem were produced. The average time spent by students in making the OSCE was 21.5±9 h. The percentage of students who were satisfied with this way of presenting the OSCE was 89.7%. Conclusion: OSCEs designed and performed by students of Pharmacology in the Medicine and Podiatry degrees improved their communication skills.

Relevance:

30.00%

Publisher:

Abstract:

The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so that computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a-posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering, solves the MAP problem as well as Gibbs sampling, while requiring only a fraction of the computational effort. (For freely available code that implements the MAP-DP algorithm for Gaussian mixtures see http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the "rich get richer" property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it to variational, SVA and sampling approaches from both a computational complexity perspective as well as in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
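To convey the structure of the MAP-DP assignment rule (greedy negative-log-posterior assignment with a penalized new-cluster option), the Python sketch below implements a much-simplified version for spherical Gaussian clusters with fixed, known variance. The prior settings, convergence check, and example data are illustrative assumptions; the authors' released code at http://www.maxlittle.net/ handles the general conjugate exponential-family case.

```python
import numpy as np

def map_dp(X, N0=1.0, sigma2=1.0, mu0=None, s02=10.0, max_iter=100):
    """Simplified MAP-DP clustering sketch: spherical Gaussians with known
    variance sigma2, N(mu0, s02*I) prior on cluster means, DP concentration N0.
    Conveys only the structure of the algorithm, not the full method."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    mu0 = np.zeros(d) if mu0 is None else np.asarray(mu0, dtype=float)
    z = np.zeros(n, dtype=int)                       # start with one cluster

    def neg_log_pred(x, members):
        """-log posterior-predictive density of x given a cluster's members."""
        k = len(members)
        post_prec = 1.0 / s02 + k / sigma2
        post_mean = (mu0 / s02 + members.sum(axis=0) / sigma2) / post_prec
        pred_var = 1.0 / post_prec + sigma2
        return 0.5 * np.sum(np.log(2 * np.pi * pred_var)
                            + (x - post_mean) ** 2 / pred_var)

    for _ in range(max_iter):
        changed = False
        for i in range(n):
            # Existing clusters (dropping x_i's own singleton cluster, if any).
            labels = [c for c in np.unique(z) if c != z[i] or (z == z[i]).sum() > 1]
            costs, cands = [], []
            for c in labels:
                members = X[(z == c) & (np.arange(n) != i)]
                costs.append(-np.log(len(members)) + neg_log_pred(X[i], members))
                cands.append(c)
            # Option of opening a new cluster, penalized by -log(N0).
            costs.append(-np.log(N0) + neg_log_pred(X[i], X[:0]))
            cands.append(z.max() + 1)
            new = cands[int(np.argmin(costs))]
            if new != z[i]:
                z[i], changed = new, True
        if not changed:
            break
    _, z = np.unique(z, return_inverse=True)         # relabel clusters 0..K-1
    return z

# Tiny illustration on two well-separated blobs; should recover ~two clusters.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
print(np.bincount(map_dp(X, N0=1.0, sigma2=1.0)))
```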