921 results for Cut and paste method


Relevance:

100.00%

Publisher:

Abstract:

Biological detectors, such as canines, are valuable tools for the rapid identification of illicit materials. However, the reliability, field accuracy, and capabilities of detection canines have recently come under increased scrutiny in the legal system. For example, the Supreme Court case Florida v. Harris discussed the need for continuous monitoring of canine abilities, thresholds, and search capabilities. As a result, the fallibility of canines for detection was brought to light, as was the need for further research into and understanding of canine detection. This study is two-fold: it looks not only to create new canine training aids that can be manipulated for dissipation control, but also to investigate canine field accuracy with objects whose odors resemble those of illicit materials. The goal of this research was to improve upon current canine training aid mimics. Sol-gel polymer training aids, imprinted with the active odor of cocaine, were developed. This novel training aid improved upon the longevity of existing training aids while also providing a way to manipulate the polymer network to alter the dissipation rate of the imprinted active odors. Manipulating the polymer network could allow handlers to control the abundance of odors presented to their canines, familiarizing themselves with their canines' capabilities and thresholds and thereby strengthening the canines' standing in court. The field accuracy of detection canines was also called into question during the Supreme Court case Florida v. Jardines, where it was argued that if cocaine's active odor, methyl benzoate, were found to be produced by the popular landscaping flower, the snapdragon, canines would falsely alert to those flowers.
Therefore, snapdragon flowers were grown and tested both in the laboratory and in the field to determine the odors produced by snapdragon flowers; the persistence of these odors once flowers have been cut; and whether detection canines will alert to both growing and cut flowers during a blind search scenario. Results revealed that although methyl benzoate is produced by snapdragon flowers, certified narcotics detection canines can distinguish cocaine’s odor profile from that of snapdragon flowers and will not alert.

Abstract:

The potential of solid phase microextraction (SPME) in the analysis of explosives is demonstrated. A sensitive, rapid, solventless, and inexpensive method for the analysis of explosives and explosive odors from solid and liquid samples has been optimized using SPME followed by HPLC and GC/ECD. SPME involves the extraction of the organic components of debris samples into sorbent-coated silica fibers, which can be transferred directly to the injector of a gas chromatograph. SPME/HPLC requires a special desorption apparatus to elute the extracted analyte onto the column at high pressure. Results for the use of GC/ECD are presented and compared to those obtained by HPLC analysis. Controllable variables, including fiber chemistry, adsorption and desorption temperatures, extraction time, and desorption time, have been optimized for various high explosives.

Abstract:

A comprehensive investigation of sensitive ecosystems in South Florida, with the main goal of determining the identity, spatial distribution, and sources of both organic biocides and trace elements in different environmental compartments, is reported. This study presents the development and validation of a method for the fractionation and isolation of twelve polar acidic herbicides commonly applied in the vicinity of the study areas, including 2,4-D, MCPA, dichlorprop, mecoprop, and picloram, in surface water. Solid phase extraction (SPE) was used to isolate the analytes from abiotic matrices containing large amounts of dissolved organic material. Atmospheric-pressure ionization (API) with electrospray ionization in negative mode (ESI-) in a quadrupole ion trap mass spectrometer was used to characterize the herbicides of interest. The application of laser ablation ICP-MS (LA-ICP-MS) to the analysis of soils and sediments is also reported. The analytical performance of the method was evaluated on certified standards and real soil and sediment samples. Residential soils were analyzed to evaluate the feasibility of using this powerful technique as a routine and rapid method to monitor potentially contaminated sites. Forty-eight sediments were also collected from semi-pristine areas in South Florida to screen baseline levels of bioavailable elements in support of risk evaluation. The LA-ICP-MS data were used to perform a statistical evaluation of elemental composition as a tool for environmental forensics. An LA-ICP-MS protocol was also developed and optimized for the elemental analysis of a wide range of elements in polymeric filters containing atmospheric dust. A quantitative strategy based on internal and external standards allowed rapid determination of airborne trace elements in filters containing both contemporary African dust and local dust emissions.
These distributions were used to qualitatively and quantitatively assess compositional differences and to establish provenance and fluxes to protected regional ecosystems such as coral reefs and national parks.

Abstract:

Produced water is a by-product of offshore oil and gas production and is released in large volumes when platforms are actively processing crude oil. Some pollutants are not typically removed by conventional oil/water separation methods and are discharged with produced water. Oil and grease can be found dispersed in produced water in the form of tiny droplets, and polycyclic aromatic hydrocarbons (PAHs) are commonly found dissolved in produced water. Both can have acute and chronic toxic effects in marine environments, even at low exposure levels. The analysis of the dissolved and dispersed phases is a priority, but effort is required to meet the necessary detection limits. There are several methods for the analysis of produced water for dispersed oil and dissolved PAHs, all of which have advantages and disadvantages. In this work, EPA Method 1664 and APHA Method 5520 C for the determination of oil and grease are examined and compared. For the detection of PAHs, EPA Method 525 and PAH MIPs are compared and the results evaluated. APHA Method 5520 C, the Partition-Infrared Method, is a liquid-liquid extraction procedure with IR determination of oil and grease. For analysis of spiked samples of artificial seawater, extraction efficiency ranged from 85–97%. Linearity was achieved in the range of 5–500 mg/L. This is a single-wavelength method and is unsuitable for quantification of aromatics and other compounds that lack sp³-hybridized carbon atoms. EPA Method 1664 is the liquid-liquid extraction of oil and grease from water samples followed by gravimetric determination. When distilled water spiked with reference oil was extracted by this procedure, extraction efficiency ranged from 28.4–86.2%, and %RSD ranged from 7.68–38.0%. EPA Method 525 uses solid phase extraction with analysis by GC-MS, and was performed on distilled water and water from St. John's Harbour, all spiked with naphthalene, fluorene, phenanthrene, and pyrene.
The limits of detection in harbour water were 0.144, 3.82, 0.119, and 0.153 µg/L, respectively. Linearity was obtained in the range of 0.5–10 µg/L, and %RSD ranged from 0.36% (fluorene) to 46% (pyrene). Molecularly imprinted polymers (MIPs) are sorbent materials made selective by polymerizing functional monomers and crosslinkers in the presence of a template molecule, usually the analytes of interest or related compounds. They can adsorb and concentrate PAHs from aqueous environments and can be combined with methods of analysis including GC-MS, LC-UV-Vis, and desorption electrospray ionization (DESI)-MS. This work examines MIP-based methods as well as the previously mentioned methods currently used by the oil and gas industry and government environmental agencies. MIPs are shown to give results consistent with the other methods and are a low-cost alternative that improves ease, throughput, and sensitivity. PAH MIPs were used to determine naphthalene spiked into ASTM artificial seawater, as well as produced water from an offshore oil and gas operation. Linearity was achieved in the range studied (0.5–5 mg/L) for both matrices, with R² = 0.936 for seawater and R² = 0.819 for produced water. The %RSD for seawater ranged from 6.58–50.5% and for produced water from 8.19–79.6%.
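Summary statistics like the extraction efficiencies and %RSD values reported above are straightforward to compute from replicate measurements. A minimal Python sketch (the replicate recoveries and spike amount are invented for illustration):

```python
import statistics

def extraction_efficiency(recovered_mg, spiked_mg):
    """Percent of the spiked analyte recovered by the extraction."""
    return 100.0 * recovered_mg / spiked_mg

def percent_rsd(replicates):
    """Relative standard deviation: 100 * sample st. dev. / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.fmean(replicates)

# Hypothetical replicate recoveries (mg) from a 10.0 mg oil spike:
recoveries = [8.5, 9.1, 9.7, 8.8]
efficiencies = [extraction_efficiency(r, 10.0) for r in recoveries]
rsd = percent_rsd(recoveries)
```

Reporting both figures together, as the methods above do, separates systematic loss (low efficiency) from poor repeatability (high %RSD).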

Abstract:

Based on a quantitative study of diatoms and radiolarians, summer sea-surface temperatures (SSST) and sea ice distribution were estimated from 122 sediment core localities in the Atlantic, Indian, and Pacific sectors of the Southern Ocean to reconstruct the last glacial environment at the EPILOG (19.5–16.0 ka, or 23,000–19,000 cal yr B.P.) time slice. The statistical methods applied include the Imbrie and Kipp Method, the Modern Analog Technique, and the General Additive Model. Summer SSTs reveal greater surface-water cooling than reconstructed by CLIMAP (Geol. Soc. Am. Map Chart Ser. MC-36 (1981) 1), reaching a maximum (4–5 °C) in the present Subantarctic Zone of the Atlantic and Indian sectors. The reconstruction of maximum winter sea ice (WSI) extent is in accordance with CLIMAP, showing an expansion of the WSI field of around 100% compared to the present. Although only limited information is available, the data clearly show that CLIMAP strongly overestimated the glacial summer sea ice extent. As a result of the northward expansion of Antarctic cold waters by 5–10° of latitude and a relatively small displacement of the Subtropical Front, thermal gradients were steepened during the last glacial in the northern zone of the Southern Ocean. Such a reconstruction may, however, not hold for the Pacific sector. The few data available indicate reduced cooling in the southern Pacific and suggest a non-uniform cooling of the glacial Southern Ocean.
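The Modern Analog Technique mentioned above estimates a fossil sample's SST from the most similar modern assemblages. A minimal, illustrative Python sketch (the two-species training data, the squared chord distance metric, and the inverse-distance weighting are assumptions for demonstration, not the study's exact configuration):

```python
import math

def squared_chord_distance(a, b):
    """Dissimilarity between two species-proportion assemblages."""
    return sum((math.sqrt(x) - math.sqrt(y)) ** 2 for x, y in zip(a, b))

def mat_estimate(fossil, modern_assemblages, modern_ssts, k=3):
    """Modern Analog Technique sketch: average the SSTs of the k most
    similar modern (core-top) assemblages, weighted by inverse
    dissimilarity, to estimate the fossil sample's summer SST."""
    nearest = sorted(
        (squared_chord_distance(fossil, m), t)
        for m, t in zip(modern_assemblages, modern_ssts)
    )[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * t for w, (_, t) in zip(weights, nearest)) / sum(weights)

# Hypothetical two-species training set with known summer SSTs (deg C):
modern = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
ssts = [2.0, 10.0, 6.0]
estimate = mat_estimate([0.8, 0.2], modern, ssts, k=2)
```

Because the estimate is a weighted average of observed modern SSTs, it cannot extrapolate beyond the range of the training set, which is one reason no-analog conditions are a recognized caveat of the technique.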

Abstract:

Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is critically important, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three liposomal manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and D-(+)-trehalose 6,6′-dibehenate (TDB). The developed method offers rapidity, high sensitivity, good linearity, and consistent responses (R² > 0.993 for the four lipids tested). The corresponding limits of detection (LOD) and quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for the quantification of lipids within liposome formulations without the need for lipid extraction processes.
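LOD and LOQ figures like those above are commonly derived from the calibration curve using the ICH-style formulas LOD = 3.3·s/S and LOQ = 10·s/S, with s the residual standard deviation about the regression and S the slope. A minimal sketch (the calibration data below are invented for illustration, not the paper's measurements):

```python
import statistics

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def lod_loq(x, y):
    """LOD = 3.3*s/S and LOQ = 10*s/S, with s the residual standard
    deviation about the regression line and S the calibration slope."""
    slope, intercept = linear_fit(x, y)
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    s = (sum(r ** 2 for r in residuals) / (len(x) - 2)) ** 0.5
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical calibration: lipid concentration (mg/mL) vs ELSD response
conc = [0.5, 1.0, 2.0, 4.0]
resp = [1.1, 2.0, 4.1, 7.9]
lod, loq = lod_loq(conc, resp)
```

Note that ELSD response is often fitted on a log-log scale in practice; the straight-line fit here is the simplest case.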

Abstract:

We present a theoretical description of the generation of ultra-short, high-energy pulses in two laser cavities driven by periodic spectral filtering or dispersion management. Critical in driving the intra-cavity dynamics are the nontrivial phase profiles generated and their periodic modification by either spectral filtering or dispersion management. For laser cavities with a spectral filter, the theory gives a simple geometrical description of the intra-cavity dynamics and provides a simple, efficient method for optimizing laser cavity performance. In the dispersion-managed cavity, analysis shows the generated self-similar behavior to be governed by the porous medium equation with a rapidly varying, mean-zero diffusion coefficient, whose solution is the well-known Barenblatt similarity solution with parabolic profile.
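The self-similar solution referenced above can be written out. For the one-dimensional porous medium equation $u_t = (u^m)_{xx}$ with $m > 1$ (the constant-coefficient form; the abstract's equation carries a rapidly varying coefficient), the Barenblatt similarity solution is

```latex
u(x,t) = t^{-\alpha}\left( C - \frac{\alpha(m-1)}{2m}\,\frac{x^{2}}{t^{2\alpha}} \right)_{+}^{1/(m-1)},
\qquad \alpha = \frac{1}{m+1},
```

where $C$ is fixed by mass conservation and $(\cdot)_{+}$ denotes the positive part. For $m = 2$ the bracket becomes $C - x^{2}/(12\,t^{2/3})$, which is the parabolic profile mentioned above.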

Abstract:

The cyclic phosphazene trimers [N3P3(OC6H5)5OC5H4N·Ti(Cp)2Cl][PF6] (3), [N3P3(OC6H4CH2CN·Ti(Cp)2Cl)6][PF6]6 (4), [N3P3(OC6H4-But)5(OC6H4CH2CN·Ti(Cp)2Cl)][PF6] (5), [N3P3(OC6H5)5C6H4CH2CN·Ru(Cp)(PPh3)2][PF6] (6), [N3P3(OC6H5)5C6H4CH2CN·Fe(Cp)(dppe)][PF6] (7) and N3P3(OC6H5)5OC5H4N·W(CO)5 (8) were prepared and characterized. As models, the simple compounds [HOC5H5N·Ti(Cp)2Cl]PF6 (1) and [HOC6H4CH2CN·Ti(Cp)2Cl]PF6 (2) were also prepared and characterized. Pyrolysis of the organometallic cyclic trimers in air yields metallic nanostructured materials which, according to transmission and scanning electron microscopy (TEM/SEM), energy-dispersive X-ray microanalysis (EDX), and IR data, can be formulated as a metal oxide, a metal pyrophosphate, or, in some cases, a mixture, depending on the nature and quantity of the metal, the characteristics of the organic spacer, and the auxiliary substituent attached to the phosphorus cycle. Atomic force microscopy (AFM) data indicate the formation of small island-like and striated nanostructures. A plausible formation mechanism involving the formation of a cyclomatrix is proposed, and the pyrolysis of organometallic cyclic phosphazene polymers as a new and general method for obtaining metallic nanostructured materials is discussed.

Abstract:

We have harnessed two reactions catalyzed by the enzyme sortase A and applied them to generate new methods for the purification and site-selective modification of recombinant protein therapeutics.

We utilized native peptide ligation, a well-known function of sortase A, to attach a small molecule drug specifically to the carboxy-terminus of a recombinant protein. By combining this reaction with the unique phase behavior of elastin-like polypeptides, we developed a protocol that produces homogeneously labeled protein-small molecule conjugates using only centrifugation. The same reaction can be used to produce unmodified therapeutic proteins simply by substituting a single reactant. The isolated proteins or protein-small molecule conjugates do not carry any exogenous purification tags, eliminating the potential influence of such tags on bioactivity. Because both unmodified and modified proteins are produced by a general process that is the same for any protein of interest and does not require any chromatography, the time, effort, and cost associated with protein purification and modification are greatly reduced.

We also developed an innovative method that attaches a tunable number of drug molecules to any recombinant protein of interest in a site-specific manner. Although the ability of sortase A to carry out native peptide ligation is widely used, we demonstrated that sortase A is also capable of attaching small molecules to proteins through an isopeptide bond at lysine side chains within a unique amino acid sequence. This reaction, isopeptide ligation, is a new site-specific conjugation method that is orthogonal to all available protein-small molecule conjugation technologies and is the first site-specific conjugation method that attaches the payload to lysine residues. We show that isopeptide ligation can be applied broadly to peptides, proteins, and antibodies using a variety of small molecule cargoes to efficiently generate stable conjugates. We thoroughly assessed the site-selectivity of this reaction using a variety of analytical methods and showed that in many cases the reaction is site-specific for lysines in flexible, disordered regions of the substrate proteins. Finally, we showed that isopeptide ligation can be used to create clinically relevant antibody-drug conjugates with potent cytotoxicity towards cancerous cells.

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
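For reference, the Bingham distribution that the mirrored normal-Bingham generalizes on the hypersphere has the standard antipodally symmetric density

```latex
p(x;\,A) = \frac{1}{F(A)}\,\exp\!\left(x^{\top} A\, x\right),
\qquad x \in S^{d-1},
```

where $A$ is a symmetric parameter matrix and $F(A)=\int_{S^{d-1}}\exp(x^{\top}Ax)\,dx$ is the normalizing constant. Since $p(x) = p(-x)$, the density is antipodally symmetric, which makes it natural for unit-quaternion representations of orientation, where $q$ and $-q$ encode the same rotation.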

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to assess the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks from a sparsely sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.

Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
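The seed-based approach described above reduces to computing one Pearson correlation per voxel. A minimal Python sketch (the toy time courses are invented; real data would be preprocessed 4D volumes):

```python
import statistics

def pearson(a, b):
    """Pearson correlation coefficient between two time courses."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def seed_correlation_map(seed_ts, voxel_ts):
    """Correlate the seed time course with every voxel's time course;
    thresholding the resulting map outlines the resting-state network."""
    return [pearson(seed_ts, v) for v in voxel_ts]

# Toy data: a seed and two voxels, one in-network, one anti-correlated
seed = [1.0, 2.0, 3.0, 4.0]
cmap = seed_correlation_map(seed, [[1.0, 2.0, 3.0, 4.0],
                                   [4.0, 3.0, 2.0, 1.0]])
```

The map is then thresholded (and in practice corrected for multiple comparisons) to display the network.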

Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, which amounts to assuming that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.

Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is greatest. This method uses the same basic matrix math as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.

To begin to investigate the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients vary across window sizes and over time. From this technique it is apparent that the correlation with the seed region is not static over the length of the scan.

The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to boosted computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
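The point-process idea can be sketched as selecting the seed's threshold-crossing events and averaging each voxel's signal at just those time points. An illustrative Python sketch (the z-score threshold and toy data are assumptions, not the thesis's parameters):

```python
import statistics

def point_process_map(seed_ts, voxel_ts, z_thresh=1.0):
    """Point-process sketch: keep only the time points where the seed
    signal rises z_thresh standard deviations above its mean, then
    average each voxel's signal over just those event points."""
    m = statistics.fmean(seed_ts)
    s = statistics.stdev(seed_ts)
    events = [t for t, x in enumerate(seed_ts) if (x - m) / s > z_thresh]
    return [statistics.fmean([v[t] for t in events]) for v in voxel_ts]

# Toy seed with one clear high-amplitude event at index 3:
seed = [0.0, 0.1, -0.2, 3.0, 0.0, 0.2]
pmap = point_process_map(seed, [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0]])
```

Only the event time points enter the average, which is why the method is fast but statistically weaker than full correlation.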

This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.

Abstract:

Gold nanoparticles (Au NPs) with diameters ranging between 15 and 150 nm have been synthesised in water. The 15 and 30 nm Au NPs were obtained by the Turkevich and Frens method, using sodium citrate as both reducing and stabilising agent at high temperature (Au NPs-citrate), while the 60, 90 and 150 nm Au NPs were formed using hydroxylamine-O-sulfonic acid (HOS) as a reducing agent for HAuCl4 at room temperature. This new method using HOS is an extension of previously reported approaches for producing Au NPs with mean diameters above 40 nm by direct reduction. Functionalised polyethylene glycol-based thiol polymers were used to stabilise the pre-synthesised Au NPs. The nanoparticles obtained were characterised using UV-visible spectroscopy, dynamic light scattering (DLS) and transmission electron microscopy (TEM). Further bioconjugation of the 15, 30 and 90 nm PEGylated Au NPs was performed by grafting bovine serum albumin, transferrin and apolipoprotein E (ApoE).

Abstract:

The dissertation consists of three chapters related to the low-price guarantee marketing strategy and to energy efficiency analysis. A low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.

Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conducted a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find the response in competitors' prices to be significant. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence here supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
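The two-group, two-period comparison behind a difference-in-differences estimate can be sketched directly. A minimal illustration (the cents-per-liter prices are invented so that the guarantee stores' prices fall 0.7 c/L more than the controls'; the chapter's actual model is a regression with controls and clustered standard errors):

```python
import statistics

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-group, two-period difference-in-differences estimate:
    (treated post-pre change) minus (control post-pre change)."""
    return ((statistics.fmean(treat_post) - statistics.fmean(treat_pre))
            - (statistics.fmean(ctrl_post) - statistics.fmean(ctrl_pre)))

# Invented posted prices (cents/L) for guarantee and control stations:
effect = diff_in_diff(
    treat_pre=[90.0, 90.4], treat_post=[89.1, 89.9],  # drop 0.7 c/L
    ctrl_pre=[91.0, 91.2],  ctrl_post=[91.0, 91.2],   # unchanged
)
```

Subtracting the control group's change nets out market-wide price movements that affect both groups, which is the identifying assumption of the design.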

Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimated consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. The counterfactual analysis under a Bertrand competition setting shows that the stores that offered the guarantee attracted considerably more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about or regulate low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.

Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In developing the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change; the steps necessary for indicator development, from interacting with an industry to securing adequate data for the indicator; and the actual application and use of an indicator when complete. How indicators are employed in EPA's efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills.
The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.

Abstract:


Continuous variables are among the major data types collected by survey organizations. They can be incomplete, so that the data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to sum the values into cells defined by different features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method limits the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing the disclosure risks of releasing such synthetic magnitude microdata. An illustration with a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
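The sum-to-total guarantee has a convenient probabilistic basis: independent Poisson counts conditioned on their sum are jointly multinomial. A toy Python sketch of that conditioning step (the cell means are invented, and this ignores the mixture modeling and risk assessment of the actual method):

```python
import random

def synthesize_cells(total, means, seed=7):
    """Independent Poisson cells conditioned on a fixed sum are jointly
    multinomial, so dropping `total` units into cells with probabilities
    proportional to the fitted means yields synthetic non-negative
    integer cells that sum exactly to the published total."""
    rng = random.Random(seed)
    norm = sum(means)
    probs = [m / norm for m in means]
    cells = [0] * len(means)
    for _ in range(total):
        u, acc, i = rng.random(), probs[0], 0
        while u > acc and i < len(probs) - 1:
            i += 1
            acc += probs[i]
        cells[i] += 1
    return cells

# Invented fitted Poisson means for three establishment-size cells:
synthetic = synthesize_cells(total=200, means=[1.0, 2.0, 3.0])
```

Because each unit is assigned to exactly one cell, the synthetic cells reproduce the published total by construction rather than by rounding.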

The second method releases synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.

The third method imputes missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence, but separates the variables into non-focused (nearly fully observed) and focused (substantially missing) groups. The sub-model structure for the focused variables is more complex than that for the non-focused ones; their cluster indicators are linked by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model's properties suggest that moving strongly associated non-focused variables to the focused side can improve estimation accuracy, which is examined in several simulation studies. The method is applied to data from the American Community Survey.
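As background, the generic MI step that the thesis's coupled mixture model elaborates on can be sketched with a plain Bayesian-flavoured linear regression: a focused variable with heavy missingness is completed several times from a fully observed non-focused predictor, with a fresh parameter draw for each completed dataset. Everything below (variable names, data, the regression model itself) is an illustrative simplification, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def multiply_impute(x, y, m=5):
    """Produce m completed copies of y (heavy missingness) from fully
    observed x, drawing regression parameters from their approximate
    posterior once per copy so imputations reflect parameter uncertainty."""
    obs = ~np.isnan(y)
    X = np.column_stack([np.ones_like(x), x])
    Xo, yo = X[obs], y[obs]
    beta_hat, *_ = np.linalg.lstsq(Xo, yo, rcond=None)
    resid = yo - Xo @ beta_hat
    sigma2 = resid @ resid / (obs.sum() - 2)
    cov = sigma2 * np.linalg.inv(Xo.T @ Xo)

    datasets = []
    for _ in range(m):
        beta = rng.multivariate_normal(beta_hat, cov)  # parameter draw
        y_imp = y.copy()
        miss = ~obs
        y_imp[miss] = X[miss] @ beta + rng.normal(0, np.sqrt(sigma2), miss.sum())
        datasets.append(y_imp)
    return datasets

x = rng.normal(size=200)                        # non-focused: fully observed
y = 3.0 + 2.0 * x + rng.normal(scale=0.5, size=200)
y[rng.random(200) < 0.4] = np.nan               # focused: missing a lot
completed = multiply_impute(x, y, m=5)
```

The thesis's contribution replaces this single regression with coupled mixture components whose cluster indicators are tied together by tensor factorization, but the multiple-draws structure is the same.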

Relevância:

100.00%

Publicador:

Resumo:

Aim: Alcohol consumption is a leading cause of global suffering. The harms caused by alcohol to individuals, their peers and the society in which they live provoke public health concern. Elevated levels of consumption and consequences have been noted among those aged 18-29 years, and university students represent a unique subsection of society within this age group. University authorities have attempted to tackle this issue over the past decade, yet it persists. Thus, the aim of this study is to contribute to the evidence base for policy and practice in relation to alcohol harm reduction among third-level students in Ireland. Methods: A mixed methods approach was employed. A systematic review of the prevalence of hazardous alcohol consumption among university students in Ireland and the United Kingdom from 2002 to 2014 was conducted. In addition, a narrative synthesis of studies of drinking types evidenced among youths in western societies was undertaken. A cross-sectional study focused on university students' health and lifestyle behaviours, with particular reference to alcohol consumption, was undertaken using previously validated instruments. Undergraduate students registered at one university in Ireland were recruited using two separate modes: classroom and online. The studies investigated the impact of mode of data collection, the prevalence of hazardous alcohol consumption, and the resultant adverse consequences for mental health and wellbeing. In addition, a study using a Q-methodology approach was undertaken to gain a deeper understanding of the cultural factors influencing current patterns of alcohol consumption. Data were analysed using IBM SPSS Statistics 20, Stata 12, Mplus and PQMethod. Results: The literature review focusing on students' alcohol consumption found both an increase in hazardous alcohol consumption among university students and a convergence of male and female drinking patterns over the past decade.
Updating this research, the thesis found that two-thirds of university students consume alcohol at a hazardous level, and it details the range of adverse consequences reported by university students in Ireland. Finally, the heterogeneous nature of this drinking was described in a narrative synthesis identifying six types of consumption. The succeeding chapters develop this review further by describing three typologies of consumption: two quantitative and one quali-quantilogical. The quantitative typology describes three types of drinking for men (realistic hedonist, responsible conformer and guarded drinker) and four types for women (realistic hedonist, peer-influenced, responsible conformer and guarded drinker). The quali-quantilogical approach describes four types of consumption, defined as the 'guarded drinker', the 'calculated hedonist', the 'peer-influenced drinker' and the 'inevitable binger'. Discussion: The findings of this thesis highlight the scale of the issue and provide up-to-date estimates of alcohol consumption among university students in Ireland. Hazardous alcohol consumption is associated with a range of harms to self and to others in proximity to the drinker. The classification of drinkers into types signals the need for university management, health promotion practitioners and public health policy makers to tackle this issue using a multi-faceted approach.