913 results for Kirby and Bauer method


Abstract:

Produced water is a by-product of offshore oil and gas production, and is released in large volumes when platforms are actively processing crude oil. Some pollutants are not typically removed by conventional oil/water separation methods and are discharged with produced water. Oil and grease can be found dispersed in produced water in the form of tiny droplets, and polycyclic aromatic hydrocarbons (PAHs) are commonly found dissolved in produced water. Both can have acute and chronic toxic effects in marine environments even at low exposure levels. The analysis of the dissolved and dispersed phases is a priority, but effort is required to meet the necessary detection limits. There are several methods for the analysis of produced water for dispersed oil and dissolved PAHs, all of which have advantages and disadvantages. In this work, EPA Method 1664 and APHA Method 5520 C for the determination of oil and grease are examined and compared. For the detection of PAHs, EPA Method 525 and PAH MIPs are compared and the results evaluated. APHA Method 5520 C, the Partition-Infrared Method, is a liquid-liquid extraction procedure with IR determination of oil and grease. In analyses of spiked samples of artificial seawater, extraction efficiency ranged from 85 – 97%. Linearity was achieved in the range of 5 – 500 mg/L. This is a single-wavelength method and is unsuitable for quantification of aromatics and other compounds that lack sp³-hybridized carbon atoms. EPA Method 1664 is the liquid-liquid extraction of oil and grease from water samples followed by gravimetric determination. When distilled water spiked with reference oil was extracted by this procedure, extraction efficiency ranged from 28.4 – 86.2%, and %RSD ranged from 7.68 – 38.0%. EPA Method 525 uses solid phase extraction with analysis by GC-MS, and was performed on distilled water and water from St. John’s Harbour, all spiked with naphthalene, fluorene, phenanthrene, and pyrene. The limits of detection in harbour water were 0.144, 3.82, 0.119, and 0.153 µg/L, respectively. Linearity was obtained in the range of 0.5 – 10 µg/L, and %RSD ranged from 0.36% (fluorene) to 46% (pyrene). Molecularly imprinted polymers (MIPs) are sorbent materials made selective by polymerizing functional monomers and crosslinkers in the presence of a template molecule, usually the analytes of interest or related compounds. They can adsorb and concentrate PAHs from aqueous environments and can be combined with methods of analysis including GC-MS, LC-UV-Vis, and desorption electrospray ionization (DESI)-MS. This work examines MIP-based methods as well as the methods mentioned above, which are currently used by the oil and gas industry and government environmental agencies. MIPs are shown to give results consistent with the other methods, and are a low-cost alternative that improves ease of use, throughput, and sensitivity. PAH MIPs were used to determine naphthalene spiked into ASTM artificial seawater, as well as into produced water from an offshore oil and gas operation. Linearity was achieved in the range studied (0.5 – 5 mg/L) for both matrices, with R² = 0.936 for seawater and R² = 0.819 for produced water. The %RSD for seawater ranged from 6.58 – 50.5% and for produced water from 8.19 – 79.6%.
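
For reference, the figures of merit quoted throughout these comparisons are conventionally computed as follows (ICH-style definitions; the thesis may use a different convention):

```latex
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad
\mathrm{LOQ} = \frac{10\,\sigma}{S}, \qquad
\%\mathrm{RSD} = 100 \times \frac{s}{\bar{x}}
```

where \sigma is the standard deviation of the blank (or of the calibration residuals), S is the calibration slope, and s and \bar{x} are the standard deviation and mean of replicate measurements.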

Abstract:

Based on the quantitative study of diatoms and radiolarians, summer sea-surface temperature (SSST) and sea ice distribution were estimated from 122 sediment core localities in the Atlantic, Indian and Pacific sectors of the Southern Ocean to reconstruct the last glacial environment at the EPILOG (19.5-16.0 ka, or 23 000-19 000 cal yr B.P.) time-slice. The statistical methods applied include the Imbrie and Kipp Method, the Modern Analog Technique and the General Additive Model. Summer SSTs reveal greater surface-water cooling than reconstructed by CLIMAP (Geol. Soc. Am. Map Chart. Ser. MC-36 (1981) 1), reaching a maximum (4-5 °C) in the present Subantarctic Zone of the Atlantic and Indian sectors. The reconstruction of maximum winter sea ice (WSI) extent is in accordance with CLIMAP, showing an expansion of the WSI field by around 100% compared to the present. Although only limited information is available, the data clearly show that CLIMAP strongly overestimated the glacial summer sea ice extent. As a result of the northward expansion of Antarctic cold waters by 5-10° in latitude and a relatively small displacement of the Subtropical Front, thermal gradients were steepened during the last glacial in the northern zone of the Southern Ocean. Such a reconstruction may, however, not apply to the Pacific sector. The few data available indicate reduced cooling in the southern Pacific and suggest a non-uniform cooling of the glacial Southern Ocean.
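
Of the three statistical methods named, the Modern Analog Technique is the most transparent: the SST for a fossil sample is estimated as the dissimilarity-weighted mean of the SSTs at its most similar modern core-top samples. A minimal sketch under assumed array shapes (illustrative only, not the study's code):

```python
import numpy as np

def mat_sst(fossil, modern, modern_sst, k=10):
    """Modern Analog Technique: estimate SST for one fossil assemblage.

    fossil:     (n_taxa,) relative abundances of one downcore sample
    modern:     (n_sites, n_taxa) core-top reference assemblages
    modern_sst: (n_sites,) observed summer SSTs at the reference sites
    """
    # Squared-chord distance, a standard dissimilarity for microfossil data
    d = ((np.sqrt(fossil) - np.sqrt(modern)) ** 2).sum(axis=1)
    best = np.argsort(d)[:k]                 # k closest modern analogs
    w = 1.0 / np.maximum(d[best], 1e-9)      # inverse-distance weights
    return float((w * modern_sst[best]).sum() / w.sum())
```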

Abstract:

Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is of extreme importance, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows for the rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three liposomal manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and D-(+)-trehalose 6,6′-dibehenate (TDB). The developed method offers rapidity, high sensitivity, direct linearity, and good consistency in the responses (R² > 0.993 for the four lipids tested). The corresponding limits of detection (LOD) and limits of quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for the quantification of lipids within liposome formulations without the need for lipid extraction processes.
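
To illustrate how such calibration figures are obtained, here is a minimal sketch with invented numbers (not the paper's data), fitting a linear ELSD calibration and deriving R², LOD, and LOQ from the regression residuals:

```python
import numpy as np

# Hypothetical calibration points: lipid concentration (mg/mL) vs. ELSD peak area
conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
area = np.array([12.0, 31.0, 60.0, 123.0, 241.0, 489.0])

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()

sigma = (area - pred).std(ddof=2)   # residual standard deviation
lod = 3.3 * sigma / slope           # ICH-style limit of detection
loq = 10.0 * sigma / slope          # ICH-style limit of quantification
print(f"R^2 = {r2:.4f}, LOD = {lod:.3f} mg/mL, LOQ = {loq:.3f} mg/mL")
```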

Abstract:

We present a theoretical description of the generation of ultra-short, high-energy pulses in two laser cavities driven by periodic spectral filtering or dispersion management. Critical in driving the intra-cavity dynamics are the nontrivial phase profiles generated and their periodic modification by either spectral filtering or dispersion management. For laser cavities with a spectral filter, the theory gives a simple geometrical description of the intra-cavity dynamics and provides an efficient method for optimizing laser cavity performance. In the dispersion-managed cavity, analysis shows the generated self-similar behavior to be governed by the porous media equation with a rapidly-varying, mean-zero diffusion coefficient, whose solution is the well-known Barenblatt similarity solution with parabolic profile.
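
For context, the constant-coefficient one-dimensional porous media equation and its Barenblatt similarity solution take the standard form (the paper's equation additionally carries the rapidly-varying, mean-zero coefficient):

```latex
u_t = \left(u^m\right)_{xx}, \qquad
u(x,t) = t^{-\alpha}\left[C - \frac{\alpha(m-1)}{2m}\,\frac{x^2}{t^{2\alpha}}\right]_+^{1/(m-1)}, \qquad
\alpha = \frac{1}{m+1}
```

For m = 2 the exponent 1/(m-1) equals 1, which is the parabolic profile referred to above.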

Abstract:

The cyclic phosphazene trimers [N3P3(OC6H5)5OC5H4N·Ti(Cp)2Cl][PF6] (3), [N3P3(OC6H4CH2CN·Ti(Cp)2Cl)6][PF6]6 (4), [N3P3(OC6H4-But)5(OC6H4CH2CN·Ti(Cp)2Cl)][PF6] (5), [N3P3(OC6H5)5C6H4CH2CN·Ru(Cp)(PPh3)2][PF6] (6), [N3P3(OC6H5)5C6H4CH2CN·Fe(Cp)(dppe)][PF6] (7) and N3P3(OC6H5)5OC5H4N·W(CO)5 (8) were prepared and characterized. As models, the simple compounds [HOC5H5N·Ti(Cp)2Cl]PF6 (1) and [HOC6H4CH2CN·Ti(Cp)2Cl]PF6 (2) were also prepared and characterized. Pyrolysis of the organometallic cyclic trimers in air yields metallic nanostructured materials, which, according to transmission and scanning electron microscopy (TEM/SEM), energy-dispersive X-ray microanalysis (EDX), and IR data, can be formulated as a metal oxide, a metal pyrophosphate or, in some cases, a mixture, depending on the nature and quantity of the metal, the characteristics of the organic spacer and the auxiliary substituent attached to the phosphorus cycle. Atomic force microscopy (AFM) data indicate the formation of small island and striate nanostructures. A plausible formation mechanism involving the formation of a cyclomatrix is proposed, and the pyrolysis of organometallic cyclic phosphazene polymers as a new and general method for obtaining metallic nanostructured materials is discussed.

Abstract:

We have harnessed two reactions catalyzed by the enzyme sortase A and applied them to generate new methods for the purification and site-selective modification of recombinant protein therapeutics.

We utilized native peptide ligation, a well-known function of sortase A, to attach a small molecule drug specifically to the carboxy-terminus of a recombinant protein. By combining this reaction with the unique phase behavior of elastin-like polypeptides, we developed a protocol that produces homogeneously labeled protein-small molecule conjugates using only centrifugation. The same reaction can be used to produce unmodified therapeutic proteins simply by substituting a single reactant. The isolated proteins or protein-small molecule conjugates do not have any exogenous purification tags, eliminating the potential influence of these tags on bioactivity. Because both unmodified and modified proteins are produced by a general process that is the same for any protein of interest and does not require any chromatography, the time, effort, and cost associated with protein purification and modification are greatly reduced.

We also developed an innovative method that attaches a tunable number of drug molecules to any recombinant protein of interest in a site-specific manner. Although the ability of sortase A to carry out native peptide ligation is widely used, we demonstrated that sortase A is also capable of attaching small molecules to proteins through an isopeptide bond at lysine side chains within a unique amino acid sequence. This reaction, isopeptide ligation, is a new site-specific conjugation method that is orthogonal to all available protein-small molecule conjugation technologies and is the first site-specific conjugation method that attaches the payload to lysine residues. We show that isopeptide ligation can be applied broadly to peptides, proteins, and antibodies using a variety of small molecule cargoes to efficiently generate stable conjugates. We thoroughly assessed the site-selectivity of this reaction using a variety of analytical methods and showed that in many cases the reaction is site-specific for lysines in flexible, disordered regions of the substrate proteins. Finally, we showed that isopeptide ligation can be used to create clinically relevant antibody-drug conjugates that have potent cytotoxicity towards cancerous cells.

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
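
For orientation, the Bingham distribution that this construction generalizes has the standard density (the mirrored normal-Bingham generalization itself is defined in the thesis):

```latex
p(x) = \frac{1}{c(A)}\,\exp\!\left(x^{\top} A\, x\right), \qquad x \in S^{d-1}
```

where A is a symmetric parameter matrix and c(A) is a normalizing constant. Since p(x) = p(-x), the distribution is antipodally symmetric, which suits unit-quaternion orientations, where q and -q encode the same rotation.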

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
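
For context, the classic image-space inverse perspective mapping that these methods adapt to feature space can be sketched with OpenCV (hypothetical point correspondences; a sketch, not the thesis's implementation):

```python
import cv2
import numpy as np

# Four image points on the ground plane (hypothetical) and their target
# positions in a top-down "bird's-eye" view.
src = np.float32([[320, 400], [960, 400], [1180, 720], [100, 720]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 ground-plane homography

img = np.zeros((720, 1280, 3), dtype=np.uint8)   # stand-in for a camera frame
birdseye = cv2.warpPerspective(img, H, (1280, 720))
```

The thesis's methods apply a transform like H to HOG-like feature maps rather than to the pixels themselves, which avoids a full image warp for every candidate pose.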

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Abstract:

Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis work utilizes blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to investigate the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks using a sparsely-sampled subset of the original data.

In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting state components interact in space and time.

Correlation is perhaps the simplest technique. Using this technique, resting-state patterns can be identified based on how similar a voxel's time profile is to a seed region’s time profile. However, this method requires a seed region and can only identify one resting state network at a time. This simple correlation technique is able to reproduce the resting state network using data from a single subject’s scan session as well as from a group of 16 subjects.
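
A minimal sketch of seed-based correlation mapping (assumed array shapes; illustrative only):

```python
import numpy as np

def seed_correlation_map(data, seed_ts):
    """Pearson correlation of every voxel time series with a seed time series.

    data:    (n_voxels, n_timepoints) BOLD array
    seed_ts: (n_timepoints,) mean time series of the seed region
    """
    data = data - data.mean(axis=1, keepdims=True)
    seed = seed_ts - seed_ts.mean()
    num = data @ seed
    den = np.linalg.norm(data, axis=1) * np.linalg.norm(seed)
    return num / den    # (n_voxels,) correlation map
```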

The second technique, independent component analysis (ICA), has well-established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting state connectivity patterns both for one subject and for a 16-subject concatenated data set.
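
Spatial ICA of this kind can be sketched with scikit-learn's FastICA (one possible implementation; dedicated fMRI packages such as MELODIC or GIFT are the established tools the text refers to):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Random stand-in for a (n_voxels, n_timepoints) BOLD array.
data = np.random.default_rng(0).standard_normal((5000, 240))

# Treating voxels as samples makes the extracted components spatial maps.
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(data)    # (n_voxels, n_components)
time_courses = ica.mixing_                # (n_timepoints, n_components)
```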

Using principal component analysis (PCA), the dimensionality of the data is reduced to find the directions in which the variance of the data is greatest. This method uses the same basic matrix math as ICA, with a few important differences that will be outlined later in this text. With this method, distinct functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.

To begin to investigate the dynamics of the functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding window technique is implemented to study how the correlation coefficients evolve over time for different window sizes. From this technique it is apparent that the correlation level with the seed region is not static throughout the scan.
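
The sliding-window analysis can be sketched as follows (the window length in volumes is an assumed choice):

```python
import numpy as np

def sliding_window_corr(seed_ts, voxel_ts, win=30, step=1):
    """Windowed Pearson correlation between a seed and a voxel time series."""
    r = []
    for s in range(0, len(seed_ts) - win + 1, step):
        r.append(np.corrcoef(seed_ts[s:s+win], voxel_ts[s:s+win])[0, 1])
    return np.array(r)    # one correlation value per window position
```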

The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high or low amplitude signals within a seed region. Because point processing utilizes fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to boosted computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
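
A sketch of the point-process idea (the threshold of one standard deviation is an assumed choice):

```python
import numpy as np

def point_process_map(data, seed_ts, z_thresh=1.0):
    """Average the BOLD volumes at time points where the seed signal
    makes a brief high-amplitude excursion.

    data: (n_voxels, n_timepoints); seed_ts: (n_timepoints,)
    """
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = np.flatnonzero(z > z_thresh)    # high-amplitude time points
    return data[:, events].mean(axis=1)      # (n_voxels,) event-average map
```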

This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. Examination of the more dynamic aspects of correlation revealed changing patterns of correlation over time. The last, point-process method produces promising results, identifying functional connectivity networks using only low and high amplitude BOLD signals.

Abstract:

Gold nanoparticles (Au NPs) with diameters ranging between 15 and 150 nm have been synthesised in water. The 15 and 30 nm Au NPs were obtained by the Turkevich and Frens method, using sodium citrate as both reducing and stabilising agent at high temperature (Au NPs-citrate), while the 60, 90 and 150 nm Au NPs were formed using hydroxylamine-O-sulfonic acid (HOS) as a reducing agent for HAuCl4 at room temperature. This new method using HOS is an extension of approaches previously reported for producing Au NPs with mean diameters above 40 nm by direct reduction. Functionalised polyethylene glycol-based thiol polymers were used to stabilise the pre-synthesised Au NPs. The nanoparticles obtained were characterised using UV-visible spectroscopy, dynamic light scattering (DLS) and transmission electron microscopy (TEM). Further bioconjugation of the 15, 30 and 90 nm PEGylated Au NPs was performed by grafting bovine serum albumin, transferrin and apolipoprotein E (ApoE).

Abstract:

The dissertation consists of three chapters related to the low-price guarantee marketing strategy and energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee back in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of the firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper and paperboard industry.

Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conducted a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also built a difference-in-difference model, which quantifies the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes if I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
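
A generic two-way fixed-effects difference-in-differences specification of this kind (my notation; the chapter's exact controls may differ) is:

```latex
p_{it} = \beta\,(\mathrm{Guarantee}_i \times \mathrm{Post}_t) + \gamma_i + \delta_t + \varepsilon_{it}
```

where \gamma_i are station fixed effects, \delta_t are time fixed effects, and \beta captures the post-adoption change in posted prices at guarantee stores, estimated here at about -0.7 cents per liter.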

Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and to explain firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimated consumers' demand for gasoline with a structural model of spatial competition incorporating the low-price guarantee as a commitment device, which allows firms to pre-commit to charging the lowest price among their competitors. The counterfactual analysis under the Bertrand competition setting shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product about which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about or regulate low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.

Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in its industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, access to energy performance data for conducting industry benchmarking is usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In the development of the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change, and to the steps necessary for indicator development, from engaging with an industry to secure adequate data through to the actual application and use of an indicator once complete. How indicators are employed in EPA’s efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper & paperboard mills. The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.

Abstract:

Continuous variables are one of the major data types collected by survey organizations. They can be incomplete, so that data collectors need to fill in the missing values, or they can contain sensitive information that needs protection from re-identification. One approach to protecting continuous microdata is to aggregate values into cells defined by combinations of features. In this thesis, I present novel methods of multiple imputation (MI) that can be applied to impute missing values and to synthesize confidential values for continuous and magnitude data.

The first method is for limiting the disclosure risk of continuous microdata whose marginal sums are fixed. The motivation for developing such a method comes from the magnitude tables of non-negative integer values in economic surveys. I present approaches based on a mixture of Poisson distributions to describe the multivariate distribution so that the marginals of the synthetic data are guaranteed to sum to the original totals. At the same time, I present methods for assessing disclosure risks in releasing such synthetic magnitude microdata. An illustration using a survey of manufacturing establishments shows that the disclosure risks are low while the information loss is acceptable.
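
One standard device for guaranteeing fixed totals under a Poisson model (a textbook fact; the chapter's mixture construction builds on conditioning of this kind) is that independent Poisson counts, conditioned on their sum, are multinomial:

```latex
X_i \sim \mathrm{Poisson}(\lambda_i) \ \text{indep.} \;\Longrightarrow\;
(X_1,\dots,X_k) \,\Big|\, \textstyle\sum_i X_i = N
\;\sim\; \mathrm{Multinomial}\!\left(N;\ \tfrac{\lambda_1}{\Lambda},\dots,\tfrac{\lambda_k}{\Lambda}\right),
\quad \Lambda = \textstyle\sum_i \lambda_i
```

Sampling synthetic cells from this conditional distribution reproduces the published total N exactly.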

The second method is for releasing synthetic continuous microdata by a nonstandard MI method. Traditionally, MI fits a model on the confidential values and then generates multiple synthetic datasets from this model. Its disclosure risk tends to be high, especially when the original data contain extreme values. I present a nonstandard MI approach conditioned on protective intervals. Its basic idea is to estimate the model parameters from these intervals rather than from the confidential values. The encouraging results of simple simulation studies suggest the potential of this new approach for limiting the posterior disclosure risk.

The third method is for imputing missing values in continuous and categorical variables. It extends a hierarchically coupled mixture model with local dependence. The new method separates the variables into non-focused (e.g., almost fully observed) and focused (e.g., largely missing) ones. The sub-model structure of the focused variables is more complex than that of the non-focused ones; their cluster indicators are linked together by tensor factorization, and the focused continuous variables depend locally on non-focused values. The model properties suggest that moving strongly associated non-focused variables to the side of the focused ones can improve estimation accuracy, which is examined in several simulation studies. Finally, this method is applied to data from the American Community Survey.

Abstract:

Aim: Alcohol consumption is a leading cause of global suffering. The harms caused by alcohol to individuals, their peers and the society in which they live provoke public health concern. Elevated levels of consumption and consequences have been noted in those aged 18-29 years. University students represent a unique subsection of society within this age group. University authorities have attempted to tackle this issue throughout the past decade; however, the issue persists. Thus, the aim of this study is to contribute to the evidence base for policy and practice in relation to alcohol harm reduction among third-level students in Ireland. Methods: A mixed methods approach was employed. A systematic review of the prevalence of hazardous alcohol consumption among university students in Ireland and the United Kingdom from 2002 to 2014 was conducted. In addition, a narrative synthesis of studies of drinking types evidenced among youths in western societies was undertaken. A cross-sectional study of university students’ health and lifestyle behaviours, with particular reference to alcohol consumption, was undertaken using previously validated instruments. Undergraduate students registered at one university in Ireland were recruited using two separate modes: classroom and online. Studies investigated the impact of the mode of data collection, the prevalence of hazardous alcohol consumption and the resultant adverse consequences for mental health and wellbeing. In addition, a study using a Q-methodology approach was undertaken to gain a deeper understanding of the cultural factors influencing current patterns of alcohol consumption. Data were analysed using IBM SPSS Statistics 20, Stata 12, Mplus and PQMethod. Results: The literature review focusing on students’ alcohol consumption found that there has been both an increase in hazardous alcohol consumption among university students and a convergence of male and female drinking patterns throughout the past decade. Updating this research, the thesis found that two-thirds of university students consume alcohol at a hazardous level, and it details the range of adverse consequences reported by university students in Ireland. Finally, the heterogeneous nature of this drinking was described in a narrative synthesis exposing six types of consumption. The succeeding chapters develop this review further by describing three typologies of consumption, two quantitative and one quali-quantilogical. The quantitative typology describes three types of drinking for men (realistic hedonist, responsible conformer and guarded drinker) and four types for women (realistic hedonist, peer-influenced, responsible conformer and guarded drinker). The quali-quantilogical approach describes four types of consumption, defined as the ‘guarded drinker’, the ‘calculated hedonist’, the ‘peer-influenced drinker’ and the ‘inevitable binger’. Discussion: The findings of this thesis highlight the scale of the issue and provide up-to-date estimates of alcohol consumption among university students in Ireland. Hazardous alcohol consumption is associated with a range of harms to self and harms to others in proximity to the alcohol consumer. The classification of drinkers into types signals the necessity for university management, health promotion practitioners and public health policy makers to tackle this issue using a multi-faceted approach.

Abstract:

Petri Nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but are not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand, and are therefore more useful for modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit state model checking and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it only covers some execution paths in an HLPN model. Explicit state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities of this framework. The SAMTools suite developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
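
To make the low-level/high-level distinction concrete, here is a minimal low-level Petri net interpreter (an illustrative sketch, not part of PIPE+; an HLPN would additionally attach structured tokens and algebraic transition formulas):

```python
class PetriNet:
    """Minimal low-level Petri net: plain integer tokens, no data or guards."""

    def __init__(self, marking):
        self.marking = dict(marking)       # place -> token count
        self.transitions = {}              # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (dict(inputs), dict(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A producer/consumer control flow: 'produce' adds a token to the buffer,
# 'consume' removes one when available.
net = PetriNet({"idle": 1, "buffer": 0})
net.add_transition("produce", {"idle": 1}, {"idle": 1, "buffer": 1})
net.add_transition("consume", {"buffer": 1}, {})
net.fire("produce")
assert net.enabled("consume")
```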

Abstract:

Purpose: Bullying is a specific pattern of repeated victimization explored with great frequency in school-based literature, but one that has received little attention within sport. The current study explored the prevalence of bullying in sport, and examined whether bullying experiences were associated with perceptions about relationships with peers and coaches. Method: Adolescent sport team members (n = 359, 64% female) with an average age of 14.47 years (SD = 1.34) completed a pen-and-paper or online questionnaire assessing how frequently they perpetrated or were victimized by bullying during school and sport generally, as well as recent experiences with 16 bullying behaviors on their sport team. Participants also reported on relationships with their coach and teammates. Results: Bullying was less prevalent in sport compared with school, and occurred at a relatively low frequency overall. However, by identifying participants who reported experiencing one or more acts of bullying on their team recently, results revealed that those victimized through bullying reported weaker connections with peers, whereas those perpetrating bullying reported weaker coach relationships only. Conclusion: With the underlying message that bullying may occur in adolescent sport through negative teammate interactions, sport researchers should build upon these findings to develop approaches to mitigate peer victimization in sport.

Abstract:

Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. As wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time series forecasting is introduced in this study to address these issues. This new method is shown to be capable of reducing computational complexity and increasing prediction accuracy. It is further proved that the forecasting result converges as the amount of available data approaches infinity. Further, a teaching learning based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that from a single wind farm, to show the effectiveness of the proposed method.
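
As a baseline illustration, plain Gaussian Process regression for such a forecasting task can be sketched with scikit-learn (synthetic data standing in for wind-power measurements; the paper's variant GP and TLBO training are not reproduced here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic toy series standing in for hourly wind-power output.
t = np.arange(200, dtype=float).reshape(-1, 1)
y = np.sin(0.1 * t).ravel() + 0.1 * np.random.default_rng(0).standard_normal(200)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel).fit(t[:150], y[:150])

mean, std = gp.predict(t[150:], return_std=True)   # forecast with uncertainty
```

The cubic training cost in the number of samples is the computational bottleneck that the paper's variant is designed to reduce.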