Abstract:
This study examines the effect of edible coatings, the type of oil used, and the cooking method on the fat content of commercially available French fries. In contrast to earlier studies that examined laboratory-prepared French fries, this study assesses commercially available French fries and cooking oils. It also measured the fat content of oven-baked French fries, comparing the two cooking methods in addition to comparing the oil uptake of the different coatings. The study found that the type of oil used had a significant impact on the final oil content of the uncoated and seasoned fries. Fries coated in modified food starch and fried in peanut or soy oil appeared to have higher oil content than those fried in corn oil or baked, but the difference was not statistically significant. Additionally, the fat content of French fries with hydrocolloid coatings prepared in corn oil was not significantly different from that of French fries with the same coating that were baked.
Abstract:
My dissertation consists of three essays. The central theme of these essays is the psychological factors and biases that affect the portfolio allocation decision. The first essay, entitled “Are women more risk-averse than men?”, examines the gender difference in risk aversion as revealed by actual investment choices. Using a sample that controls for biases in the level of education and finance knowledge, there is evidence that when individuals have the same level of education, irrespective of their knowledge of finance, women are no more risk-averse than their male counterparts. However, the gender-risk aversion relation is also a function of age, income, wealth, marital status, race/ethnicity and the number of children in the household. The second essay, entitled “Can diversification be learned?”, investigates whether investors who have superior investment knowledge are more likely to actively seek diversification benefits and are less prone to allocation biases. Results of cross-sectional analyses suggest that knowledge of finance increases the likelihood that an investor will efficiently allocate his direct investments across the major asset classes, invest in foreign assets, and hold a diversified equity portfolio. However, there is no evidence that investors who are more financially sophisticated make superior allocation decisions in their retirement savings. The final essay, entitled “The demographics of non-participation”, examines the factors that affect the decision not to hold stocks. The results of probit regression models indicate that when individuals are highly educated, the decision not to participate in the stock market is less related to demographic factors. In particular, when individuals have attained at least a college degree and have advanced knowledge of finance, they are significantly more likely to invest in equities either directly or indirectly through mutual funds or their retirement savings.
There is also evidence that the decision not to hold stocks is motivated by short-term market expectations and the most recent investment experience. The findings of these essays should add to the body of research that seeks to reconcile what investors actually do (positive theory) with what traditional theories of finance predict they should do (normative theory).
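The probit models mentioned above map a linear index of covariates to a participation probability through the standard normal CDF. A minimal sketch in Python; the coefficient values below are entirely hypothetical and are not estimates reported in the dissertation:

```python
import math

def probit_probability(index: float) -> float:
    """Probability under a probit link: Phi(x'beta), the standard
    normal CDF evaluated at the linear index."""
    return 0.5 * (1.0 + math.erf(index / math.sqrt(2.0)))

# Hypothetical coefficients for illustration only: intercept,
# college-degree indicator, advanced finance-knowledge indicator.
beta = {"const": -0.8, "college": 0.6, "finance": 0.9}

def participation_probability(college: int, finance: int) -> float:
    index = beta["const"] + beta["college"] * college + beta["finance"] * finance
    return probit_probability(index)

print(round(participation_probability(0, 0), 3))  # baseline individual
print(round(participation_probability(1, 1), 3))  # degree + finance knowledge
```

Under these made-up coefficients, education and finance knowledge raise the predicted probability of stock-market participation, which is the qualitative pattern the essay reports.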
Abstract:
Petri nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flow but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, for example by using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand, and they are therefore more useful for modeling complex systems.

There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool.

For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Nets (PrT nets) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main modeling contribution of this dissertation is a software tool that supports the formal modeling capabilities of this framework.

For analysis, this framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast but covers only some execution paths of an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state-explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main analysis contribution of this dissertation is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool that supports the formal analysis capabilities of this framework.

The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
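The tradeoff between bounded and exhaustive checking can be illustrated with a toy bounded reachability check. Real BMC encodes the k-step unrolling of the transition relation as a SAT/SMT formula; this sketch instead brute-forces the same bounded question on a trivial token-counting "net" (the example system and all names are illustrative, not from the dissertation):

```python
def bmc_reachable(initial, successors, is_bad, bound):
    """Bounded reachability: is a 'bad' state reachable from an initial
    state within `bound` transition firings? Returns a minimal witness
    path, or None if no bad state is found within the bound."""
    for s in initial:
        if is_bad(s):
            return (s,)
    frontier = [(s, (s,)) for s in initial]
    seen = set(initial)
    for _ in range(bound):
        nxt_frontier = []
        for state, path in frontier:
            for nxt in successors(state):
                if is_bad(nxt):
                    return path + (nxt,)
                if nxt not in seen:
                    seen.add(nxt)
                    nxt_frontier.append((nxt, path + (nxt,)))
        frontier = nxt_frontier
    return None

# Toy 'net': a single place holding n tokens; one transition adds a
# token, another removes one when the place is marked.
succ = lambda n: [n + 1] + ([n - 1] if n > 0 else [])
print(bmc_reachable({0}, succ, lambda n: n >= 3, bound=2))  # → None
print(bmc_reachable({0}, succ, lambda n: n >= 3, bound=3))  # → (0, 1, 2, 3)
```

With bound 2 the violation is missed (incomplete coverage); raising the bound to 3 finds a counterexample path, which mirrors how BMC trades completeness for efficiency.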
Abstract:
OBJECTIVE: To examine the relationships among reported medical advice, diabetes education, health insurance and health behavior of individuals with diabetes by race/ethnicity and gender. METHOD: Secondary analysis of data (N = 654) for adults ages > or = 21 years with diabetes acquired through the National Health and Nutrition Examination Survey (NHANES) for the years 2007-2008, comparing Black, non-Hispanics (BNH) and Mexican-Americans (MA) with White, non-Hispanics (WNH). The NHANES survey design is a stratified, multistage probability sample of the civilian noninstitutionalized U.S. population. Sample weights were applied in accordance with NHANES specifications using the complex sample module of IBM SPSS version 18. RESULTS: The findings revealed statistically significant differences in the medical advice participants reported receiving. BNH [OR = 1.83 (1.16, 2.88), p = 0.013] were more likely than WNH to report being told to reduce fat or calories. Similarly, BNH [OR = 2.84 (1.45, 5.59), p = 0.005] were more likely than WNH to report that they were told to increase their physical activity. Mexican-Americans were less likely to self-monitor their blood glucose than WNH [OR = 2.70 (1.66, 4.38), p < 0.001]. There were differences by race/ethnicity in reporting recent diabetes education. Black, non-Hispanics were twice as likely to report receiving diabetes education as WNH [OR = 2.29 (1.36, 3.85), p = 0.004]. Having recent diabetes education increased the likelihood of performing several diabetes self-management behaviors independent of race. CONCLUSIONS: There were significant differences in the medical advice reported for diabetes care by race/ethnicity. The results suggest ethnic variations in patient-provider communication, which may be a consequence of patients' health beliefs as well as length of visit and access to healthcare.
These findings demonstrate the need for government-sponsored programs, with a patient-centered approach, to augment usual medical care for diabetes. Moreover, the results suggest that public policy is needed to require the provision of diabetes education at least every two years by public health insurance programs and to recommend this provision to all private insurance companies.
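The odds ratios and confidence intervals quoted above have the standard logistic-regression form: OR = exp(β), with a Wald interval exp(β ± z·SE). A small sketch; the coefficient and standard error below are back-calculated for illustration (they are consistent with, but not taken from, the reported OR of 1.83 (1.16, 2.88)):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and Wald confidence interval from a logistic-regression
    coefficient and its standard error: OR = exp(beta), CI = exp(beta +/- z*SE)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient and SE for the 'told to reduce fat or
# calories' comparison (illustrative values only).
or_, lo, hi = odds_ratio_ci(0.605, 0.231)
print(f"OR = {or_:.2f} ({lo:.2f}, {hi:.2f})")  # → OR = 1.83 (1.16, 2.88)
```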
Abstract:
The potential of solid-phase microextraction (SPME) in the analysis of explosives is demonstrated. A sensitive, rapid, solventless and inexpensive method for the analysis of explosives and explosive odors from solid and liquid samples has been optimized using SPME followed by HPLC and GC/ECD. SPME involves the extraction of the organic components of debris samples into sorbent-coated silica fibers, which can be transferred directly to the injector of a gas chromatograph. SPME/HPLC requires a special desorption apparatus to elute the extracted analyte onto the column at high pressure. Results for GC/ECD are presented and compared to the results gathered using HPLC analysis. Controllable variables, including fiber chemistry, adsorption and desorption temperatures, extraction time, and desorption time, have been optimized for various high explosives.
Abstract:
A comprehensive investigation of sensitive ecosystems in South Florida, with the main goal of determining the identity, spatial distribution, and sources of both organic biocides and trace elements in different environmental compartments, is reported. This study presents the development and validation of a method for the fractionation and isolation from surface water of twelve polar acidic herbicides commonly applied in the vicinity of the study areas, including 2,4-D, MCPA, dichlorprop, mecoprop and picloram. Solid-phase extraction (SPE) was used to isolate the analytes from abiotic matrices containing large amounts of dissolved organic material. Atmospheric-pressure ionization (API) with electrospray ionization in negative mode (ESI-) in a quadrupole ion trap mass spectrometer was used to characterize the herbicides of interest. The application of laser ablation ICP-MS (LA-ICP-MS) to the analysis of soils and sediments is also reported. The analytical performance of the method was evaluated on certified standards and real soil and sediment samples. Residential soils were analyzed to evaluate the feasibility of using this powerful technique as a routine, rapid method to monitor potentially contaminated sites. Forty-eight sediments were also collected from semi-pristine areas in South Florida to screen baseline levels of bioavailable elements in support of risk evaluation. The LA-ICP-MS data were used to perform a statistical evaluation of elemental composition as a tool for environmental forensics. A LA-ICP-MS protocol was also developed and optimized for the elemental analysis of a wide range of elements in polymeric filters containing atmospheric dust. A quantitative strategy based on internal and external standards allowed for rapid determination of airborne trace elements in filters containing both contemporary African dust and local dust emissions.
These distributions were used to qualitatively and quantitatively assess differences in composition and to establish provenance and fluxes to protected regional ecosystems such as coral reefs and national parks.
Abstract:
Produced water is a by-product of offshore oil and gas production and is released in large volumes when platforms are actively processing crude oil. Some pollutants are not typically removed by conventional oil/water separation methods and are discharged with produced water. Oil and grease can be found dispersed in produced water in the form of tiny droplets, and polycyclic aromatic hydrocarbons (PAHs) are commonly found dissolved in produced water. Both can have acute and chronic toxic effects in marine environments, even at low exposure levels. The analysis of the dissolved and dispersed phases is a priority, but effort is required to meet the necessary detection limits. There are several methods for the analysis of produced water for dispersed oil and dissolved PAHs, all of which have advantages and disadvantages. In this work, EPA Method 1664 and APHA Method 5520 C for the determination of oil and grease are examined and compared. For the detection of PAHs, EPA Method 525 and PAH MIPs are compared and the results evaluated. APHA Method 5520 C, the Partition-Infrared Method, is a liquid-liquid extraction procedure with IR determination of oil and grease. For spiked samples of artificial seawater, extraction efficiency ranged from 85 – 97%. Linearity was achieved in the range of 5 – 500 mg/L. This is a single-wavelength method and is unsuitable for quantification of aromatics and other compounds that lack sp³-hybridized carbon atoms. EPA Method 1664 is the liquid-liquid extraction of oil and grease from water samples followed by gravimetric determination. When distilled water spiked with reference oil was extracted by this procedure, extraction efficiency ranged from 28.4 – 86.2%, and %RSD ranged from 7.68 – 38.0%. EPA Method 525 uses solid-phase extraction with analysis by GC-MS, and was performed on distilled water and water from St. John's Harbour, all spiked with naphthalene, fluorene, phenanthrene, and pyrene.
The limits of detection in harbour water were 0.144, 3.82, 0.119, and 0.153 µg/L, respectively. Linearity was obtained in the range of 0.5 – 10 µg/L, and %RSD ranged from 0.36% (fluorene) to 46% (pyrene). Molecularly imprinted polymers (MIPs) are sorbent materials made selective by polymerizing functional monomers and crosslinkers in the presence of a template molecule, usually the analytes of interest or related compounds. They can adsorb and concentrate PAHs from aqueous environments and can be combined with methods of analysis including GC-MS, LC-UV-Vis, and desorption electrospray ionization (DESI)-MS. This work examines MIP-based methods as well as the previously mentioned methods currently used by the oil and gas industry and government environmental agencies. MIPs are shown to give results consistent with other methods and are a low-cost alternative that improves ease, throughput, and sensitivity. PAH MIPs were used to determine naphthalene spiked into ASTM artificial seawater, as well as produced water from an offshore oil and gas operation. Linearity was achieved in the range studied (0.5 – 5 mg/L) for both matrices, with R² = 0.936 for seawater and R² = 0.819 for produced water. The %RSD for seawater ranged from 6.58 – 50.5% and for produced water from 8.19 – 79.6%.
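The linearity (R²) and precision (%RSD) figures quoted throughout can be computed as follows; the calibration data in this sketch are hypothetical, not the thesis measurements:

```python
import math

def linear_r2(xs, ys):
    """Coefficient of determination for an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def percent_rsd(values):
    """Relative standard deviation (sample SD over mean), in percent."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical naphthalene calibration: concentration (mg/L) vs. response.
conc = [0.5, 1.0, 2.0, 3.0, 5.0]
resp = [11.0, 24.0, 44.0, 69.0, 108.0]
print(round(linear_r2(conc, resp), 3))
print(round(percent_rsd([10.2, 10.9, 11.4]), 1))
```

A high R² over the studied range is what the thesis reports as "linearity", and %RSD of replicate determinations is the precision figure quoted for each analyte.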
Abstract:
Based on the quantitative study of diatoms and radiolarians, summer sea-surface temperature (SSST) and sea ice distribution were estimated from 122 sediment core localities in the Atlantic, Indian and Pacific sectors of the Southern Ocean to reconstruct the last glacial environment at the EPILOG (19.5-16.0 ka or 23,000-19,000 cal yr B.P.) time slice. The statistical methods applied include the Imbrie and Kipp Method, the Modern Analog Technique and the General Additive Model. Summer SSTs reveal greater surface-water cooling than reconstructed by CLIMAP (Geol. Soc. Am. Map Chart. Ser. MC-36 (1981) 1), reaching a maximum (4-5 °C) in the present Subantarctic Zone of the Atlantic and Indian sectors. The reconstruction of maximum winter sea ice (WSI) extent is in accordance with CLIMAP, showing an expansion of the WSI field by around 100% compared to the present. Although only limited information is available, the data clearly show that CLIMAP strongly overestimated the glacial summer sea ice extent. As a result of the northward expansion of Antarctic cold waters by 5-10° of latitude and a relatively small displacement of the Subtropical Front, thermal gradients in the northern zone of the Southern Ocean were steepened during the last glacial. This reconstruction may, however, not apply to the Pacific sector: the few data available indicate reduced cooling in the southern Pacific and suggest a non-uniform cooling of the glacial Southern Ocean.
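Of the statistical methods named, the Modern Analog Technique is the simplest to sketch: it estimates a paleotemperature by averaging the SSTs of the k modern core-top samples whose microfossil assemblages are most similar to the fossil assemblage, commonly under the squared chord distance. A toy illustration with made-up three-taxon relative abundances (not the study's 122-core data set):

```python
import math

def squared_chord_distance(a, b):
    """Dissimilarity between two relative-abundance assemblages."""
    return sum((math.sqrt(x) - math.sqrt(y)) ** 2 for x, y in zip(a, b))

def mat_estimate(fossil, modern_assemblages, modern_ssts, k=3):
    """Modern Analog Technique: average the SSTs of the k modern samples
    whose assemblages are most similar to the fossil assemblage."""
    ranked = sorted(range(len(modern_ssts)),
                    key=lambda i: squared_chord_distance(fossil, modern_assemblages[i]))
    return sum(modern_ssts[i] for i in ranked[:k]) / k

# Toy three-taxon relative abundances and their modern summer SSTs (°C).
modern = [[0.70, 0.20, 0.10],
          [0.50, 0.30, 0.20],
          [0.10, 0.30, 0.60],
          [0.05, 0.25, 0.70]]
ssts = [2.0, 4.0, 12.0, 14.0]
print(mat_estimate([0.08, 0.28, 0.64], modern, ssts, k=2))  # → 13.0
```

The fossil assemblage most resembles the two warm-water samples, so the estimate is their mean SST; a real application uses many taxa and a large calibration data set.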
Abstract:
Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is of great importance, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows for the rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and D-(+)-trehalose 6,6′-dibehenate (TDB). The developed method is rapid and highly sensitive, with good linearity and response consistency (R² > 0.993 for the four lipids tested). The corresponding limits of detection (LOD) and quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for quantifying lipids within liposome formulations without the need for lipid extraction.
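The paper's exact LOD/LOQ criterion is not stated in this abstract, but a common ICH-style convention derives both from the calibration slope and the standard deviation of the response. A sketch with hypothetical inputs (not the paper's values):

```python
def lod_loq(sd_response: float, slope: float):
    """ICH-style limits from the standard deviation of the response and
    the calibration slope: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sd_response / slope, 10.0 * sd_response / slope

# Hypothetical response SD and calibration slope for illustration.
lod, loq = lod_loq(sd_response=0.012, slope=0.36)
print(f"LOD = {lod:.2f} mg/mL, LOQ = {loq:.2f} mg/mL")  # → LOD = 0.11 mg/mL, LOQ = 0.33 mg/mL
```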
Abstract:
We present a theoretical description of the generation of ultra-short, high-energy pulses in two laser cavities driven by periodic spectral filtering or dispersion management. Critical to the intra-cavity dynamics are the nontrivial phase profiles generated and their periodic modification by either spectral filtering or dispersion management. For laser cavities with a spectral filter, the theory gives a simple geometrical description of the intra-cavity dynamics and provides a simple, efficient method for optimizing cavity performance. In the dispersion-managed cavity, analysis shows that the generated self-similar behavior is governed by the porous media equation with a rapidly varying, mean-zero diffusion coefficient, whose solution is the well-known Barenblatt similarity solution with parabolic profile.
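For reference, one standard form of this limit is sketched below, assuming the one-dimensional porous media equation with exponent $m = 2$ (the value for which the Barenblatt profile is parabolic); the paper's exact variables and scalings may differ:

```latex
% Porous media equation with a rapidly varying, mean-zero coefficient:
u_t = d(t)\,\bigl(u^2\bigr)_{xx}, \qquad \langle d \rangle = 0 .

% In a suitably rescaled effective time \tau, the self-similar
% Barenblatt solution for m = 2 has the parabolic profile
u(x,\tau) = \tau^{-1/3}\left( C - \frac{x^{2}}{12\,\tau^{2/3}} \right)_{+},
```

where $C$ is fixed by conservation of $\int u \, dx$ and $(\cdot)_+$ denotes the positive part, so the profile is an inverted parabola with compactly supported edges.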
Abstract:
The cyclic phosphazene trimers [N3P3(OC6H5)5OC5H4N·Ti(Cp)2Cl][PF6] (3), [N3P3(OC6H4CH2CN·Ti(Cp)2Cl)6][PF6]6 (4), [N3P3(OC6H4-But)5(OC6H4CH2CN·Ti(Cp)2Cl)][PF6] (5), [N3P3(OC6H5)5C6H4CH2CN·Ru(Cp)(PPh3)2][PF6] (6), [N3P3(OC6H5)5C6H4CH2CN·Fe(Cp)(dppe)][PF6] (7) and N3P3(OC6H5)5OC5H4N·W(CO)5 (8) were prepared and characterized. As a model, the simple compounds [HOC5H5N·Ti(Cp)2Cl]PF6 (1) and [HOC6H4CH2CN·Ti(Cp)2Cl]PF6 (2) were also prepared and characterized. Pyrolysis of the organometallic cyclic trimers in air yields metallic nanostructured materials, which according to transmission and scanning electron microscopy (TEM/SEM), energy-dispersive X-ray microanalysis (EDX), and IR data, can be formulated as either a metal oxide, metal pyrophosphate or a mixture in some cases, depending on the nature and quantity of the metal, characteristics of the organic spacer and the auxiliary substituent attached to the phosphorus cycle. Atomic force microscopy (AFM) data indicate the formation of small island and striate nanostructures. A plausible formation mechanism which involves the formation of a cyclomatrix is proposed, and the pyrolysis of the organometallic cyclic phosphazene polymer as a new and general method for obtaining metallic nanostructured materials is discussed.
Abstract:
We have harnessed two reactions catalyzed by the enzyme sortase A and applied them to generate new methods for the purification and site-selective modification of recombinant protein therapeutics.
We utilized native peptide ligation, a well-known function of sortase A, to attach a small-molecule drug specifically to the carboxy terminus of a recombinant protein. By combining this reaction with the unique phase behavior of elastin-like polypeptides, we developed a protocol that produces homogeneously labeled protein-small molecule conjugates using only centrifugation. The same reaction can be used to produce unmodified therapeutic proteins simply by substituting a single reactant. The isolated proteins or protein-small molecule conjugates do not carry any exogenous purification tags, eliminating the potential influence of such tags on bioactivity. Because both unmodified and modified proteins are produced by a general, chromatography-free process that is the same for any protein of interest, the time, effort, and cost associated with protein purification and modification are greatly reduced.
We also developed an innovative method that attaches a tunable number of drug molecules to any recombinant protein of interest in a site-specific manner. Although the ability of sortase A to carry out native peptide ligation is widely used, we demonstrated that sortase A is also capable of attaching small molecules to proteins through an isopeptide bond at lysine side chains within a unique amino acid sequence. This reaction, isopeptide ligation, is a new site-specific conjugation method that is orthogonal to all available protein-small molecule conjugation technologies and is the first site-specific conjugation method that attaches the payload to lysine residues. We show that isopeptide ligation can be applied broadly to peptides, proteins, and antibodies using a variety of small-molecule cargoes to efficiently generate stable conjugates. We thoroughly assessed the site-selectivity of this reaction using a variety of analytical methods and showed that in many cases the reaction is site-specific for lysines in flexible, disordered regions of the substrate proteins. Finally, we showed that isopeptide ligation can be used to create clinically relevant antibody-drug conjugates with potent cytotoxicity towards cancerous cells.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, assessing the feasibility of extracting functional connectivity networks with different methods as well as the dynamic variability within some of those methods. Furthermore, this work examines whether valid networks can be produced from a sparsely sampled subset of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with unique assumptions, as well as strengths and limitations, in exploring how the resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
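The seed-based procedure reduces to computing a Pearson correlation map against the seed's time course. A minimal sketch on toy data (not fMRI-scale; the time courses are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def seed_correlation_map(seed_ts, voxel_ts):
    """Seed-based connectivity: correlate the seed region's time course
    with every voxel's time course; voxels with high |r| form the network."""
    return [pearson(seed_ts, v) for v in voxel_ts]

# Toy data: one 'voxel' tracks the seed, one anticorrelates, one is noise.
seed = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0]
voxels = [[1.1, 2.1, 1.4, 3.2, 2.4, 4.1],
          [-1.0, -2.0, -1.6, -2.9, -2.4, -4.1],
          [0.3, 0.1, 0.4, 0.2, 0.5, 0.1]]
rmap = seed_correlation_map(seed, voxels)
print([round(r, 2) for r in rmap])
```

Thresholding |r| in the resulting map is what turns the per-voxel correlations into a single network for the chosen seed, which is why only one network is recovered at a time.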
The second technique, independent component analysis, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, which assumes that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
Using principal component analysis, the dimensionality of the data is compressed to find the directions in which the variance of the data is greatest. This method uses the same basic matrix math as ICA, with a few important differences that are outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
To begin investigating the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients across different window sizes over time. From this technique it is apparent that the correlation with the seed region is not static throughout the scan.
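A sliding-window analysis of this kind can be sketched as follows; the toy time courses are constructed so that the "voxel" tracks the seed early in the scan and anticorrelates later (illustrative data only):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def sliding_window_correlation(seed_ts, voxel_ts, window, step=1):
    """Seed-voxel correlation recomputed inside a window slid along the
    scan, exposing how connectivity strength varies over time."""
    return [pearson(seed_ts[s:s + window], voxel_ts[s:s + window])
            for s in range(0, len(seed_ts) - window + 1, step)]

# Toy time courses: correlated in the first half, anticorrelated later.
seed = [0, 1, 0, 1, 0, 1, 0, 1]
voxel = [0, 1, 0, 1, 1, 0, 1, 0]
out = sliding_window_correlation(seed, voxel, window=4)
print([round(r, 2) for r in out])  # → [1.0, 0.58, 0.0, -0.58, -1.0]
```

The windowed coefficients slide from +1 to -1 across the scan, the kind of non-static seed correlation the thesis observes; window size trades temporal resolution against the stability of each estimate.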
The last method introduced, a point-processing method, is one of the more novel techniques because it does not require analysis of the continuous time course. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
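A point-process analysis in this spirit keeps only the frames where the (already standardized) seed signal crosses a threshold and summarizes each voxel over just those events; a toy sketch with invented numbers:

```python
def point_process_map(seed_z, voxel_ts, threshold=1.0):
    """Point-process sketch: keep only the time points where the
    (assumed z-scored) seed signal exceeds the threshold, then average
    each voxel's signal over just those event frames."""
    events = [t for t, s in enumerate(seed_z) if s > threshold]
    averages = [sum(v[t] for t in events) / len(events) for v in voxel_ts]
    return averages, events

# Toy data: z-scored seed with three supra-threshold events; one voxel
# co-activates with the seed, the other stays flat.
seed_z = [0.1, 1.5, -0.3, 2.0, 0.2, 1.2]
voxels = [[0.0, 2.0, 0.0, 4.0, 0.0, 3.0],
          [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]]
averages, events = point_process_map(seed_z, voxels)
print(events)    # → [1, 3, 5]
print(averages)  # → [3.0, 1.0]
```

Only three of six frames contribute here, which illustrates both the reduced statistical power and the computational savings: the co-activating voxel stands out in the event-triggered average while the flat voxel does not.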
This work compares four unique methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The point-processing method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.