9 results for Theory of proportion and its application
in Digital Commons at Florida International University
Abstract:
Metagenomics is the culture-independent study of genetic material obtained directly from environmental samples. It has become a realistic approach to understanding microbial communities thanks to advances in high-throughput DNA sequencing technologies over the past decade. Current research has shown that different sites of the human body house varied bacterial communities, and there is a strong correlation between an individual’s microbial community profile at a given site and disease. Metagenomics is being applied more often as a means of comparing microbial profiles in biomedical studies. The analysis of data collected using metagenomics can be quite challenging, and a plethora of tools exists for interpreting the results. An automatic analytical workflow for metagenomic analyses has been implemented and tested using synthetic datasets of varying quality. It is able to accurately classify bacteria by taxon and correctly estimate the richness and diversity of each set. The workflow was then applied to the study of the airways microbiome in Chronic Obstructive Pulmonary Disease (COPD). COPD is a progressive lung disease resulting in narrowing of the airways and restricted airflow. Although COPD is the third leading cause of death in the United States, little is known about the differences in lung microbial community profiles between healthy individuals and COPD patients. Bronchoalveolar lavage (BAL) samples were collected from COPD patients, active or ex-smokers, and never-smokers, and sequenced by 454 pyrosequencing. A total of 56 individuals were recruited for the study. Substantial colonization of the lungs was found in all subjects, and differentially abundant genera in each group were identified. These discoveries are promising and may further our understanding of how the structure of the lung microbiome is modified as COPD progresses. It is also anticipated that the results will eventually lead to improved treatments for COPD.
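A minimal sketch of the richness and Shannon diversity summaries mentioned in this abstract, computed from per-genus read counts; this is an illustration only, not the dissertation's workflow, and the genus names and counts are hypothetical.

import math

def richness(counts):
    # Number of taxa observed at least once.
    return sum(1 for c in counts if c > 0)

def shannon_diversity(counts):
    # Shannon index H' = -sum(p_i * ln p_i) over observed taxa.
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical read counts for a single BAL sample, keyed by genus.
sample = {"Streptococcus": 410, "Prevotella": 350, "Haemophilus": 120, "Veillonella": 95}
counts = list(sample.values())
print(richness(counts), round(shannon_diversity(counts), 3))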
Abstract:
The organizational authority of the Papacy in the Roman Catholic Church and the permanent membership of the UN Security Council are distinct from institutions that are commonly compared with the UN, such as the Concert of Europe and the League of Nations, in that these institutional organs possessed strong authoritative and veto powers. Both organs also owe their strong authority at their founding to a need for stability: the Papacy after the crippling of the Western Roman Empire, and the P-5 to deal with the insecurities of the post-WWII world. While the P-5 still possesses similar authoritative powers within the Council as it did after WWII, the historical authoritative powers of the Papacy within the Church were debilitated to such a degree that by the time of the Reformation in Europe, condemnations of practices within the Church itself were not effective. This paper will analyze major challenges to the authoritative powers of the Papacy, from the crowning of Charlemagne to the beginning of the Reformation, and compare the analysis to challenges affecting the authoritative powers of the P-5 since its creation. From research conducted thus far, I hypothesize that common themes affecting the authoritative powers of the P-5 and the Papacy would include: major changes in the institution’s organization (e.g., the Avignon Papacy and Japan’s bid to become a permanent member); the decline in power of actors supporting the institutional organ (e.g., the Holy Roman Empire and the P-5 members); and ideological clashes affecting the institution’s normative power (e.g., the Great Western Schism and Cold War politics).
Abstract:
This research is part of continued efforts to correlate the hydrology of East Fork Poplar Creek (EFPC) and Bear Creek (BC) with the long-term distribution of mercury within the overland, subsurface, and river sub-domains. The main objective of this study was to add a sedimentation module (ECO Lab) capable of simulating the reactive-transport mercury exchange mechanisms within sediments and porewater throughout the watershed. The enhanced model was then applied to a Total Maximum Daily Load (TMDL) mercury analysis for EFPC. That application used historical precipitation, groundwater levels, river discharges, and mercury concentration data that were retrieved from government databases and input to the model. The model was executed, with reduced computational time, to predict flow discharges, total mercury concentrations, and flow duration and mercury mass rate curves at key monitoring stations under various hydrological and environmental conditions and scenarios. The computational results provided insight into the relationship between discharges and mercury mass rate curves at various stations throughout EFPC, which is important to best understand and support the management of mercury contamination and remediation efforts within EFPC.
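As an illustration of the two summary curves mentioned in this abstract (not the ECO Lab/TMDL model itself), the sketch below builds a flow duration curve from a discharge record and a mercury mass rate from paired concentrations, assuming discharges in m^3/s and concentrations in ng/L; all values are hypothetical.

import numpy as np

def flow_duration_curve(discharge_m3s):
    # Return (exceedance probability in %, discharge sorted high to low)
    # using the Weibull plotting position p = rank / (n + 1).
    q = np.sort(np.asarray(discharge_m3s))[::-1]
    ranks = np.arange(1, len(q) + 1)
    exceedance = 100.0 * ranks / (len(q) + 1)
    return exceedance, q

def mercury_mass_rate(discharge_m3s, conc_ng_per_L):
    # Instantaneous load in mg/day:
    # Q [m^3/s] * C [ng/L] * 1000 L/m^3 * 86400 s/day * 1e-6 mg/ng.
    return np.asarray(discharge_m3s) * np.asarray(conc_ng_per_L) * 1000 * 86400 * 1e-6

# Hypothetical daily values at one monitoring station.
q = [0.8, 1.2, 0.5, 2.4, 0.9, 1.7]
c = [120, 95, 150, 60, 110, 80]
p, q_sorted = flow_duration_curve(q)
print(p.round(1), q_sorted)
print(mercury_mass_rate(q, c).round(1))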
Abstract:
In essay 1 we develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noise component is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
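A hedged numerical sketch of the informativeness measure defined in this abstract, under the decomposition Var(r) = Var(i) + Var(u) + 2 Cov(i, u) with informativeness = Var(i) / Var(r); the simulated components below are illustrative and are not the essays' estimator.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intraday return components: information arrivals (i) and noise (u),
# deliberately correlated so the covariance term is not restricted to zero.
i = rng.normal(0.0, 0.010, size=5000)
u = 0.3 * i + rng.normal(0.0, 0.006, size=5000)
r = i + u                                   # observed (deseasonalized) returns

var_i, var_u = i.var(), u.var()
cov_iu = np.cov(i, u, bias=True)[0, 1]
var_r = r.var()

informativeness = var_i / var_r
print(round(var_i + var_u + 2 * cov_iu, 8), round(var_r, 8))  # decomposition check
print(round(informativeness, 3))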
Abstract:
Since the 1950s, the theory of deterministic and nondeterministic finite automata (DFAs and NFAs, respectively) has been a cornerstone of theoretical computer science. In this dissertation, our main object of study is minimal NFAs. In contrast with minimal DFAs, minimal NFAs are computationally challenging: first, there can be more than one minimal NFA recognizing a given language; second, the problem of converting an NFA to a minimal equivalent NFA is NP-hard, even for NFAs over a unary alphabet. Our study is based on the development of two main theories, inductive bases and partials, which in combination form the foundation for an incremental algorithm, ibas, to find minimal NFAs. An inductive basis is a collection of languages with the property that it can generate (through union) each of the left quotients of its elements. We prove a fundamental characterization theorem which says that a language can be recognized by an n-state NFA if and only if it can be generated by an n-element inductive basis. A partial is an incompletely specified language. We say that an NFA recognizes a partial if its language extends the partial, meaning that the NFA's behavior is unconstrained on unspecified strings; it follows that a minimal NFA for a partial is also minimal for its language. We therefore direct our attention to minimal NFAs recognizing a given partial. Combining inductive bases and partials, we generalize our characterization theorem, showing that a partial can be recognized by an n-state NFA if and only if it can be generated by an n-element partial inductive basis. We apply our theory to develop and implement ibas, an incremental algorithm that finds minimal partial inductive bases generating a given partial. In the case of unary languages, ibas can often find minimal NFAs of up to 10 states in about an hour of computing time; with brute-force search this would require many trillions of years.
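For readers unfamiliar with the objects involved, here is a tiny illustrative NFA with acceptance checked by simulating the set of reachable states; this is background only, not the ibas algorithm or the inductive-basis construction, and the 2-state unary automaton is hypothetical.

def nfa_accepts(transitions, start_states, accept_states, word):
    # transitions: dict mapping (state, symbol) -> set of successor states.
    current = set(start_states)
    for symbol in word:
        nxt = set()
        for q in current:
            nxt |= transitions.get((q, symbol), set())
        current = nxt
    return bool(current & set(accept_states))

# Unary NFA over {'a'} accepting exactly the words of even length.
delta = {(0, 'a'): {1}, (1, 'a'): {0}}
print(nfa_accepts(delta, {0}, {0}, ""))      # True  (length 0)
print(nfa_accepts(delta, {0}, {0}, "aaa"))   # False (length 3)
print(nfa_accepts(delta, {0}, {0}, "aaaa"))  # True  (length 4)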
Abstract:
Background: Blacks have a higher incidence of diabetes and its related complications. Self-rated health (SRH) and perceived stress indicators are associated with chronic diseases. The aim of this study was to examine the associations between SRH, perceived stress, and diabetes status among two Black ethnicities. Materials and Methods: The cross-sectional study included 258 Haitian Americans and 249 African Americans, with (n = 240) and without (n = 267) type 2 diabetes (N = 507). Recruitment was performed by community outreach. Results: Haitian Americans were less likely to report ‘fair to poor’ health than African Americans [OR = 0.58 (95% CI: 0.35, 0.95), P = 0.032]; yet Haitian Americans had greater perceived stress than African Americans (P = 0.002). Having diabetes was associated with ‘fair to poor’ SRH [OR = 3.14 (95% CI: 2.09, 4.72), P < 0.001] but not with perceived stress (P = 0.072). Being Haitian American (P = 0.023), being female (P = 0.003), and having ‘poor or fair’ SRH (P < 0.001) were positively associated with perceived stress (Nagelkerke R² = 0.151). Conclusion: The association of perceived stress with ‘poor or fair’ SRH suggests that screening for perceived stress should be considered part of routine medical care, although further studies are required to confirm our results. The findings support the need for treatment plans that are patient-centered, culturally relevant, and attentive to psychosocial issues.
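As a worked illustration of how an odds ratio and 95% confidence interval of the kind reported above are computed from a 2x2 table; the cell counts below are hypothetical and are not the study's data.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed cases, b = exposed non-cases,
    # c = unexposed cases, d = unexposed non-cases.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 'fair to poor' SRH (exposure) vs. type 2 diabetes (outcome).
print(tuple(round(x, 2) for x in odds_ratio_ci(150, 90, 90, 177)))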