9 results for normal coordinate analysis
in Digital Commons at Florida International University
Abstract:
Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have produced new topologies and switching patterns for single-stage power conversion that are well suited to SE sources and energy storage devices. The current source inverter (CSI) topology, together with a newly proposed switching pattern, can convert a low dc voltage to the line ac voltage in a single stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies.

A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, to obtain satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task.

This research applies the state-space averaging technique to the SSBI to develop its state-space-averaged model under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
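For readers unfamiliar with the technique, the following is a generic sketch of state-space averaging and small-signal linearization for a converter that alternates between two switched states; the matrices A1, A2, B1, B2 and the duty cycle d are placeholders, not the dissertation's actual SSBI model.

```latex
% State-space averaging: the converter obeys \dot{x} = A_1 x + B_1 u for a
% fraction d of each switching period and \dot{x} = A_2 x + B_2 u for the
% remaining fraction 1-d. Averaging over one period gives
\dot{x} = \underbrace{\bigl[\, d A_1 + (1-d) A_2 \,\bigr]}_{A(d)} x
        + \underbrace{\bigl[\, d B_1 + (1-d) B_2 \,\bigr]}_{B(d)} u
% Small-signal model: perturb about the operating point,
% x = X + \tilde{x}, \quad u = U + \tilde{u}, \quad d = D + \tilde{d},
% substitute into the averaged model, and discard products of perturbations:
\dot{\tilde{x}} = A(D)\,\tilde{x} + B(D)\,\tilde{u}
               + \bigl[\,(A_1 - A_2)\,X + (B_1 - B_2)\,U \,\bigr]\,\tilde{d}
```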
Abstract:
Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasuries is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for fixed time is called the yield curve; the aggregate of these cross-sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

Two complementary approaches to the study of yield curve evolution are taken here: principal components analysis and wavelet analysis. In both approaches the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations from a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies reflect the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution analysis is presented, including the dedicated Perl programs and the details of the portfolio metrics and the specially adapted wavelet construction. The result is more robust statistics that provide balance to the more fragile principal components analysis.
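As an illustration of the principal components step, here is a minimal Python sketch that diagonalizes the covariance matrix of simulated yield-curve shifts and reports the variance captured by the leading components. The synthetic data and the level/slope/curvature factor shapes are illustrative assumptions, not the dissertation's Treasury data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic yield-curve shifts: rows = days, columns = discretized maturities.
# Shifts are built from three classic factors (level, slope, curvature) plus noise.
maturities = np.linspace(0.25, 30.0, 40)                    # years
level = np.ones_like(maturities)
slope = (maturities - maturities.mean()) / np.ptp(maturities)
curve = slope**2 - (slope**2).mean()
factors = np.column_stack([level, slope, curve])            # shape (40, 3)

n_days = 1000
loadings = rng.normal(scale=[5.0, 2.0, 1.0], size=(n_days, 3))   # basis points
shifts = loadings @ factors.T + rng.normal(scale=0.5, size=(n_days, len(maturities)))

# Principal components analysis: diagonalize the covariance matrix of the shifts.
cov = np.cov(shifts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                      # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]          # sort descending

explained = eigvals / eigvals.sum()
print("variance explained by top 6 PCs:", explained[:6].round(4))
print("cumulative:", explained[:6].cumsum().round(4))
```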
Abstract:
Microarray technology provides a high-throughput technique for studying gene expression. Microarrays can help us diagnose different types of cancers, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data; thus, effective data processing and analysis are critical for making reliable inferences from the data.

The first part of the dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify a set of genes that best differentiate between samples. A comparative study of different classification tools was performed, and the best combinations of gene selection and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed the other gene selection methods. The classifiers based on random forests, neural network ensembles, and K-nearest neighbors (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior.

The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples. In such situations, it is important to combine results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (Fisher's inverse chi-square method, the Logit method, Stouffer's Z-transform method, and the Liptak-Stouffer weighted Z-method) were investigated. These pooling techniques were applied to the problem of identifying cell cycle-regulated genes in two different yeast species, and improved sets of cell cycle-regulated genes were identified.

The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
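To make the pooling step concrete, here is a minimal Python sketch of two of the four techniques named above, Fisher's inverse chi-square method and Stouffer's Z-transform method, applied to p-values from independent experiments. The example p-values are made up for illustration.

```python
import numpy as np
from scipy import stats

def fisher_pool(pvalues):
    """Fisher's inverse chi-square method: -2 * sum(log p) follows a chi-square
    distribution with 2k degrees of freedom under the joint null of k
    independent experiments."""
    p = np.asarray(pvalues, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    return stats.chi2.sf(statistic, df=2 * len(p))

def stouffer_pool(pvalues, weights=None):
    """Stouffer's Z-transform method: convert p-values to z-scores and combine.
    With non-uniform weights this becomes the Liptak-Stouffer weighted Z-method."""
    p = np.asarray(pvalues, dtype=float)
    z = stats.norm.isf(p)                       # one-sided z-scores
    w = np.ones_like(z) if weights is None else np.asarray(weights, dtype=float)
    z_combined = (w * z).sum() / np.sqrt((w**2).sum())
    return stats.norm.sf(z_combined)

# Hypothetical p-values for one gene from three independent experiments.
p_vals = [0.04, 0.10, 0.03]
print("Fisher pooled p:  ", fisher_pool(p_vals))
print("Stouffer pooled p:", stouffer_pool(p_vals))
# scipy provides the same combinations via stats.combine_pvalues(p_vals, method="fisher").
```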
Abstract:
In this study we identified key genes that are critical in the development of astrocytic tumors. A meta-analysis of microarray studies comparing normal tissue to astrocytoma revealed a set of 646 genes differentially expressed in the majority of astrocytomas. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grades I–IV), and 'key genes' within each grade were identified. The genes found to be most influential in the development of the highest grade of astrocytoma, glioblastoma multiforme, were COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated except MPP2, which was down-regulated. These 10 genes were able to predict tumor status with 96–100% confidence using logistic regression, cross-validation, and support vector machine analysis. The Markov blanket genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone, and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes, EGFR, COL4A1, and CDK4 in particular, appeared to be potential 'hubs of activity'. Modified expression of these 10 Markov blanket genes increases the lifetime risk of developing glioblastoma compared to the normal population, and the risk estimates increased dramatically with joint effects of four or more Markov blanket genes. Joint interaction effects of 4, 5, 6, 7, 8, 9, or 10 Markov blanket genes produced increases of 9%, 13%, 20.9%, 26.7%, 52.8%, 53.2%, 78.1%, and 85.9%, respectively, in the lifetime risk of developing glioblastoma compared to the normal population. In summary, it appears that modified expression of several 'key genes' may be required for the development of glioblastoma. Further studies are needed to validate these 'key genes' as useful tools for early detection and novel therapeutic options for these tumors.
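For context on the prediction step, this minimal scikit-learn sketch cross-validates a logistic regression and a support vector machine on synthetic expression data standing in for a 10-gene panel. The data, and the use of scikit-learn itself, are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for expression of a 10-gene panel in tumor vs. normal samples.
X, y = make_classification(n_samples=200, n_features=10, n_informative=8,
                           n_redundant=2, random_state=0)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("support vector machine", SVC(kernel="rbf")),
]:
    # Standardize expression values, then evaluate with 10-fold cross-validation.
    clf = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```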
Abstract:
C-reactive protein (CRP), a normally occurring human plasma protein, may become elevated as much as 1,000-fold during disease states involving acute inflammation or tissue damage. Through its calcium-dependent binding to phosphorylcholine, CRP has been shown to potentiate the activation of complement, stimulate phagocytosis, and opsonize certain microorganisms. Using a flow cytometric functional ligand-binding assay, I have demonstrated that a monocyte population in human peripheral blood and specific human-derived myelomonocytic cell lines reproducibly bind an evolutionarily conserved conformational pentraxin epitope on human CRP through a mechanism that does not involve its ligand, phosphorylcholine.

A variety of cell lines at different stages of differentiation were examined. The monocytic cell line THP-1 bound the most CRP, followed by U937 and KG-1a cells. The HL-60 cell line was induced toward either the granulocyte or the monocyte pathway with DMSO or PMA, respectively. Untreated HL-60 cells and DMSO-treated cells did not bind CRP, while cells treated with PMA showed increased binding of CRP, similar to U937 cells. T-cell- and B-cell-derived lines were negative.

Inhibition studies with limulin and human serum amyloid P (SAP) demonstrated that the binding site is a conserved pentraxin epitope. The calcium requirement for binding indicated that the cells recognize a conformational form of CRP. Phosphorylcholine did not inhibit the reaction; therefore, the possibility that CRP had bound to damaged membranes with exposed phosphorylcholine sites was discounted.

A flow cytometric study of 81 normal donors demonstrated that a majority of peripheral blood monocytes (67.9% ± 1.3%, mean ± SEM) bound CRP. The percentage of binding was normally distributed and not affected by gender, age, or ethnicity. Whole blood obtained from donors representing a variety of disease states showed a significant reduction in the level of CRP bound by monocytes in donors classified with infection, inflammation, or cancer. This reduction in the monocyte populations binding CRP did not correlate with the concentration of plasma CRP.

The ability of monocytes to specifically bind CRP, combined with the binding reactivity of the protein itself to a variety of phosphorylcholine-containing substances, may represent an important bridge between innate and adaptive immunity.
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas, and minority populations experience the effects of lead poisoning disproportionately.

Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One such method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment.

Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under normal use conditions. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation; a second, unaged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant.

The graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80%-reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A; applying product A to the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80%-reliable life of 19.78 years.

This study shows that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010, an ambitious target that is feasible provided there is efficient application of innovative technology, a goal to which this study aims to contribute.
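The study used Shapiro-Meeker graphical analysis and Cox's log-linear model; as a simpler stand-in for the reliability calculation, this Python sketch fits a Weibull distribution to hypothetical time-to-failure data and reads off the 80%-reliability life (the time by which 20% of specimens are expected to fail). The failure times, acceleration handling, and distributional choice are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical failure times (years, assumed already scaled back to normal-use
# conditions via an acceleration factor from the overstress test).
failure_times = stats.weibull_min.rvs(c=3.0, scale=22.0, size=40, random_state=rng)

# Fit a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(failure_times, floc=0)

# 80%-reliability life: the time t with P(T > t) = 0.80, i.e. the 20th percentile.
b20_life = stats.weibull_min.ppf(0.20, shape, loc=loc, scale=scale)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f} years")
print(f"80%-reliability life: {b20_life:.1f} years "
      f"(vs. the 20-year encapsulation requirement)")
```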
Abstract:
The Convention on Biodiversity (CBD) was created in 1992 to coordinate global governments in protecting biological resources. The CBD has three goals: protection of biodiversity, achievement of sustainable use of biodiversity, and facilitation of equitable sharing of the benefits of biological resources. The goal of protecting biological resources has remained both controversial and difficult to implement, and this study focused primarily on that goal. The research was designed to examine how globally constructed environmental policies are adapted by national governments and then passed down to local levels, where actual implementation takes place. The effectiveness of such policies depends on the extent of actual implementation at local levels. Therefore, compliance was divided and examined at three levels: global, national, and local. The study then developed criteria to measure compliance at each of these levels, and both qualitative and quantitative methods were used to analyze compliance and implementation. The study was guided by three questions broadly examining the critical factors that most influence the implementation of biodiversity protection policies at the global, national, and local levels. Findings show that despite an overall biodiversity deficit of 0.9 hectares per person, global compliance with the CBD goals is currently at 35%. Compliance is lowest at the local level (14%), slightly better at the national level (50%), and much better at the international level (64%). Compliance appears higher at the national and international levels because, at these levels, it is based on paperwork and policy formulation. If implementation at local levels continues to produce such low compliance, overall conservation outcomes can only get worse than they are at present. There are numerous weaknesses and capacity challenges that countries have yet to address in their plans. In order to increase local-level compliance, the study recommends a set of robust policies that build local capacity, incentivize local resource owners, and implement biodiversity protection programs attuned to local needs and aspirations.
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify that relationship by a parameter, the correlation coefficient, which measures its strength; the parameter can then be used to develop a prediction equation and ultimately to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE of ρ. Using this asymptotic normality, we constructed confidence intervals for the correlation coefficient ρ and tested hypotheses about ρ. Through a series of simulations, the performance of the new estimator was studied and compared with estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.
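To illustrate the equal-variance setting, here is a minimal simulation sketch. For a bivariate normal with unknown means, a common variance, and correlation ρ, profiling the likelihood yields the closed-form MLE ρ̂ = 2Sxy / (Sxx + Syy); this is a standard derivation, not quoted from the dissertation. The code compares that estimator with the ordinary Pearson estimator on simulated data.

```python
import numpy as np

rng = np.random.default_rng(2)

def rho_mle_equal_var(x, y):
    """MLE of rho for a bivariate normal with unknown means and a common
    variance: rho_hat = 2*Sxy / (Sxx + Syy), obtained by profiling the
    likelihood over the means and the common variance."""
    dx, dy = x - x.mean(), y - y.mean()
    return 2.0 * (dx * dy).sum() / ((dx**2).sum() + (dy**2).sum())

rho, sigma, n, reps = 0.6, 1.0, 30, 5000
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])

mle, pearson = np.empty(reps), np.empty(reps)
for i in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    mle[i] = rho_mle_equal_var(x, y)
    pearson[i] = np.corrcoef(x, y)[0, 1]

# Compare bias and root-mean-square error of the two estimators.
for name, est in [("equal-variance MLE", mle), ("Pearson r", pearson)]:
    print(f"{name}: bias {est.mean() - rho:+.4f}, "
          f"RMSE {np.sqrt(((est - rho)**2).mean()):.4f}")
```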