Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions across the genome. However, data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited to detecting associations of common variants but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This dissertation used the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to collective analysis of genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the association of the entire spectrum of genetic variation within a genomic segment or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics methods suffer from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with a scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study. The project also proposed a novel concept of gene-gene co-association, in which a gene or genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, leading to the discovery of networks significantly associated with psoriasis.
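The abstract does not include code; as a rough illustration of the FPC idea, the sketch below (Python, hypothetical variable names, a simple Fourier basis standing in for the smoothing step) expands genotype profiles observed along a genomic region onto a smooth basis, extracts functional principal component scores, and jointly tests their association with a quantitative trait. It is a minimal approximation of such methods, not the dissertation's implementation.

```python
# Minimal sketch of a functional-principal-component (FPC) association test.
# Hypothetical names; Fourier basis stands in for the smoothing step.
import numpy as np
from scipy import stats

def fpc_association_test(genotypes, positions, trait, n_components=3):
    """genotypes: (n_subjects, n_variants) dosage matrix observed at `positions`
    (scaled to [0, 1]); trait: (n_subjects,) quantitative trait."""
    # Expand each subject's genotype profile onto a smooth basis.
    basis = np.column_stack(
        [np.ones_like(positions)]
        + [np.sin(2 * np.pi * k * positions) for k in range(1, 4)]
        + [np.cos(2 * np.pi * k * positions) for k in range(1, 4)]
    )                                                    # (n_variants, n_basis)
    coefs, *_ = np.linalg.lstsq(basis, genotypes.T, rcond=None)
    coefs = coefs.T - coefs.T.mean(axis=0)               # centered (n_subjects, n_basis)

    # Functional PCA approximated as PCA of the basis coefficients.
    _, _, vt = np.linalg.svd(coefs, full_matrices=False)
    scores = coefs @ vt[:n_components].T                 # FPC scores per subject

    # Joint association: F-test of the FPC scores against the trait.
    X = np.column_stack([np.ones(len(trait)), scores])
    beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
    rss_full = float(np.sum((trait - X @ beta) ** 2))
    rss_null = float(np.sum((trait - trait.mean()) ** 2))
    df1, df2 = n_components, len(trait) - X.shape[1]
    f_stat = ((rss_null - rss_full) / df1) / (rss_full / df2)
    return f_stat, stats.f.sf(f_stat, df1, df2)

rng = np.random.default_rng(0)
pos = np.sort(rng.uniform(0, 1, size=50))                 # 50 variant positions
geno = rng.integers(0, 3, size=(200, 50)).astype(float)   # 200 subjects, dosages 0/1/2
print(fpc_association_test(geno, pos, np.random.default_rng(1).normal(size=200)))
```

Because the test acts on a handful of smooth component scores rather than on each variant separately, it applies equally to regions containing common or rare variants.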
Abstract:
The Advisory Committee on Immunization Practices (ACIP) develops written recommendations for the routine administration of vaccines to children and adults in the U.S. civilian population. The ACIP is the only entity in the federal government that makes such recommendations. The ACIP elaborates on the selection of its members and rules out concerns regarding its integrity, but fails to provide information about the importance of economic analysis in vaccine selection. ACIP recommendations can have large health and economic consequences. Emphasis on economic evaluation in health is a likely response to severe pressures on federal and state health budgets. This study describes the economic aspects considered by the ACIP when sanctioning a vaccine and reviews the economic evaluations (our economic data) provided for vaccine deliberations. A five-year study period from 2004 to 2009 was adopted, using publicly available data from the ACIP web database. The checklist of Drummond et al. (2005) served as a guide to assess the quality of the economic evaluations presented. Because the checklist is comprehensive, it is unrealistic to expect every ACIP deliberation to meet all of its criteria; for practical purposes we selected seven criteria from Drummond et al. that we judged to be significant. Twenty-four data points (economic evaluations) were obtained over the five-year period. Of these, only five received a score of six, meaning six of the seven checklist items were met; none received a perfect score of seven. Seven of the twenty-four data points received a score of five, and only one received a score as low as two. The type of economic evaluation, the model criterion, and the ICER/QALY criterion were each met at a rate of 0.875 (87.5%), the highest rates among the seven criteria studied. The perspective criterion was met at 0.583 (58.3%), followed by the source and sensitivity-analysis criteria, both at 0.541 (54.1%); the discount-factor criterion was met at 0.250 (25.0%). Economic analysis is not a novel concept to the ACIP: it has been practiced and presented at these meetings on a regular basis for more than five years. The ACIP's stated goal is to use good-quality epidemiologic, clinical, and economic analyses to help policy makers choose among the alternatives presented and thus reach better-informed decisions. As seen in our study, the economic analyses over the years are inconsistent, and this large variability, coupled with the lack of a standardized format, may compromise the utility of the economic information for decision-making. While making recommendations, the ACIP takes into account all available information about a vaccine, so it is vital that standardized, high-quality economic information be provided at ACIP meetings. Our study may provide a call for the ACIP to further investigate deficiencies within the system and thereby improve the economic evaluation data presented.
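The reported rates are simple proportions of the 24 evaluations meeting each criterion. The short sketch below (Python) shows the arithmetic; the per-criterion counts are back-calculated from the reported percentages and are therefore an assumption, not figures taken from the study.

```python
# Criterion-met rates as proportions of the 24 ACIP economic evaluations.
# Counts are inferred from the percentages reported in the abstract (assumption).
criteria_met_counts = {
    "type of economic evaluation": 21,  # 21/24 = 0.875
    "model": 21,
    "ICER/QALY": 21,
    "perspective": 14,                  # 14/24 = 0.583
    "source": 13,                       # 13/24 = 0.541
    "sensitivity analysis": 13,
    "discount factor": 6,               # 6/24 = 0.250
}
n_evaluations = 24
for criterion, met in criteria_met_counts.items():
    print(f"{criterion}: {met}/{n_evaluations} = {met / n_evaluations:.3f}")
```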
Abstract:
This dissertation develops and tests a comparative effectiveness methodology based on a novel application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology for expressing a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis discriminates the individual observations into relative risk classes via the DEA-PerT methodology. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was then tested for classifying new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance for assessing and implementing treatments and strategies to improve the delivery and performance of health systems.
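As a rough illustration of the tier-assignment step, the sketch below (Python, hypothetical data and names) assigns a new observation to the tier whose members it is closest to under the Mahalanobis distance. The dissertation's actual DEA-PerT construction of the tiers is not reproduced here; the tiers are simulated.

```python
# Assign a new entrant to the nearest performance tier using Mahalanobis distance.
# Hypothetical sketch; tiers would come from the DEA-PerT step in the dissertation.
import numpy as np

def mahalanobis_tier(new_obs, tier_members):
    """tier_members: dict mapping tier label -> (n_i, p) array of member observations."""
    best_tier, best_dist = None, np.inf
    for tier, members in tier_members.items():
        mean = members.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(members, rowvar=False))
        diff = new_obs - mean
        dist = float(np.sqrt(diff @ cov_inv @ diff))
        if dist < best_dist:
            best_tier, best_dist = tier, dist
    return best_tier, best_dist

rng = np.random.default_rng(0)
tiers = {"tier 1": rng.normal(0.0, 1.0, size=(30, 3)),
         "tier 2": rng.normal(2.0, 1.0, size=(30, 3))}
print(mahalanobis_tier(np.array([1.8, 2.1, 1.9]), tiers))   # expected: tier 2
```

A kNN rule would instead vote among the k closest individual members rather than comparing distances to tier centroids.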
Abstract:
Proton therapy is growing increasingly popular due to its superior dose characteristics compared with conventional photon therapy. Protons travel a finite range in the patient body and then stop, delivering no dose beyond their range. However, because the range of a proton beam is heavily dependent on the tissue density along its beam path, uncertainties in patient setup position and in the inherent range calculation can degrade the dose distribution significantly. Despite these challenges, which are unique to proton therapy, current management of uncertainties during treatment planning for proton therapy has been similar to that of conventional photon therapy. The goal of this dissertation research was to develop a treatment planning method and a plan evaluation method that address proton-specific issues regarding setup and range uncertainties. Treatment plan design method adapted to proton therapy: currently, for proton therapy using a scanning beam delivery system, setup uncertainties are largely accounted for by geometrically expanding a clinical target volume (CTV) to a planning target volume (PTV). However, a PTV alone cannot adequately account for range uncertainties coupled to misaligned patient anatomy in the beam path, since it does not account for the change in tissue density. To remedy this problem, we proposed a beam-specific PTV (bsPTV) that accounts for the change in tissue density along the beam path due to these uncertainties. Our proposed method was successfully implemented, and its superiority over the conventional PTV was shown through a controlled experiment. Furthermore, we showed that the bsPTV concept can be incorporated into beam angle optimization for better target coverage and normal tissue sparing for a selected lung cancer patient. Treatment plan evaluation method adapted to proton therapy: the dose-volume histogram of the clinical target volume (CTV), or of any other volume of interest, at the time of planning does not represent the most probable dosimetric outcome of a given plan, as it does not include the uncertainties mentioned above. Currently, the PTV is used as a surrogate of the CTV's worst-case scenario for target dose estimation. However, because proton dose distributions are subject to change under these uncertainties, the validity of the PTV analysis method is questionable. To remedy this problem, we proposed the use of statistical parameters to quantify uncertainties both on the dose-volume histogram and on the dose distribution directly. The robust plan analysis tool was successfully implemented to compute both the expectation value and the standard deviation of dosimetric parameters of a treatment plan under the uncertainties. For 15 lung cancer patients, the proposed method was used to quantify the dosimetric difference between the nominal situation and its expected value under the uncertainties.
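As a rough numerical illustration of the robust-evaluation idea (not the dissertation's tool), the sketch below (Python, hypothetical scenario data) computes the expectation value and standard deviation of a dose-volume metric across a set of setup/range uncertainty scenarios, here assumed equally probable.

```python
# Expectation and standard deviation of a DVH metric over uncertainty scenarios.
# Hypothetical sketch; scenario doses would come from recomputing the plan under
# shifted setup positions and perturbed proton ranges.
import numpy as np

def v_dose(voxel_doses, threshold):
    """Fraction of target voxels receiving at least `threshold` Gy (e.g. V70)."""
    return float(np.mean(voxel_doses >= threshold))

def robust_metric(scenario_doses, threshold, weights=None):
    """scenario_doses: (n_scenarios, n_voxels) doses to the CTV under each scenario."""
    values = np.array([v_dose(d, threshold) for d in scenario_doses])
    weights = np.full(len(values), 1.0 / len(values)) if weights is None else weights
    expectation = float(np.sum(weights * values))
    std = float(np.sqrt(np.sum(weights * (values - expectation) ** 2)))
    return expectation, std

rng = np.random.default_rng(1)
nominal = np.full(1000, 72.0)                               # idealized uniform CTV dose (Gy)
scenarios = nominal + rng.normal(0.0, 2.5, size=(9, 1000))  # 9 perturbed scenarios (toy)
print(robust_metric(scenarios, threshold=70.0))
```

The same expectation/standard-deviation summary can be attached to any dosimetric parameter, which is what allows the nominal plan to be compared against its expected behavior under uncertainty.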
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase, including selecting the proper candidate features on which to base the model and selecting the most appropriate tools to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of the time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data are represented by the standard one-value-per-variable paradigm and are widely employed in a host of clinical models and tools; they are often represented by a number in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to time series data elements. The first of these is the raw data elements, which are represented by multiple values per variable and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes.
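As an illustration of turning a time-series analysis result into a latent candidate feature (a sketch under assumed variable names and made-up numbers, not the paper's implementation), the snippet below reduces a window of vital-sign observations to a single trend feature, the least-squares slope, which can then sit alongside conventional point-in-time features in a predictive model.

```python
# Reduce a window of time-series observations to a trend feature (slope).
# Hypothetical sketch: the window duration and resolution are design-phase choices.
import numpy as np

def trend_slope(timestamps_min, values):
    """Least-squares slope of `values` over time (units per minute)."""
    t = np.asarray(timestamps_min, dtype=float)
    y = np.asarray(values, dtype=float)
    slope, _intercept = np.polyfit(t, y, deg=1)
    return float(slope)

# Example: heart-rate samples over a 30-minute window (made-up numbers).
minutes = [0, 5, 10, 15, 20, 25, 30]
heart_rate = [128, 126, 125, 121, 118, 112, 104]
print(trend_slope(minutes, heart_rate))   # negative slope -> possible deterioration signal
```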
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data and to build and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of the preprocessing that must occur before data can be presented to the model. Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies for each step, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% when the trend analysis was included. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy compared with the baseline multivariate model, but diminished classification accuracy compared with adding just the trend analysis features (i.e., without the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve performance beyond that achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
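A minimal sketch (Python/pandas, with hypothetical column names and grid resolution) of two of the preprocessing issues named above: anchoring a record to a reference time and imputing irregular observations onto the fixed grid specified during the design phase. It illustrates the idea only, not the paper's pipeline.

```python
# Align observations to a reference time and impute onto a fixed time grid.
# Hypothetical sketch; column names, grid duration, and resolution are assumptions.
import pandas as pd

def to_fixed_grid(obs, reference_time, duration_min=60, resolution_min=5):
    """obs: DataFrame with columns ['timestamp', 'value'] for one variable.
    Returns values on a fixed grid ending at `reference_time`."""
    obs = obs.set_index("timestamp").sort_index()
    grid = pd.date_range(end=reference_time,
                         periods=duration_min // resolution_min + 1,
                         freq=f"{resolution_min}min")
    # Reindex onto the grid, imputing gaps by carrying the last observation forward.
    return obs["value"].reindex(grid.union(obs.index)).ffill().reindex(grid)

ref = pd.Timestamp("2024-01-01 12:00")
raw = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 11:02", "2024-01-01 11:17",
                                 "2024-01-01 11:40", "2024-01-01 11:58"]),
    "value": [130, 126, 119, 108],
})
print(to_fixed_grid(raw, ref))
```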
Abstract:
The early Eocene epoch was characterized by extreme global warmth, which in terrestrial settings was expressed as an expansion of near-tropical vegetation belts into the high latitudes. During the middle to late Eocene, global cooling caused the retreat of tropical vegetation to lower latitudes, and in high-latitude settings near-tropical vegetation was replaced by temperate floras. This floral change has recently been traced as far south as Antarctica, where, along the Wilkes Land margin, paratropical forests thrived during the early Eocene and temperate Nothofagus forests developed during the middle Eocene. Here we provide both qualitative and quantitative palynological data on this floral turnover based on a sporomorph record recovered at Integrated Ocean Drilling Program (IODP) Site U1356 off the Wilkes Land margin. Following the nearest-living-relative concept and based on a comparison with modern vegetation types, we examine the structure and diversity patterns of the Eocene vegetation along the Wilkes Land margin. Our results indicate that the early Eocene forests along the Wilkes Land margin were characterized by a diverse canopy composed of plants that today occur in tropical settings; their richness pattern was similar to that of present-day forests in New Caledonia. The middle Eocene forests had a canopy dominated by Nothofagus and exhibited richness patterns similar to those of modern Nothofagus forests in New Zealand.
Abstract:
Organic petrologic and geochemical analyses were performed on modern and Quaternary organic-carbon-poor deep-sea sediments from the Equatorial Atlantic. The study area covers depositional settings from the West African margin (ODP Site 959) through the Equatorial Divergence (ODP Site 663) to the pelagic Equatorial Atlantic. The response of organic matter (OM) deposition to Quaternary climatic cycles is discussed for ODP Sites 959 and 663, and the results are compared with a concept established for fossil deep-sea environments [Littke and Sachsenhofer, 1994 doi:10.1021/ef00048a041]. Organic geochemical results from Equatorial Atlantic deep-sea deposits provide new insights into the distribution of sedimentary OM in response to continental distance, atmospheric and oceanographic circulation, and the depositional processes controlling sedimentation under modern and past glacial-interglacial conditions. The inventory of macerals in deep-sea deposits is limited by mechanical breakdown of particles, degree of oxidation, and selective remineralization of labile (mostly marine) OM. Nevertheless, organic petrology has great potential for paleoenvironmental studies, especially as a proxy for quantitative information on the relative abundance of marine vs. terrigenous OM. Discrepancies between quantitative data obtained from microscopic and isotopic (delta13Corg) analyses were observed, depending on the stratigraphic level and depositional setting. The strongest offset between the two records was found close to the continent and during glacial periods, suggesting a coupling with wind-borne terrigenous OM from central Africa. Since African dust source areas are covered by C4 grasses, the supply of isotopically heavy OM is assumed to have caused the difference between the microscopic and isotopic records.
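For context, estimates of marine vs. terrigenous OM from bulk delta13Corg typically rest on a two-endmember mixing calculation; a sketch is given below (Python). The endmember values are illustrative assumptions, not values taken from this study, and the comment notes the C4 bias that the abstract discusses.

```python
# Two-endmember mixing estimate of the marine fraction of organic matter
# from bulk delta13Corg. Endmember values below are illustrative assumptions.
def marine_fraction(d13c_sample, d13c_marine=-20.0, d13c_terrigenous=-27.0):
    f = (d13c_sample - d13c_terrigenous) / (d13c_marine - d13c_terrigenous)
    return min(max(f, 0.0), 1.0)   # clamp to [0, 1]

# A C4-dominated terrigenous source (isotopically heavier delta13C) would shift
# this estimate toward lower marine fractions, which is the bias discussed above.
print(marine_fraction(-22.5))      # ~0.64 with these illustrative endmembers
```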
Abstract:
The spatial and temporal dynamics of seagrasses have been well studied at the leaf to patch scales; however, the link to landscape- and population-scale dynamics over large spatial extents remains unresolved in seagrass ecology. Traditional remote sensing approaches have lacked the temporal resolution and consistency needed to address this issue. This study uses two high-temporal-resolution time-series of thematic seagrass cover maps to examine the spatial and temporal dynamics of seagrass at both inter- and intra-annual time scales, one of the first studies globally to do so at this scale. Previous work by the authors developed an object-based approach to map seagrass cover-level distribution on the Eastern Banks (~200 km**2), Moreton Bay, Australia, from a long-term archive of Landsat TM and ETM+ images. In this work a range of trend and time-series analysis methods are demonstrated for a time-series of 23 annual maps from 1988 to 2010 and a time-series of 16 monthly maps from 2008 to 2010. Significant new insight is presented regarding the inter- and intra-annual dynamics of seagrass persistence over time, seagrass cover-level variability, seagrass cover-level trajectory, and change in the area of seagrass and cover levels over time. Overall we found no significant decline in total seagrass area on the Eastern Banks, but there was a significant decline in seagrass cover-level condition. A case study of two smaller communities within the Eastern Banks that experienced a decline in both overall seagrass area and condition is examined in detail, highlighting possible differences in environmental and process drivers. We demonstrate how trend and time-series analysis enabled seagrass distribution to be assessed in the context of its spatial and temporal history, providing the ability not only to quantify change but also to describe the type of change. We also demonstrate the potential use of time-series analysis products to investigate seagrass growth and decline as well as the processes that drive them. This study demonstrates clear benefits over traditional seagrass mapping and monitoring approaches and provides a proof of concept for the use of trend and time-series analysis of remotely sensed seagrass products to benefit current endeavours in seagrass ecology.
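As a rough illustration of the kind of trend analysis applied to such map time-series (a sketch under assumed data shapes and toy values, not the authors' workflow), the snippet below estimates a per-pixel linear trend in seagrass cover level across a stack of annual maps.

```python
# Per-pixel linear trend (cover-level change per year) across annual map layers.
# Hypothetical sketch; real maps would be read from classified Landsat products.
import numpy as np

def per_pixel_trend(stack, years):
    """stack: (n_years, rows, cols) array of cover-level classes; returns a slope map."""
    years = np.asarray(years, dtype=float)
    t = years - years.mean()
    y = stack.reshape(len(years), -1).astype(float)
    y_centered = y - y.mean(axis=0)
    slopes = (t @ y_centered) / (t @ t)          # least-squares slope per pixel
    return slopes.reshape(stack.shape[1:])

years = np.arange(1988, 2011)                    # 23 annual maps
rng = np.random.default_rng(2)
stack = rng.integers(0, 4, size=(len(years), 50, 50))   # toy cover classes 0-3
print(per_pixel_trend(stack, years).shape)       # (50, 50) map of cover-level trends
```

Negative slopes flag pixels where cover-level condition is declining even when the total mapped seagrass area is stable.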
Abstract:
Deglacial reefs from Tahiti (IODP 310) feature a co-occurrence of zooxanthellate corals with microbialites that compose up to 80 vol% of the reef framework. The notion that microbialites tend to form in more nutrient-rich environments has previously led to the concept that such encrustations are considerably younger than the coral framework, that they formed in deeper storeys of the reef edifice, or that they represent severe disturbances of the reef ecosystem. As indicated by their repetitive interbedding with coralline red algae, however, the microbialites of this Tahitian reef succession formed immediately after coral growth under photic conditions. Clearly, the deglacial reef microbialites present in the IODP 310 cores did not follow disturbances such as drowning or suffocation by terrestrial material, and are not "disaster forms". Given that the corals and the microbialites developed in close spatial proximity, highly elevated nutrient levels caused by fluvial or groundwater transport from the volcanic hinterland are an unlikely cause for the exceptionally voluminous development of microbialites. That voluminous deglacial reef microbialites are generally restricted to volcanic islands, however, implies that moderately, and possibly episodically, elevated nutrient levels favored this type of microbialite formation.
Abstract:
A new package called adolist is presented. adolist is a tool to create, install, and uninstall lists of user ado-packages (“adolists”). For example, adolist can create a list of all user packages installed on a system and then install the same packages on another system. Moreover, adolist can be used to put together thematic lists of packages such as, say, a list on income inequality analysis or time-series add-ons, or the list of “41 user ados everyone should know”. Such lists can then be shared with others, who can easily install and uninstall the listed packages using the adolist command.
Abstract:
A contribution is presented, intended to provide theoretical foundations for ongoing efforts to employ global instability theory in the analysis of the classic boundary-layer flow and to address the associated issue of appropriate inflow/outflow boundary conditions for closing the PDE-based global eigenvalue problem in open flows. Starting from a theoretically clean and numerically simple application, in which results are also known analytically and thus serve as guidance for assessing the performance of the numerical methods employed herein, a sequence of issues is systematically built into the target application until we arrive at one representative of the open systems whose instability is presently addressed by global linear theory applied to open flows, an application that is neither tractable theoretically nor straightforward to solve by numerical means. Experience gained along the way is documented. It concerns the quantification of the departure of the numerical solution from the analytical one in the simple problem; the generation of numerical boundary layers at artificially truncated boundaries, no matter how far the latter are placed from the region of highest flow gradients; and, ultimately, the impractically large number of (direct and adjoint) modes necessary to project an arbitrary initial perturbation and follow its temporal evolution by a global analysis approach, a finding which may question the purported robustness, reported in the literature, of the recovery of optimal perturbations as part of global analyses yielding under-resolved eigenspectra.
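As a minimal sketch of the kind of global eigenvalue problem referred to above (a simple 1D model operator with toy parameters, not the paper's boundary-layer formulation), the snippet below discretizes a linear advection-diffusion operator with Dirichlet boundary conditions and computes its eigenvalue spectrum; the choice of domain truncation and boundary conditions directly shapes the computed spectrum.

```python
# Eigenvalue spectrum of a discretized 1D advection-diffusion operator,
# a toy stand-in for a PDE-based global eigenvalue problem.
import numpy as np

def model_operator(n=200, length=10.0, velocity=1.0, viscosity=0.05):
    """Finite-difference matrix for L[u] = -U du/dx + nu d2u/dx2 with u=0 at both ends."""
    dx = length / (n + 1)
    main = -2.0 * viscosity / dx**2 * np.ones(n)
    upper = (viscosity / dx**2 - velocity / (2 * dx)) * np.ones(n - 1)
    lower = (viscosity / dx**2 + velocity / (2 * dx)) * np.ones(n - 1)
    return np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

A = model_operator()
eigenvalues = np.linalg.eigvals(A)
# The least-damped modes govern the temporal evolution exp(lambda * t).
print(np.sort(eigenvalues.real)[-5:])
```

In a genuine open-flow global analysis the matrix comes from a 2D/3D discretization with inflow/outflow conditions, and both direct and adjoint eigenmodes would be needed to project an initial perturbation.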
Abstract:
Innovations in the current interconnected world of organizations have led to a focus on business models as a fundamental statement of direction and identity. Although industry transformations generally emanate from technological changes, recent examples suggest they may also be driven by the introduction of new business models. In the past, different types of airline business models could be clearly separated from each other. However, this has changed in recent years, partly due to the concentration process and partly due to reactions to competitive pressure; it can at least be concluded that in the future the distinction between different business models will be less clear. To advance the use of business models as a concept, it is essential to be able to compare them and to perform analyses that identify the business models with the highest potential. This can contribute substantially to understanding the synergies and incompatibilities between two airlines that are planning a merger, as illustrated by the example of a Swiss Air-Lufthansa merger analysis. The idea is to develop quantitative methods and tools for comparing and analyzing aeronautical/airline business models. The paper identifies available methods of comparing airline business models and lays the groundwork for a quantitative model for comparing them, which can be a useful tool for business model analysis when two airlines merge.
Abstract:
The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications are able to inter-operate and share common information easily. Nevertheless, multilingual semantic applications are still rare, because most online ontologies are monolingual in English. To address this issue, techniques for ontology localisation and translation are needed. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be quite short and linguistically different from free text. In this paper, we propose an approach to enhance machine translation of ontologies by exploiting the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). The presented work is novel in that, to the best of our knowledge, CLESA has not previously been applied within SMT.
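As a rough sketch of the explicit-semantic-analysis idea behind the context-based disambiguation (an illustrative approximation with toy, made-up concept vectors, not the paper's CLESA/SMT pipeline), the snippet below represents the source label's ontology context and each candidate translation as weighted bags of concepts and picks the candidate with the highest cosine similarity.

```python
# Toy explicit-semantic-analysis style disambiguation: score translation candidates
# by cosine similarity between concept vectors of the source ontology context and
# each candidate's context. Concept weights here are made-up illustrative numbers.
import math

def cosine(u, v):
    dot = sum(u.get(c, 0.0) * w for c, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

source_context = {"finance": 0.9, "institution": 0.6, "money": 0.7}   # e.g. label "bank"
candidates = {
    "banco (financial sense)": {"finance": 0.8, "money": 0.9, "loan": 0.5},
    "orilla (river-bank sense)": {"river": 0.9, "geography": 0.7, "water": 0.6},
}
best = max(candidates, key=lambda c: cosine(source_context, candidates[c]))
print(best)   # expected: the financial sense, given the source context
```

In CLESA proper, the concept dimensions come from a multilingual reference corpus such as Wikipedia, which is what allows the source-language context and target-language candidates to be compared in a shared concept space.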