842 results for farm accountancy data network
Abstract:
The Iowa Correctional Offender Network (ICON) is a data collection system that was first deployed in community corrections in 2000 after two years of planning, and was integrated with the institutions in 2004. The purpose of ICON is to collect and organize the data necessary to make informed decisions.
Abstract:
There is growing interest in understanding the role of the non-injured contra-lateral hemisphere in stroke recovery. In the experimental field, histological evidence has been reported that structural changes occur in contra-lateral connectivity and circuits during stroke recovery. In humans, some recent imaging studies indicate that contra-lateral sub-cortical pathways and functional and structural cortical networks remodel after stroke. Structural changes in the contra-lateral networks, however, have never been correlated with clinical recovery in patients. To determine the importance of contra-lateral structural changes in post-stroke recovery, we selected a population of patients with motor deficits after stroke affecting the motor cortex and/or sub-cortical motor white matter. We explored (i) the presence of Generalized Fractional Anisotropy (GFA) changes indicating structural alterations in the motor network of the patients' contra-lateral hemisphere, as well as their longitudinal evolution; (ii) the correlation of GFA changes with the patients' clinical scores, stroke size, and demographic data; and (iii) a predictive model.
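A minimal sketch of the analysis outline above, using synthetic numbers in place of the patients' measurements (all variable names are hypothetical):

# Correlate longitudinal GFA changes with clinical scores, then fit a
# simple predictive model of recovery. Synthetic data; not the authors' code.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_patients = 24                                   # illustrative sample size
delta_gfa = rng.normal(0.02, 0.01, n_patients)    # change in contra-lateral GFA
stroke_size = rng.uniform(1, 50, n_patients)      # lesion volume, cm^3
age = rng.uniform(45, 85, n_patients)
# synthetic clinical recovery score, loosely tied to GFA change
recovery = 10 * delta_gfa - 0.01 * stroke_size + rng.normal(0, 0.1, n_patients)

r, p = pearsonr(delta_gfa, recovery)              # steps (i)-(ii): correlation
print(f"GFA change vs. recovery: r={r:.2f}, p={p:.3f}")

X = np.column_stack([delta_gfa, stroke_size, age])
model = LinearRegression().fit(X, recovery)       # step (iii): predictive model
print("R^2 on training data:", model.score(X, recovery))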
Abstract:
This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems. Summary: This research concerns the development and application of so-called unsupervised learning methods, targeting the analysis of forensic case data and the classification of hyperspectral remote sensing images. First, an unsupervised classification methodology based on the symbolic optimization of an inter-sample distance measure is proposed. This measure is obtained by optimizing a cost function related to the preservation of the neighborhood structure of a point between the space of the initial variables and the space of the principal components. The method is applied to the analysis of forensic data and compared with a range of existing methods. Second, a method based on a joint optimization of the feature selection and classification tasks is implemented in a neural network and applied to various databases, including two hyperspectral images. The neural network is trained with a stochastic gradient algorithm, which makes the technique applicable to very high resolution images. The results show that such a technique can classify very large databases without difficulty and yields results that compare favorably with existing methods.
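A much-simplified stand-in for the second method described above (a functional clustering model trained by stochastic gradient descent): online k-means, where centroids receive per-sample gradient updates. The thesis additionally learns a feature extractor jointly, which this sketch omits:

# Online k-means on synthetic blobs standing in for (hyper)spectral samples.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 5)), rng.normal(4, 1, (500, 5))])
rng.shuffle(X)                                    # shuffle rows in place

k, lr = 2, 0.05
centroids = X[rng.choice(len(X), k, replace=False)].copy()

for epoch in range(5):
    for x in X:                                   # one stochastic step per sample
        j = np.argmin(((centroids - x) ** 2).sum(axis=1))
        centroids[j] += lr * (x - centroids[j])   # gradient step on ||x - c_j||^2

labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print("centroids:\n", centroids.round(2))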
Abstract:
The Iowa Communications Network (ICN) is the country's premier distance learning and state government network, committed to providing Iowans with convenient, equal access to education, government, and healthcare. The Network makes it possible for Iowans, physically separated by location, to interact in an efficient, creative, and cost-effective manner. ICN provides high-speed Internet, data, video conferencing, and voice (phone) services to authorized users under the Code of Iowa, including K-12 schools, higher education, hospitals, state and federal government, National Guard armories, and libraries.
Abstract:
The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive. The DNA, on the other hand, stores in the genome the information on how to produce the different proteins. Regulating gene transcription is thus the first important step that can affect the life of a cell, modify its functions, and shape its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to disease. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, characterized by uncontrolled cellular proliferation, invasiveness of healthy tissues, and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation, along with applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected in the conservation of their binding sites across mammals and in the permissive underlying chromatin states; it represents an important control mechanism in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERα) and study the influence of hERα in regulating transcription. Upon estrogen signaling, hERα binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERα, we conduct an in silico analysis of the sequences underlying the ChIP sites using the position weight matrices (PWMs) of the hERα partners FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-PET) data on hERα binding with this sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERα binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERα and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes.
The second group shows strong binding of hERα and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERα-mediated induction of gene expression. Our work supports the model of hERα activating gene expression from distal binding sites by interacting with promoter-bound TFs, such as SP1. hERα has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete; this result is important for better understanding how hERα can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, in which we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must coherently behave as an activator, or as a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method was tested on the DREAM2 challenge of reconstructing a five-gene regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable, and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
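A minimal sketch of the sign-coherence idea, not the authors' implementation: here the activator/repressor signs are assumed known (the thesis infers them), and the sign constraint is obtained by flipping repressor columns and fitting a nonnegative Lasso:

# Sign-coherent sparse regression: each TF acts with one sign on all targets.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_time, n_tfs = 40, 5
tf_activity = rng.normal(size=(n_time, n_tfs))          # TF time series
signs = np.array([+1, +1, -1, +1, -1])                  # assumed coherent signs
true_w = signs * np.array([0.8, 0.0, 0.6, 0.0, 0.4])    # sparse true network row
target = tf_activity @ true_w + rng.normal(0, 0.1, n_time)

X_flipped = tf_activity * signs                          # make all effects positive
model = Lasso(alpha=0.05, positive=True).fit(X_flipped, target)
w_hat = signs * model.coef_                              # restore original signs
print("recovered coefficients:", w_hat.round(2))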
Abstract:
Termed the “silent epidemic,” traumatic brain injury (TBI) is the most debilitating outcome of injury, characterized by the irreversibility of its damage, long-term effects on quality of life, and healthcare costs. The latest data available from the CDC estimate that nationally, 52,000 people die each year from TBI. In Iowa, TBI is a major public health problem, and the numbers and rates of hospitalizations and emergency department (ED) visits due to TBI are steadily increasing. From 2006 to 2008, there were on average 545 TBI deaths per year, constituting nearly 30 percent of all injury deaths; TBI also accounted for ten percent (1,591) of injury hospitalizations and seven percent (17,696) of injury-related ED visits. The state of Iowa has been supporting secondary prevention services for TBI survivors for several years. An Iowa organization that has made a significant effort in assisting TBI survivors is the Brain Injury Association of Iowa (BIAIA). The BIAIA administers the IBIRN program in cooperation with the Iowa Department of Public Health (IDPH) through HRSA TBI Implementation grant funding and state appropriations.
Proteomic data from human cell cultures refine mechanisms of chaperone-mediated protein homeostasis.
Abstract:
In the crowded environment of human cells, the folding of nascent polypeptides and the refolding of stress-unfolded proteins are error prone. Accumulation of cytotoxic misfolded and aggregated species may cause cell death, tissue loss, degenerative conformational diseases, and aging. Nevertheless, young cells effectively express a network of molecular chaperones and folding enzymes, termed here "the chaperome," which can prevent the formation of potentially harmful misfolded protein conformers and use the energy of adenosine triphosphate (ATP) to rehabilitate already formed toxic aggregates into native functional proteins. In an attempt to extend knowledge of chaperome mechanisms in cellular proteostasis, we performed a meta-analysis of the human chaperome using high-throughput proteomic data from 11 immortalized human cell lines. Chaperome polypeptides accounted for about 10% of the total protein mass of human cells, half of which were Hsp90s and Hsp70s. Knowledge of the cellular concentrations of and ratios among chaperome polypeptides provided a novel basis for understanding the mechanisms by which the Hsp60, Hsp70, Hsp90, and small heat shock proteins (HSPs), in collaboration with cochaperones and folding enzymes, assist de novo protein folding, import polypeptides into organelles, unfold stress-destabilized toxic conformers, and control the conformational activity of native proteins in the crowded environment of the cell. The proteomic data also provided a means to distinguish between the stable components of chaperone core machineries and dynamic regulatory cochaperones.
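A minimal sketch, with made-up numbers, of the kind of bookkeeping behind the reported ~10% mass fraction: weight per-protein copy numbers by molecular mass and sum over an illustrative chaperome list:

# Mass-fraction bookkeeping on a toy table; values are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "protein":  ["HSP90AA1", "HSPA8", "HSPD1", "ACTB", "GAPDH"],
    "copies":   [2.1e6, 1.8e6, 0.6e6, 5.0e6, 4.0e6],   # made-up copies per cell
    "mass_kda": [85, 71, 61, 42, 36],
})
chaperome = {"HSP90AA1", "HSPA8", "HSPD1"}              # illustrative subset

df["mass"] = df["copies"] * df["mass_kda"]
frac = df.loc[df["protein"].isin(chaperome), "mass"].sum() / df["mass"].sum()
print(f"chaperome mass fraction: {frac:.1%}")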
Abstract:
Background: This study analyzed prognostic factors and treatment outcomes of primary thyroid lymphoma. Patients and Methods: Data were retrospectively collected for 87 patients (53 stage I and 34 stage II) with a median age of 65 years. Fifty-two patients were treated with a single modality (31 with chemotherapy alone and 21 with radiotherapy alone) and 35 with combined modality treatment. Median follow-up was 51 months. Results: Sixty patients had aggressive lymphoma and 27 had indolent lymphoma. The 5- and 10-year overall survival (OS) rates were 74% and 71%, respectively, and the corresponding disease-free survival (DFS) rates were 68% and 64%. Univariate analysis revealed that age, tumor size, stage, lymph node involvement, B symptoms, and treatment modality were prognostic factors for OS, DFS, and local control (LC). Patients with thyroiditis had significantly better LC rates. In multivariate analysis, OS was influenced by age, B symptoms, lymph node involvement, and tumor size, whereas DFS and LC were influenced by B symptoms and tumor size. Compared with single modality treatment, combined modality treatment yielded better 5-year OS, DFS, and LC. Conclusions: Combined modality treatment leads to an excellent prognosis for patients with aggressive lymphoma but does not improve OS and LC in patients with indolent lymphoma.
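A minimal sketch, on synthetic data rather than the study's cohort, of how 5- and 10-year OS rates like those above are typically estimated (Kaplan-Meier, here via the lifelines package):

# Kaplan-Meier estimate of overall survival; follow-up times are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
months = rng.exponential(150, 87).clip(max=160)   # follow-up time, months
event = rng.random(87) < 0.3                      # True = death observed

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event, label="all patients")
print("estimated 5-year OS:", float(kmf.predict(60)))
print("estimated 10-year OS:", float(kmf.predict(120)))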
Abstract:
The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data-exploration tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and thematic feature subsets, in the context of both feature selection and optimal parameter determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
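A minimal sketch of the MKL idea, not the paper's solver: one RBF kernel per terrain feature, combined with weights into a single kernel for SVR. A real MKL method would learn the weights; here they are fixed for illustration:

# Weighted sum of per-feature kernels fed to SVR; data are synthetic.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 200
slope, curvature = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
wind = 3 + 1.5 * slope[:, 0] - 0.5 * curvature[:, 0] + rng.normal(0, 0.2, n)

k_slope = rbf_kernel(slope, gamma=1.0)       # one kernel per terrain feature
k_curv = rbf_kernel(curvature, gamma=1.0)
weights = np.array([0.7, 0.3])               # fixed here; MKL would learn these
K = weights[0] * k_slope + weights[1] * k_curv

model = SVR(kernel="precomputed").fit(K, wind)
print("training R^2:", model.score(K, wind))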
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was produced at 1:10,000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a 20 m × 20 m pixel size was also employed in the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of four terrain factors, extracted from the DEM, that contributed to terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools and expressed as a terrain failure susceptibility map. To estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
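A minimal sketch, not TauDEM, of tracing a flow line down the steepest descent of a DEM grid, the idea used above to derive runout paths (the tiny DEM is illustrative):

# D8-style descent: from each cell, step to the lowest of the 8 neighbours.
import numpy as np

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])                  # tiny illustrative DEM

def flow_path(dem, start):
    path, (r, c) = [start], start
    while True:
        best, step = dem[r, c], None
        for dr in (-1, 0, 1):                   # inspect the 8 neighbours
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if ((dr, dc) != (0, 0) and 0 <= rr < dem.shape[0]
                        and 0 <= cc < dem.shape[1] and dem[rr, cc] < best):
                    best, step = dem[rr, cc], (rr, cc)
        if step is None:                        # pit or local minimum: stop
            return path
        path.append(step)
        r, c = step

print(flow_path(dem, (0, 0)))   # [(0, 0), (1, 1), (2, 2)]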
Abstract:
Although cross-sectional diffusion tensor imaging (DTI) studies have revealed significant white matter changes in mild cognitive impairment (MCI), the utility of this technique in predicting further cognitive decline is debated. Thirty-five healthy controls (HC) and 67 MCI subjects with DTI baseline data were neuropsychologically assessed at one year. Among them, 40 were stable (sMCI; 9 single-domain amnestic, 7 single-domain frontal, 24 multiple-domain) and 27 were progressive (pMCI; 7 single-domain amnestic, 4 single-domain frontal, 16 multiple-domain). Fractional anisotropy (FA) and longitudinal, radial, and mean diffusivity were measured using Tract-Based Spatial Statistics. Statistics included group comparisons and individual classification of MCI cases using support vector machines (SVM). FA was significantly higher in HC compared to MCI in a distributed network including the ventral part of the corpus callosum and right temporal and frontal pathways. There were no significant group-level differences between sMCI and pMCI or between MCI subtypes after correction for multiple comparisons. However, SVM analysis allowed for an individual classification with accuracies up to 91.4% (HC versus MCI) and 98.4% (sMCI versus pMCI). When considering the MCI subgroups separately, the minimum SVM classification accuracy for stable versus progressive cognitive decline was 97.5%, in the multiple-domain MCI group. SVM analysis of DTI data provided highly accurate individual classification of stable versus progressive MCI regardless of MCI subtype, indicating that this method may become an easily applicable tool for early individual detection of MCI subjects evolving to dementia.
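A minimal sketch of the individual-classification step on synthetic FA features, not the study's DTI data: an SVM with cross-validated accuracy separating stable from progressive MCI:

# SVM classification of sMCI vs. pMCI on synthetic tract-wise FA values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
fa_sMCI = rng.normal(0.45, 0.03, (40, 20))      # 40 stable, 20 tract features
fa_pMCI = rng.normal(0.41, 0.03, (27, 20))      # 27 progressive
X = np.vstack([fa_sMCI, fa_pMCI])
y = np.array([0] * 40 + [1] * 27)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.1%}")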
Abstract:
A survey was sent to over 200 Federal, State, and local agencies that might use streamflow data collected by the U.S. Geological Survey in Iowa. A total of 181 forms were returned, and 112 agencies indicated that they use streamflow data. The responses show that streamflow data from the Iowa USGS stream-gaging network, which in 1996 comprised 117 stations, are used by many agencies for many purposes, and that many stations provide streamflow data that fulfill a variety of joint purposes. The median number of respondents per station that use data from the station was 6, and the median number of data-use categories indicated per station was 9. The survey results can be used by agencies that fund the Iowa USGS stream-gaging network to help them decide which stations to continue to support if it becomes necessary to reduce the size of the network.
Abstract:
A firm's competitive advantage can arise from internal resources as well as from an interfirm network. This dissertation investigates the competitive advantage of a firm involved in an innovation network by integrating strategic management theory and social network theory. It develops theory and provides empirical evidence illustrating how a networked firm unlocks network value and appropriates this value in an optimal way according to its strategic purpose. The four inter-related essays in this dissertation provide a framework that sheds light on the extraction of value from an innovation network by managing and designing the network in a proactive manner. The first essay reviews research in social network theory and knowledge transfer management, and identifies the crucial factors of innovation network configuration for a firm's learning performance or innovation output. The findings suggest that network structure, network relationships, and network position all have an impact on a firm's performance. Although the previous literature reveals disagreement about the impact of dense versus sparse structures, as well as strong versus weak ties, case evidence from Chinese software companies shows that dense and strong connections with partners are positively associated with firms' performance. The second essay is a theoretical essay that illustrates the limitations of social network theory in explaining the source of network value and offers a new theoretical model that applies the resource-based view to network environments. It suggests that network configurations, such as network structure, network relationships, and network position, can be considered important network resources. In addition, this essay introduces the concept of network capability and suggests that four types of network capabilities play an important role in unlocking the potential value of network resources and in determining the distribution of network rents between partners. The essay also highlights the contingent effects of network capability on a firm's innovation output and explains how the different impacts of network capability depend on a firm's strategic choices. This new theoretical model has been pre-tested with a case study of the Chinese software industry, which enhances the internal validity of the theory. The third essay addresses the questions of what impact network capability has on firm innovation performance and what the antecedent factors of network capability are. This essay employs a structural equation modelling methodology on a sample of 211 Chinese high-tech firms. It develops a measurement of network capability and reveals that networked firms handle cooperation and coordination with partners at different levels according to their levels of network capability. The empirical results also suggest that IT maturity, openness of culture, the management systems involved, and experience with network activities are antecedents of network capability. Furthermore, a two-group analysis of the role of international partners shows that when there is a culture and norm gap with foreign partners, a firm must mobilize more resources and effort to improve its performance with respect to its innovation network. The fourth essay addresses the way in which network capabilities influence firm innovation performance.
Using hierarchical multiple regression with data from Chinese high-tech firms, the findings suggest that knowledge transfer has a significant partial mediating effect on the relationship between network capabilities and innovation performance. The findings also reveal that the impact of network capabilities varies with the environment and with the strategic decision the firm has made: exploration or exploitation. Network constructing capability has a greater positive impact on, and contributes more to, innovation performance than network operating capability in an exploration network, whereas network operating capability is more important than network constructing capability for innovative firms in an exploitation network. These findings therefore highlight that a firm can proactively shape its innovation network for greater benefit, but when it does so, it should adjust its focus and efforts in accordance with its innovation purposes or strategic orientation.
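A minimal sketch, on synthetic data rather than the dissertation's sample, of hierarchical regression with a mediator as described above: the direct coefficient of capability shrinks once the mediator enters, indicating partial mediation:

# Two-step hierarchical OLS: capability alone, then capability + mediator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 211
capability = rng.normal(size=n)
transfer = 0.6 * capability + rng.normal(0, 0.8, n)       # mediator
performance = 0.3 * capability + 0.5 * transfer + rng.normal(0, 1, n)

step1 = sm.OLS(performance, sm.add_constant(capability)).fit()
X2 = sm.add_constant(np.column_stack([capability, transfer]))
step2 = sm.OLS(performance, X2).fit()
print(f"capability coef, step 1: {step1.params[1]:.2f}")
print(f"capability coef, step 2: {step2.params[1]:.2f}")  # smaller: partial mediation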
Abstract:
As a rigorous combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means of studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series of papers intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g., the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks when constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) to specify graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e., results of the analysis of black toners present on printed or copied documents).
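A minimal sketch, with illustrative numbers only, of the kind of basic network fragment discussed above: a two-node model H -> E evaluated with Bayes' theorem, yielding a posterior and a likelihood ratio for the evidence:

# Two-node fragment H -> E: posterior and likelihood ratio by Bayes' theorem.
prior_h = 0.5                       # P(H): source-level proposition
p_e_given_h = 0.8                   # P(E | H): probability of evidence if H true
p_e_given_not_h = 0.1               # P(E | not H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
posterior_h = p_e_given_h * prior_h / p_e
likelihood_ratio = p_e_given_h / p_e_given_not_h
print(f"P(H | E) = {posterior_h:.3f}, LR = {likelihood_ratio:.1f}")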