77 results for literature-data integration
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without incurring additional computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in ensemble space instead of state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble splits into an outlier and a cluster of M-1 members. Previous works suggested that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and the field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
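The leapfrog-plus-RAW-filter step described in this abstract can be sketched in a few lines. The following Python toy applies it to the oscillation equation dx/dt = iωx; it is an illustration of the filter's published form, not the SPEEDY implementation, and the parameter values (ν = 0.2, α = 0.53) are typical literature choices rather than the dissertation's settings.

```python
import numpy as np

def leapfrog_raw(x0, omega, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the RAW filter.

    alpha = 1 recovers the classical Robert-Asselin filter; alpha = 0.53
    is the value suggested by Williams. Illustrative parameters only.
    """
    x_prev = x0
    x_curr = x0 + dt * 1j * omega * x0  # forward-Euler start-up step
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * 1j * omega * x_curr
        # RAW filter: the displacement d is split between time levels
        # n and n+1 so that the filter does not distort the mean
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_curr = x_curr + alpha * d
        x_next = x_next + (alpha - 1.0) * d
        x_prev, x_curr = x_curr, x_next
    return x_curr
```

For small ω·Δt the filtered scheme damps the spurious computational mode of the leapfrog while leaving the amplitude of the physical oscillation nearly untouched, which is the behaviour the abstract exploits.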
Abstract:
Background and Aims Forest trees directly contribute to carbon cycling in forest soils through the turnover of their fine roots. In this study we aimed to calculate root turnover rates of common European forest tree species and to compare them with the most frequently published values. Methods We compiled available European data and applied various turnover-rate calculation methods to the resulting database. We used the Decision Matrix and the Maximum-Minimum formula as suggested in the literature. Results Mean turnover rates obtained by the combination of sequential coring and the Decision Matrix were 0.86 yr⁻¹ for Fagus sylvatica and 0.88 yr⁻¹ for Picea abies when maximum biomass data were used for the calculation, and 1.11 yr⁻¹ for both species when mean biomass data were used. Using mean biomass rather than maximum resulted in about 30% higher values of root turnover. Using the Decision Matrix to calculate turnover rate doubled the rates compared to the Maximum-Minimum formula. The Decision Matrix, however, makes use of more input information than the Maximum-Minimum formula. Conclusions We propose that calculations using the Decision Matrix with mean biomass give the most reliable estimates of root turnover rates in European forests and should preferentially be used in models and C reporting.
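The biomass-denominator choice discussed in this abstract (mean versus maximum standing biomass) can be made concrete with a short sketch. The function and numbers below are hypothetical illustrations, not the compiled European data or the Decision Matrix itself, which uses more input information than this simple ratio.

```python
def turnover_rate(annual_production, biomass_series, use_mean=True):
    """Fine-root turnover (yr^-1) = annual production / standing biomass.

    biomass_series: sequential-coring biomass estimates (e.g. g m^-2)
    from one year; use_mean selects mean vs maximum biomass as the
    denominator. Hypothetical helper and numbers, not the paper's data.
    """
    if use_mean:
        denom = sum(biomass_series) / len(biomass_series)
    else:
        denom = max(biomass_series)
    return annual_production / denom

# Hypothetical example: 200 g m^-2 yr^-1 production over four corings
rate_mean = turnover_rate(200.0, [180.0, 230.0, 260.0, 210.0])
rate_max = turnover_rate(200.0, [180.0, 230.0, 260.0, 210.0], use_mean=False)
```

Because the mean biomass is necessarily no larger than the maximum, the mean-based rate is always the higher of the two, in line with the abstract's reported ~30% difference.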
Abstract:
The strategic integration of the human resource (HR) function is regarded as crucial in the literature on (strategic) human resource management ((S)HRM). Evidence on the contextual or structural influences on this integration is, however, limited. The structural implications of unionism are particularly intriguing given the evolution of the study of the employment relationship. Pluralism is typically seen as antithetical to SHRM, and unions as an impediment to the strategic integration of HR functions, but there are also suggestions in the literature that unionism might facilitate the strategic integration of HR. This paper deploys large-scale international survey evidence to examine the organization-level influence of unionism on this strategic integration, allowing for other established and plausible influences. The analysis reveals that exceptionally, where the organization-level role of unions is particularly contested, unionism does impede the strategic integration of HR. However, it is the predominance of the facilitation of the strategic integration of HR by unionism which is most remarkable.
Abstract:
Platelets in the circulation are triggered by vascular damage to activate, aggregate and form a thrombus that prevents excessive blood loss. Platelet activation is stringently regulated by intracellular signalling cascades, which when activated inappropriately lead to myocardial infarction and stroke. Strategies to address platelet dysfunction have included proteomics approaches, which have led to the discovery of a number of novel regulatory proteins of potential therapeutic value. Global analysis of platelet proteomes may enhance the outcome of these studies by arranging this information in a contextual manner that recapitulates established signalling complexes and predicts novel regulatory processes. Platelet signalling networks have already begun to be exploited with interrogation of protein datasets using in silico methodologies that locate functionally feasible protein clusters for subsequent biochemical validation. Characterization of these biological systems through analysis of spatial and temporal organization of component proteins is developing alongside advances in the proteomics field. This focused review highlights advances in platelet proteomics data mining approaches that complement the emerging systems biology field. We have also highlighted nucleated cell types as key examples that can inform platelet research. Therapeutic translation of these modern approaches to understanding platelet regulatory mechanisms will enable the development of novel anti-thrombotic strategies.
Abstract:
Glycogen synthase kinase 3 (GSK3, of which there are two isoforms, GSK3α and GSK3β) was originally characterized in the context of regulation of glycogen metabolism, though it is now known to regulate many other cellular processes. Phosphorylation of GSK3α (Ser21) and GSK3β (Ser9) inhibits their activity. In the heart, emphasis has been placed particularly on GSK3β, rather than GSK3α. Importantly, catalytically-active GSK3 generally restrains gene expression and, in the heart, catalytically-active GSK3 has been implicated in anti-hypertrophic signalling. Inhibition of GSK3 results in changes in the activities of transcription and translation factors in the heart and promotes hypertrophic responses, and it is generally assumed that signal transduction from hypertrophic stimuli to GSK3 passes primarily through protein kinase B/Akt (PKB/Akt). However, recent data suggest that the situation is far more complex. We review evidence pertaining to the role of GSK3 in the myocardium and discuss effects of genetic manipulation of GSK3 activity in vivo. We also discuss the signalling pathways potentially regulating GSK3 activity and propose that, depending on the stimulus, phosphorylation of GSK3 is independent of PKB/Akt. Potential GSK3 substrates studied in relation to myocardial hypertrophy include nuclear factors of activated T cells, β-catenin, GATA4, myocardin, CREB, and eukaryotic initiation factor 2Bε. These and other transcription factor substrates putatively important in the heart are considered. We discuss whether cardiac pathologies could be treated by therapeutic intervention at the GSK3 level but conclude that any intervention would be premature without greater understanding of the precise role of GSK3 in cardiac processes.
Abstract:
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
Abstract:
The currently available model-based global data sets of atmospheric circulation are a by-product of the daily requirement of producing initial conditions for numerical weather prediction (NWP) models. These data sets have been quite useful for studying fundamental dynamical and physical processes, and for describing the nature of the general circulation of the atmosphere. However, due to limitations in the early data assimilation systems and inconsistencies caused by numerous model changes, the available model-based global data sets may not be suitable for studying global climate change. A comprehensive analysis of global observations based on a four-dimensional data assimilation system with a realistic physical model should be undertaken to integrate space and in situ observations to produce internally consistent, homogeneous, multivariate data sets for the earth's climate system. The concept is equally applicable for producing data sets for the atmosphere, the oceans, and the biosphere, and such data sets will be quite useful for studying global climate change.
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10⁵ to 10⁷ neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity, see figure 1. MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports the local variation of model parameters and freely chosen connectivity for many thousand triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model is then added to obtain accurate signal expression for a comparison to experimental data. To showcase the synergy of these computational developments, we predict simultaneously EEG and fMRI BOLD responses by adding an established model for neurovascular coupling and convolving "Balloon-Windkessel" hemodynamics.
We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted according to future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identify an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform.
The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
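The authors implement their extraction/normalisation pipeline as an R function; purely as a sketch of the idea, a minimal Python analogue of the log-transform and inter-array scale-normalisation steps might look like the following. The function names and the median-centring choice are assumptions, not the paper's method.

```python
import numpy as np

def normalise_arrays(signal):
    """Log2-transform, then align each array (column) on a common
    median so that inter-array scale differences do not masquerade
    as expression changes.

    signal: 2-D array, rows = probes, columns = replicate arrays.
    A toy analogue of the abstract's R pipeline, not the authors' code.
    """
    logged = np.log2(signal)
    med = np.median(logged, axis=0)      # per-array medians
    return logged - med + med.mean()     # shift arrays to a common scale

def interarray_sd(norm):
    """Mean over probes of the SD across replicate arrays (log2 units),
    the kind of quantity the abstract reports (~0.5 log2 units)."""
    return float(np.mean(np.std(norm, axis=1, ddof=1)))
```

On a pair of replicate arrays differing only by a global scale factor, this normalisation drives the inter-array SD to zero; on real replicates, the residual SD estimates the experimental (non-biological) variability.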
Abstract:
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect, mainly in terms of an autoconversion parameterisation, has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa as found in the satellite data is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld-τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in the satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR-τa relationship show a strong positive correlation between τa and fcld.
The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa, and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of −1.5±0.5 W m⁻². In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd-τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of −0.4±0.2 W m⁻² and a cloudy-sky (aerosol indirect effect) estimate of −0.7±0.5 W m⁻², with a total estimate of −1.2±0.4 W m⁻².
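The model-versus-satellite comparison above rests on statistical relationships such as the regression slope of ln(Nd) on ln(τa). A minimal sketch of such a slope computation follows; it is illustrative only, not the AEROCOM analysis code, and the function name is an assumption.

```python
import numpy as np

def log_log_slope(tau_a, quantity):
    """Least-squares slope of ln(quantity) against ln(tau_a).

    A stand-in for the tau_a-versus-cloud-quantity relationships that
    the study computes consistently for models and satellite data.
    """
    x = np.log(np.asarray(tau_a, dtype=float))
    y = np.log(np.asarray(quantity, dtype=float))
    return float(np.polyfit(x, y, 1)[0])
```

Computing the same slope from model output and from satellite retrievals, with matched sampling, is what makes the comparison "consistent between the models and the satellite data" in the abstract's sense.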
Abstract:
Smart healthcare is a complex domain for systems integration owing to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions depend on smart collaboration among multiple systems for effective communication between departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to conduct an analysis of the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an on-going project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Records (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation, including publications and internal working papers, one year of non-participant observations, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding the integration in the radiology context.
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
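The flavour of the von Neumann analysis described above can be conveyed with the simplest implicit-explicit pairing, forward/backward Euler, applied to a single oscillatory mode. This toy amplification factor is an assumption for illustration, far simpler than the Runge-Kutta IMEX schemes the paper analyses, but it shows how stability and phase are read off from A.

```python
def imex_euler_amplification(omega_exp, omega_imp, dt):
    """Von Neumann amplification factor for forward/backward-Euler IMEX
    on dy/dt = i*omega_exp*y + i*omega_imp*y.

    The update y^{n+1} = y^n + dt*(i*omega_exp*y^n + i*omega_imp*y^{n+1})
    gives A = (1 + i*omega_exp*dt) / (1 - i*omega_imp*dt).
    |A| <= 1 indicates stability; arg(A) carries the numerical phase,
    from which phase and group velocities can be diagnosed.
    """
    return (1.0 + 1j * omega_exp * dt) / (1.0 - 1j * omega_imp * dt)
```

Treating the fast (e.g. vertically propagating acoustic) frequency implicitly keeps |A| below one no matter how large omega_imp·dt becomes, while the explicitly treated frequency remains subject to a time-step restriction; that asymmetry is the motivation for HEVI splitting.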
Abstract:
Purpose – This paper aims to provide a synthetic review of the empirical literature on the multinational enterprise (MNE), subsidiaries and performance. Design/methodology/approach – The paper examines the following: the theoretical and conceptual foundation of multinationality (M) and performance (P) measures; the impact of MNE strategic investment motives on performance; the influence of contextual external and internal environment factors on performance; the strategy to optimize value chain activities of the MNE by cooperating with external partners in an asymmetric network, the key drivers of enhanced shareholder value and the implications of performance; and the need to access primary data provided by firms and managers themselves when analyzing the internal functioning of the MNE and its subsidiaries. Findings – The overall message from this literature review is that empirical research should be designed on the basis of relevant theoretical and conceptual foundations of the performance construct. Originality/value – The paper provides a systematic and synthetic review of theoretical and empirical literature.
Abstract:
The susceptibility of a catchment to flooding is affected by its soil moisture prior to an extreme rainfall event. While soil moisture is routinely observed by satellite instruments, results from previous work on the assimilation of remotely sensed soil moisture into hydrologic models have been mixed. This may have been due in part to the low spatial resolution of the observations used. In this study, the remote sensing aspects of a project attempting to improve flow predictions from a distributed hydrologic model by assimilating soil moisture measurements are described. Advanced Synthetic Aperture Radar (ASAR) Wide Swath data were used to measure soil moisture as, unlike low resolution microwave data, they have sufficient resolution to allow soil moisture variations due to local topography to be detected, which may help to take into account the spatial heterogeneity of hydrological processes. Surface soil moisture content (SSMC) was measured over the catchments of the Severn and Avon rivers in the South West UK. To reduce the influence of vegetation, measurements were made only over homogeneous pixels of improved grassland determined from a land cover map. Radar backscatter was corrected for terrain variations and normalized to a common incidence angle. SSMC was calculated using change detection. To search for evidence of a topographic signal, the mean SSMC from improved grassland pixels on low slopes near rivers was compared to that on higher slopes. When the mean SSMC on low slopes was 30–90%, the higher slopes were slightly drier than the low slopes. The effect was reversed for lower SSMC values. It was also more pronounced during a drying event. These findings contribute to the scant information in the literature on the use of high resolution SAR soil moisture measurement to improve hydrologic models.
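Change detection of the kind used in this study typically places the observed backscatter between dry and wet reference levels for the same pixel. A minimal sketch follows; the study's exact formulation is an assumption on our part.

```python
def ssmc_change_detection(sigma_db, sigma_dry_db, sigma_wet_db):
    """Surface soil moisture content (%) by change detection: the
    position of the observed backscatter (dB) between the driest and
    wettest reference backscatter recorded for the same pixel.

    Assumes backscatter has already been terrain-corrected and
    normalized to a common incidence angle, as described above.
    """
    return 100.0 * (sigma_db - sigma_dry_db) / (sigma_wet_db - sigma_dry_db)
```

Because the dry and wet references are per-pixel, static influences such as surface roughness largely cancel, which is the usual argument for change detection over absolute retrieval.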