101 results for network solution
at Université de Lausanne, Switzerland
Abstract:
Background: The 'database search problem', that is, the strengthening of a case, in terms of probative value, against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and diametrically opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity. Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing, but exclusively formulaic, approaches. Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that provides analysts and discussants with additional modes of interaction, concise representation, and coherent communication.
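For readers who want to experiment with the probabilistic argument discussed in this abstract, the following minimal Python sketch (not taken from the paper, and not a Bayesian network) enumerates the source hypotheses of a commonly discussed idealization of the database search problem: a uniform prior over N potential sources, an innocent-match probability gamma, and error-free profiling. All variable names and numbers are illustrative assumptions.

```python
# Illustrative sketch (not the paper's Bayesian networks): posterior probability
# that the single database match is the source of the crime stain, assuming a
# uniform prior over N potential sources, an innocent-match probability gamma,
# and error-free profiling. Names and numbers are hypothetical.

def posterior_match_is_source(N, n, gamma):
    """Bayes' theorem over the N mutually exclusive source hypotheses.

    Evidence E: one database member X matches, the other n - 1 do not.
      P(E | X is source)               = (1 - gamma) ** (n - 1)
      P(E | other database member)     = 0            (a true source would match)
      P(E | someone outside database)  = gamma * (1 - gamma) ** (n - 1)
    """
    lik_x = (1.0 - gamma) ** (n - 1)
    lik_outside = gamma * (1.0 - gamma) ** (n - 1)
    numerator = lik_x * (1.0 / N)
    evidence = numerator + (N - n) * lik_outside * (1.0 / N)
    return numerator / evidence          # simplifies to 1 / (1 + (N - n) * gamma)

if __name__ == "__main__":
    N, gamma = 1_000_000, 1e-6
    for n in (1, 10_000, 100_000):       # growing database, same population
        print(n, round(posterior_match_is_source(N, n, gamma), 4))
    # The posterior rises with n: excluding more database members slightly
    # strengthens the case against the unique match.
```

Under these simplifying assumptions the posterior reduces to 1 / (1 + (N - n) * gamma), so it increases with the database size n, which is the reputedly counter-intuitive conclusion the abstract refers to.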
Abstract:
Gene-on-gene regulations are key components of every living organism. Dynamical abstract models of genetic regulatory networks help explain the genome's evolvability and robustness. These properties can be attributed to the structural topology of the graph formed by genes, as vertices, and regulatory interactions, as edges. Moreover, the actual interactions of each gene are believed to play a key role in the stability of the structure. With advances in biology, some effort has been devoted to developing update functions in Boolean models that include this recent knowledge. We combine real-life gene interaction networks with novel update functions in a Boolean model. We use two sub-networks of biological organisms, the yeast cell cycle and the mouse embryonic stem cell, as topological support for our system. On these structures, we replace the original random update functions with a novel threshold-based dynamic function in which the promoting and repressing effect of each interaction is considered. We use a third real-life regulatory network, along with its inferred Boolean update functions, to validate the proposed update function. Results of this validation hint at increased biological plausibility of the threshold-based function. To investigate the dynamical behavior of this new model, we visualized the phase transition between order and chaos through the critical regime using Derrida plots. We complement the qualitative nature of Derrida plots with an alternative measure, the criticality distance, which also allows the regimes to be discriminated in a quantitative way. Simulations on both real-life genetic regulatory networks show that there exists a set of parameters that allows the systems to operate in the critical region. This new model includes experimentally derived biological information and recent discoveries, which makes it potentially useful for guiding experimental research. The update function confers additional realism to the model, while reducing the complexity and solution space, thus making it easier to investigate.
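As an illustration of the kind of threshold-based update rule and Derrida-style analysis described above, the sketch below updates a signed regulatory graph synchronously and estimates one point of a Derrida plot. The tie-breaking convention, the random toy network and all parameter values are assumptions made for illustration; they are not the paper's actual rule or data.

```python
# A generic threshold-based Boolean update rule on a signed regulatory graph
# (a sketch of the idea only; the paper's exact rule and tie-breaking may differ).
import numpy as np

rng = np.random.default_rng(0)

def threshold_step(state, W):
    """One synchronous update. W[i, j] = +1 if gene j activates gene i,
    -1 if it represses it, 0 if there is no edge."""
    drive = W @ state                      # promoting minus repressing inputs
    nxt = np.where(drive > 0, 1, np.where(drive < 0, 0, state))  # tie: keep state
    return nxt.astype(np.int8)

def derrida_point(W, hamming_in, n_samples=2000):
    """Average output Hamming distance for random state pairs at a given
    input Hamming distance (one point of a Derrida plot)."""
    n = W.shape[0]
    out = 0.0
    for _ in range(n_samples):
        x = rng.integers(0, 2, n).astype(np.int8)
        y = x.copy()
        flip = rng.choice(n, hamming_in, replace=False)
        y[flip] ^= 1
        out += np.count_nonzero(threshold_step(x, W) != threshold_step(y, W))
    return out / n_samples

if __name__ == "__main__":
    n = 20                                  # toy random signed network
    W = rng.choice([-1, 0, 0, 1], size=(n, n))
    for h in (1, 2, 4, 8):
        print(h, round(derrida_point(W, h), 3))
    # A slope near 1 at small h suggests critical dynamics; >1 chaotic, <1 ordered.
```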
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts means that the problem must be solved from scratch each time. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file sizes grow. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns, so performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach, in which data stay mostly in the CSV files; "zero configuration", with no need to specify a database schema; an implementation in C++ with Boost [1], SQLite [2] and Qt [3] that requires no installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files, which ensure efficient plan execution; effortless support for millions of columns; per-value typing, which makes it easy to use mixed text/number data; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It does not need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
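The core "no copy / zero configuration / index on demand" idea can be illustrated with a toy Python sketch (the actual system is written in C++; the class, its methods and the sample data below are hypothetical, and real query rewriting is omitted in favor of an explicit column list):

```python
# Toy illustration only: columns are materialized into an in-memory SQLite table
# the first time a query touches them, and an index is built on request.
import csv, io, sqlite3

SAMPLE = "id,city,temp\n1,Geneva,12.5\n2,Lausanne,11.0\n3,Bern,9.5\n"

def _coerce(v):
    """Per-value typing: store numbers as numbers, everything else as text."""
    for cast in (int, float):
        try:
            return cast(v)
        except ValueError:
            pass
    return v

class CsvQuery:
    def __init__(self, csv_text):
        self.rows = list(csv.DictReader(io.StringIO(csv_text)))
        self.db = sqlite3.connect(":memory:")
        self.loaded = set()

    def _ensure_columns(self, columns):
        """Materialize only the columns the query actually references."""
        if not self.loaded:
            self.db.execute("CREATE TABLE t (rowid_ INTEGER PRIMARY KEY)")
            self.db.executemany("INSERT INTO t (rowid_) VALUES (?)",
                                [(i,) for i in range(len(self.rows))])
        for c in (c for c in columns if c not in self.loaded):
            self.db.execute(f'ALTER TABLE t ADD COLUMN "{c}"')
            self.db.executemany(f'UPDATE t SET "{c}" = ? WHERE rowid_ = ?',
                                [(_coerce(r[c]), i) for i, r in enumerate(self.rows)])
            self.loaded.add(c)

    def query(self, sql, columns, index_on=None):
        self._ensure_columns(columns)
        if index_on:                         # dynamic index creation on demand
            self.db.execute(f'CREATE INDEX IF NOT EXISTS ix_{index_on} ON t ("{index_on}")')
        return self.db.execute(sql).fetchall()

if __name__ == "__main__":
    q = CsvQuery(SAMPLE)
    print(q.query('SELECT city, temp FROM t WHERE temp > 10 ORDER BY temp',
                  columns=["city", "temp"], index_on="temp"))
```

The per-value coercion step mirrors the "mixed text/numbers" point of the abstract: numeric strings are compared as numbers, everything else as text.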
Abstract:
Arabidopsis thaliana PHO1 is primarily expressed in the root vascular cylinder and is involved in the transfer of inorganic phosphate (Pi) from roots to shoots. To analyze the role of PHO1 in transport of Pi, we have generated transgenic plants expressing PHO1 in ectopic A. thaliana tissues using an estradiol-inducible promoter. Leaves treated with estradiol showed strong PHO1 expression, leading to detectable accumulation of PHO1 protein. Estradiol-mediated induction of PHO1 in leaves from soil-grown plants, in leaves and roots of plants grown in liquid culture, or in leaf mesophyll protoplasts was in all cases accompanied by the specific release of Pi to the extracellular medium as early as 2-3 h after addition of estradiol. Net Pi export triggered by PHO1 induction was enhanced by high extracellular Pi and weakly inhibited by the proton ionophore carbonyl cyanide m-chlorophenylhydrazone. Expression of a PHO1-GFP construct complementing the pho1 mutant revealed GFP expression in punctate structures in the pericycle cells but no fluorescence at the plasma membrane. When expressed in onion epidermal cells or in tobacco mesophyll cells, PHO1-GFP was associated with similar punctate structures that co-localized with the Golgi/trans-Golgi network and uncharacterized vesicles. However, PHO1-GFP could be partially relocated to the plasma membrane in leaves infiltrated with a high-phosphate solution. Together, these results show that PHO1 can trigger Pi export in ectopic plant cells, strongly indicating that PHO1 is itself a Pi exporter. Interestingly, PHO1-mediated Pi export was associated with its localization to the Golgi and trans-Golgi networks, revealing a role for these organelles in Pi transport.
Abstract:
Adaptive immunity is initiated in T-cell zones of secondary lymphoid organs. These zones are organized in a rigid 3D network of fibroblastic reticular cells (FRCs) that are a rich cytokine source. In response to lymph-borne antigens, draining lymph nodes (LNs) expand severalfold in size, but the fate and role of the FRC network during the immune response are not fully understood. Here we show that T-cell responses are accompanied by the rapid activation and growth of FRCs, leading to an expanded but similarly organized network of T-zone FRCs that maintains its vital function for lymphocyte trafficking and survival. In addition, new FRC-rich environments were observed in the expanded medullary cords. FRCs are activated within hours after the onset of inflammation in the periphery. Surprisingly, FRC expansion depends mainly on the trapping of naïve lymphocytes, which is induced by both migratory and resident dendritic cells. Inflammatory signals are not required, as homeostatic T-cell proliferation was sufficient to trigger FRC expansion. Activated lymphocytes are also dispensable for this process, but can enhance the later growth phase. Thus, this study documents the surprising plasticity as well as the complex regulation of FRC networks allowing the rapid LN hyperplasia that is critical for mounting efficient adaptive immunity.
Abstract:
Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. These networks, whose number of nodes can vary from tens to hundreds, are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which various graph metrics depend on the network size. To this end, EEGs from 32 normal subjects were recorded and functional networks of three different sizes were extracted. A state-space based method was used to calculate cross-correlation matrices between different brain regions. These correlation matrices were used to construct binary adjacency connectomes, which were assessed with regard to a number of graph metrics such as clustering coefficient, modularity, efficiency, economic efficiency, and assortativity. We showed that the estimates of these metrics differ significantly depending on the network size. Larger networks had higher efficiency, higher assortativity and lower modularity than smaller networks with the same density. These findings indicate that network size should be considered in any comparison of networks across studies.
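A sketch of such a pipeline, using simulated signals in place of EEG and the networkx library, is given below. The density value, network sizes and metric selection are illustrative assumptions, and "economic efficiency" is omitted because it is not a standard networkx metric.

```python
# Threshold a cross-correlation matrix to a fixed edge density, then compute
# standard graph metrics for several network sizes (simulated data only).
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(1)

def binary_graph_from_signals(signals, density=0.2):
    """signals: array (n_nodes, n_samples). Keep the strongest |r| edges so that
    the resulting binary graph has the requested density."""
    corr = np.abs(np.corrcoef(signals))
    np.fill_diagonal(corr, 0.0)
    n = corr.shape[0]
    n_keep = int(density * n * (n - 1) / 2)
    iu = np.triu_indices(n, k=1)
    cutoff = np.sort(corr[iu])[-n_keep]
    adj = (corr >= cutoff).astype(int)
    return nx.from_numpy_array(adj)

def graph_metrics(G):
    comms = community.greedy_modularity_communities(G)
    return {
        "clustering": nx.average_clustering(G),
        "modularity": community.modularity(G, comms),
        "efficiency": nx.global_efficiency(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
    }

if __name__ == "__main__":
    for n_nodes in (32, 64, 128):          # three "network sizes", same density
        signals = rng.standard_normal((n_nodes, 1000))
        G = binary_graph_from_signals(signals, density=0.2)
        print(n_nodes, {k: round(v, 3) for k, v in graph_metrics(G).items()})
```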
Abstract:
PURPOSE: To better define outcome and prognostic factors in primary pineal tumors. MATERIALS AND METHODS: Thirty-five consecutive patients, diagnosed between 1988 and 2006 at seven academic centers of the Rare Cancer Network, were included. Median age was 36 years. Surgery consisted of biopsy in 12 cases and resection in 21 (type of surgery unknown in 2 cases). All patients underwent radiotherapy and 12 patients also received chemotherapy. RESULTS: Histological subtypes were pineoblastoma (PNB) in 21 patients, pineocytoma (PC) in 8 patients and pineocytoma with intermediate differentiation in 6 patients. Six patients with PNB had evidence of spinal seeding. Fifteen patients relapsed (14 PNB and 1 PC), with PNB cases at higher risk (p = 0.031). Median survival time was not reached. Median disease-free survival was 82 months (50% CI 28-275). In univariate analysis, age younger than 36 years was an unfavorable prognostic factor (p = 0.003). Patients with metastases at diagnosis had poorer survival (p = 0.048). Late side effects related to radiotherapy were dementia, leukoencephalopathy or memory loss in seven cases, occipital ischemia in one, and grade 3 seizures in two cases. Side effects related to chemotherapy were grade 3-4 leucopenia in five cases, grade 4 thrombocytopenia in three cases, grade 2 anemia in two cases, grade 4 pancytopenia in one case, grade 4 vomiting in one case and renal failure in one case. CONCLUSIONS: Age and dissemination at diagnosis influenced survival in our series. The prevalence of chronic toxicity suggests that new adjuvant strategies are advisable.
Abstract:
As part of a project to use the long-lived (T1/2 = 1200 a) 166mHo as a reference source in its reference ionisation chamber, IRA standardised a commercially acquired solution of this nuclide using the 4πβ-γ coincidence and 4πγ (NaI) methods. The 166mHo solution supplied by Isotope Product Laboratories was found to contain about 5% europium impurities (3% 154Eu, 0.94% 152Eu and 0.9% 155Eu). Holmium therefore had to be separated from europium, and this was carried out by means of ion-exchange chromatography. The holmium fractions were collected without europium contamination: 162 h long HPGe gamma measurements indicated no europium impurity (detection limits of 0.01% for 152Eu and 154Eu, and 0.03% for 155Eu). The primary measurement of the purified 166mHo solution with the 4π (PC) β-γ coincidence technique was carried out at three gamma energy settings: a window around the 184.4 keV peak and gamma thresholds at 121.8 and 637.3 keV. The results show very good self-consistency, and the activity concentration of the solution was evaluated to be 45.640 ± 0.098 kBq/g (0.21% with k = 1). The activity concentration of this solution was also measured by integral counting with a well-type 5"x5" NaI(Tl) detector and efficiencies computed by Monte Carlo simulations using the GEANT code. These measurements were mutually consistent, and the resulting weighted average of the 4π NaI(Tl) method was found to agree within 0.15% with the result of the 4πβ-γ coincidence technique. An ampoule of this solution and the measured value of the concentration were submitted to the BIPM as a contribution to the Système International de Référence.
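The combination step mentioned at the end can be illustrated by a generic inverse-variance weighted mean. Since the individual NaI(Tl) results and their uncertainties are not given in the abstract, the numbers in the following Python sketch are hypothetical placeholders; only the quoted 4πβ-γ value of 45.640 kBq/g is taken from the text.

```python
# Generic combination arithmetic only: inverse-variance weighted mean of repeated
# activity-concentration measurements and the relative deviation from a reference
# value. All NaI(Tl) values and uncertainties below are hypothetical.
import math

def weighted_mean(values, sigmas):
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))        # standard uncertainty (k = 1)

if __name__ == "__main__":
    nai_values = [45.70, 45.66, 45.72]           # kBq/g, hypothetical
    nai_sigmas = [0.12, 0.10, 0.13]              # kBq/g, hypothetical
    mean, u = weighted_mean(nai_values, nai_sigmas)
    reference = 45.640                           # quoted 4πβ-γ result (kBq/g)
    print(f"weighted mean = {mean:.3f} ± {u:.3f} kBq/g")
    print(f"relative deviation = {100 * (mean - reference) / reference:.2f} %")
```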
Abstract:
Water transport in wood is vital for the survival of trees. With synchrotron radiation X-ray tomographic microscopy (SRXTM), it has become possible to characterize and quantify the three-dimensional (3D) network formed by the vessels that are responsible for longitudinal transport. In the present study, the spatial size dependence of vessels and their organization inside single growth rings, in terms of vessel-induced porosity, were studied by SRXTM. Network characteristics, such as connectivity, were deduced by digital image analysis from the processed tomographic data and related to known complex network topologies.
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies, minimizing respectively the l2 and the l1TV norm using prior knowledge of the edges of the object, one over-determined multiple-orientation method (COSMOS) and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in-vivo high-resolution (0.65 mm isotropic) brain data acquired at 7T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corr_MCF = 0.95, r²_MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of an extremely fast computation time. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
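For orientation, the simplest closed-form QSM baseline, a Tikhonov (l2-regularized) dipole inversion in k-space, can be written in a few lines of Python. This is not the MCF method proposed above; the kernel convention, regularization weight and toy phantom are assumptions made purely for illustration.

```python
# Minimal Tikhonov (l2) closed-form dipole inversion sketch, not the MCF method.
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Unit dipole response in k-space, D(k) = 1/3 - kz^2 / |k|^2 (B0 along z)."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0
    return D

def qsm_l2_closed_form(local_field, lam=1e-2, voxel_size=(1.0, 1.0, 1.0)):
    """chi = F^-1[ D * F(field) / (D^2 + lam) ]  (field and chi in ppm)."""
    D = dipole_kernel(local_field.shape, voxel_size)
    F = np.fft.fftn(local_field)
    chi_k = D * F / (D**2 + lam)
    return np.real(np.fft.ifftn(chi_k))

if __name__ == "__main__":
    # Toy check: forward-simulate the field of a small susceptibility blob,
    # then invert it and compare peak values.
    shape = (64, 64, 64)
    chi_true = np.zeros(shape)
    chi_true[28:36, 28:36, 28:36] = 0.1            # ppm
    D = dipole_kernel(shape)
    field = np.real(np.fft.ifftn(D * np.fft.fftn(chi_true)))
    chi_rec = qsm_l2_closed_form(field, lam=1e-2)
    print("peak true / recovered:", chi_true.max(), round(chi_rec.max(), 3))
```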
Abstract:
The first scientific meeting of the newly established European SYSGENET network took place at the Helmholtz Centre for Infection Research (HZI) in Braunschweig, April 7-9, 2010. About 50 researchers working in the field of systems genetics using mouse genetic reference populations (GRP) participated in the meeting and exchanged their results, phenotyping approaches, and data analysis tools for studying systems genetics. In addition, the future of GRP resources and phenotyping in Europe was discussed.