446 results for Network nodes
Abstract:
The brain's functional network exhibits many features facilitating functional specialization, integration, and robustness to attack. Using graph theory to characterize brain networks, studies demonstrate their small-world, modular, and "rich-club" properties, with deviations reported in many common neuropathological conditions. Here we estimate the heritability of five widely used graph-theoretical metrics (mean clustering coefficient (γ), modularity (Q), rich-club coefficient (ϕnorm), global efficiency (λ), small-worldness (σ)) over a range of connection densities (k = 5-25%) in a large cohort of twins (N = 592; 84 MZ and 89 DZ twin pairs; 246 single twins; age 23 ± 2.5). We also considered the effects of global signal regression (GSR). We found that the graph metrics were moderately influenced by genetic factors (h²: γ = 47-59%, Q = 38-59%, ϕnorm = 0-29%, λ = 52-64%, σ = 51-59%) at lower connection densities (≤15%), and that when GSR was implemented, heritability estimates decreased substantially (h²: γ = 0-26%, Q = 0-28%, ϕnorm = 0%, λ = 23-30%, σ = 0-27%). Distinct network features were phenotypically correlated (|r| = 0.15-0.81), and γ, Q, and λ were found to be influenced by overlapping genetic factors. Our findings suggest that these metrics may be potential endophenotypes for psychiatric disease and suitable for genetic association studies, but that genetic effects must be interpreted with respect to methodological choices.
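For readers who want to experiment with these metrics, a minimal sketch using NetworkX and NumPy follows. The density-thresholding step and the raw, unnormalized metric calls are illustrative assumptions, not the authors' pipeline (which normalizes several metrics against random networks):

```python
# Sketch: the five graph metrics on a density-thresholded connectivity
# matrix. Illustrative only -- raw NetworkX values, toy data.
import numpy as np
import networkx as nx
from networkx.algorithms import community

def threshold_to_density(conn, density):
    """Binarize by keeping the strongest edges up to the target density."""
    n = conn.shape[0]
    weights = conn[np.triu_indices(n, k=1)]
    cutoff = np.sort(weights)[-int(density * len(weights))]
    adj = (conn >= cutoff).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

conn = np.abs(np.corrcoef(np.random.rand(30, 200)))  # toy connectivity matrix
G = threshold_to_density(conn, density=0.15)         # k = 15%
if not nx.is_connected(G):                           # sigma needs a connected graph
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

parts = community.greedy_modularity_communities(G)
print("clustering   :", nx.average_clustering(G))
print("modularity Q :", community.modularity(G, parts))
print("global eff.  :", nx.global_efficiency(G))
print("rich club phi:", max(nx.rich_club_coefficient(G, normalized=False).values()))
print("sigma        :", nx.sigma(G, niter=2, nrand=2, seed=1))  # slow for large graphs
```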
Abstract:
Anatomical brain networks change throughout life and with disease. Genetic analysis of these networks may help identify processes giving rise to heritable brain disorders, but we do not yet know which network measures are promising for genetic analyses. Many factors affect the downstream results, such as the tractography algorithm used to define structural connectivity. We tested nine different tractography algorithms and four normalization methods to compute brain networks for 853 young healthy adults (twins and their siblings). We fitted genetic structural equation models to the network measures derived from all nine tractography algorithms, after a normalization step to increase network consistency across algorithms. Probabilistic tractography algorithms with global optimization (such as Probtrackx and Hough) yielded higher heritability statistics than 'greedy' algorithms (such as FACT), which process small neighborhoods at each step. Some global network measures (Probtrackx-derived GLOB and ST) showed significant genetic effects, making them attractive targets for genome-wide association studies.
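As a rough illustration of how twin data yields a heritability estimate (the study itself fits genetic structural equation models, not this shortcut), Falconer's formula h² ≈ 2(r_MZ − r_DZ) can be sketched as follows, with all data and names invented:

```python
# Sketch: Falconer's estimate h^2 = 2 * (r_MZ - r_DZ), a simple stand-in
# for the ACE structural equation models actually fitted in the study.
import numpy as np

def falconer_h2(mz_pairs, dz_pairs):
    """mz_pairs, dz_pairs: arrays of shape (n_pairs, 2) holding one
    network measure (e.g., global efficiency) per twin."""
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    return max(0.0, min(1.0, 2 * (r_mz - r_dz)))

rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 1))           # toy shared (genetic) component
mz = shared + 0.5 * rng.normal(size=(100, 2))  # MZ twins correlate strongly
dz = shared + 1.0 * rng.normal(size=(100, 2))  # DZ twins correlate less
print(f"h2 ~ {falconer_h2(mz, dz):.2f}")
```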
Abstract:
The JoMeC Network project had three key objectives. These were to: 1. Benchmark the pedagogical elements of journalism, media and communication (JoMeC) programs at Australian universities in order to develop a set of minimum academic standards, to be known as Threshold Learning Outcomes (TLOs), which would be applicable to the disciplines of Journalism, Communication and/or Media Studies, and Public Relations; 2. Build a learning and teaching network of scholars across the JoMeC disciplines to support collaboration, develop leadership potential among educators, and progress shared priorities; 3. Create an online resources hub to support learning and teaching excellence and foster leadership in learning and teaching in the JoMeC disciplines. In order to benchmark the pedagogical elements of the JoMeC disciplines, the project started with a comprehensive review of the disciplinary settings of journalism, media and communication-related programs within Higher Education in Australia, together with an analysis of capstone units (or subjects) offered in JoMeC-related degrees. This audit revealed a diversity of degree titles, disciplinary foci, projected career outcomes and pedagogical styles in the 36 universities that offered JoMeC-related degrees in 2012, highlighting the difficulties of classifying the JoMeC disciplines collectively or singularly. Instead of attempting to map all disciplines related to journalism, media and communication, the project team opted to create generalised TLOs for these fields, coupled with detailed TLOs for bachelor-level qualifications in three selected JoMeC disciplines: Journalism, Communication and/or Media Studies, and Public Relations. The initial review’s outcomes shaped the methodology that was used to develop the TLOs. Given the complexity of the JoMeC disciplines and the diversity of degrees across the network, the project team deployed an issue-framing process to create TLO statements. This involved several phases, including discussions with an issue-framing team (an advisory group of representatives from different disciplinary areas); research into accreditation requirements and industry-produced materials about employment expectations; evaluation of learning outcomes from universities across Australia; reviews of scholarly literature; as well as input from disciplinary leaders in a variety of forms. Draft TLOs were refined after further consultation with industry stakeholders and the academic community via email, telephone interviews, and meetings and public forums at conferences. This process was used to create a set of common TLOs for JoMeC disciplines in general and extended TLO statements for the specific disciplines of Journalism and Public Relations. A TLO statement for Communication and/or Media Studies remains in draft form. The Australian and New Zealand Communication Association (ANZCA) and the Journalism Education and Research Association of Australia (JERAA) have agreed to host meetings to review, revise and further develop the TLOs. The aim is to support the JoMeC Network’s sustainability and the TLOs’ future development and use.
Abstract:
A Delay Tolerant Network (DTN) is a dynamic, fragmented, and ephemeral network formed by a large number of highly mobile, autonomous nodes, which calls for distributed and self-organised approaches to trust management. Revoking and replacing security credentials under adversarial influence while preserving trust in the entity is still an open problem: existing methods are mostly limited to the detection and removal of malicious nodes. This paper exploits node mobility to provide a distributed, self-organising, and scalable revocation and replacement scheme. The proposed scheme uses the Leverage of Common Friends (LCF) trust system concepts to revoke compromised security credentials and replace them with new ones, while preserving the trust placed in them. Security and performance of the proposed scheme are evaluated using an experimental data set, in comparison with other schemes based around the LCF concept. Our extensive experimental results show that the proposed scheme distributes replacement credentials up to 35% faster and spreads spoofed credentials of strongly collaborating adversaries up to 50% slower, without any significant increase in communication and storage overheads, compared to other LCF-based schemes.
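The abstract does not specify the protocol details, but purely as an illustration of the common-friends idea, a verifier might accept a replacement credential only when enough mutual contacts vouch for it. All names, structures, and thresholds below are hypothetical:

```python
# Hypothetical sketch of an LCF-style acceptance check: a replacement
# credential is accepted only if enough friends common to the subject
# and the verifier endorse it. Not the paper's actual protocol.
def accept_replacement(subject, verifier, endorsers, friends, threshold=3):
    """friends: dict mapping node id -> set of trusted contacts.
    endorsers: nodes that signed the replacement credential."""
    common = friends[subject] & friends[verifier]   # mutual contacts
    vouching = common & set(endorsers)              # mutual contacts that signed
    return len(vouching) >= threshold

friends = {
    "A": {"C", "D", "E", "F"},
    "B": {"C", "D", "E", "G"},
}
# B decides whether to accept A's new credential, endorsed by C, D, E.
print(accept_replacement("A", "B", ["C", "D", "E"], friends))  # True
```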
Abstract:
Solving large-scale all-to-all comparison problems using distributed computing is increasingly significant for various applications. Previous efforts to implement distributed all-to-all comparison frameworks have treated the two phases of data distribution and comparison task scheduling separately. This leads to high storage demands as well as poor data locality for the comparison tasks, thus creating a need to redistribute the data at runtime. Furthermore, most previous methods have been developed for homogeneous computing environments, so their overall performance is degraded even further when they are used in heterogeneous distributed systems. To tackle these challenges, this paper presents a data-aware task scheduling approach for solving all-to-all comparison problems in heterogeneous distributed systems. The approach formulates the requirements for data distribution and comparison task scheduling simultaneously as a constrained optimization problem. Then, metaheuristic data pre-scheduling and dynamic task scheduling strategies are developed along with an algorithmic implementation to solve the problem. The approach provides perfect data locality for all comparison tasks, avoiding rearrangement of data at runtime. It achieves load balancing among heterogeneous computing nodes, thus reducing the overall computation time. It also reduces data storage requirements across the network. The effectiveness of the approach is demonstrated through experimental studies.
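To make the core idea concrete, here is a toy greedy version: every comparison pair is assigned to a node that stores both operands (replicating data at planning time), with assignments weighted by node speed. The greedy heuristic and all names are assumptions for illustration, not the paper's constrained-optimization formulation:

```python
# Toy sketch of data-aware scheduling for all-to-all comparison: each
# pair runs on a node holding both items, so no data moves at runtime.
from itertools import combinations

def schedule(items, speeds):
    stores = {n: set() for n in speeds}   # node -> items held locally
    load = {n: 0.0 for n in speeds}       # node -> assigned work
    plan = {}
    for i, j in combinations(items, 2):
        # Prefer nodes already holding both items; break ties by
        # projected finish time load/speed on the heterogeneous cluster.
        holders = [n for n in speeds if {i, j} <= stores[n]] or list(speeds)
        best = min(holders, key=lambda n: (load[n] + 1.0) / speeds[n])
        stores[best] |= {i, j}            # replicate data at planning time
        load[best] += 1.0
        plan[(i, j)] = best
    return plan, stores, load

plan, stores, load = schedule(range(6), {"fast": 2.0, "slow": 1.0})
print(load)                                    # roughly 2:1 work split
print({n: len(s) for n, s in stores.items()})  # storage per node
```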
Abstract:
Network Interfaces (NIs) are used in Multiprocessor System-on-Chips (MPSoCs) to connect CPUs to a packet-switched Network-on-Chip (NoC). In this work we introduce a new NI architecture for our hierarchical CoreVA-MPSoC. The CoreVA-MPSoC targets streaming applications in embedded systems. The main contribution of this paper is a system-level analysis of different NI configurations, considering both software and hardware costs for NoC communication. Different configurations of the NI are compared using a benchmark suite of 10 streaming applications. The best-performing NI configuration shows an average speedup of 20 for a CoreVA-MPSoC with 32 CPUs compared to a single CPU. Furthermore, we present physical implementation results using a 28 nm FD-SOI standard-cell technology. A hierarchical MPSoC with 8 CPU clusters and 4 CPUs per cluster running at 800 MHz requires an area of 4.56 mm².
Abstract:
Deep packet inspection is a technology which enables the examination of the content of information packets being sent over the Internet. The Internet was originally set up using "end-to-end connectivity" as part of its design, allowing nodes of the network to send packets to all other nodes of the network, without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a "dumb" network, with "intelligent" devices (such as personal computers) at the end or "last mile" of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Yet, deep packet inspection allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will first elaborate on what deep packet inspection is and how it works from a technological perspective, before going on to examine how it is being used in practice by governments and corporations. Legal problems have already been created by the use of deep packet inspection, involving fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment will be made of the conformity of the use of deep packet inspection with the law. There will be a concentration on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. This paper will also incorporate a more fundamental assessment of the values that are desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
Abstract:
Extrapulmonary manifestations constitute 15 to 20% of tuberculosis cases, with lymph node tuberculosis (LNTB) the most common form of infection. However, advances in diagnosis and treatment are hindered by a lack of understanding of LNTB biology. To identify the host response, Mycobacterium tuberculosis-infected lymph nodes from LNTB patients were studied by means of transcriptomic and quantitative proteomic analyses. The selected targets obtained by comparative analyses were validated by quantitative PCR and immunohistochemistry. This approach provided expression data for 8,728 transcripts and 102 proteins differentially regulated in the infected human lymph node. Enhanced inflammation, with upregulation of T-helper-1-related genes combined with marked dysregulation of matrix metalloproteinases, indicates tissue damage due to high immunoactivity at the infected niche. This expression signature was accompanied by significant upregulation of an immunoregulatory gene, leukotriene A4 hydrolase, at both the transcript and protein levels. Comparative transcriptional analyses revealed LNTB-specific perturbations. In contrast to the increased lipid metabolism associated with pulmonary TB, genes involved in fatty-acid metabolism were found to be downregulated in LNTB, suggesting a differential lipid metabolic signature. This study investigates the tissue molecular signature of LNTB patients for the first time, and its findings point to a possible mechanism of disease pathology through dysregulation of inflammatory and tissue-repair processes.
Abstract:
The Body Area Network (BAN) is an emerging technology that focuses on monitoring physiological data in, on and around the human body. BAN technology permits wearable and implanted sensors to collect vital data about the human body and transmit it to other nodes via low-energy communication. In this paper, we investigate interactions in terms of data flows between parties involved in BANs under four different scenarios targeting outdoor and indoor medical environments: hospital, home, emergency and open areas. Based on these scenarios, we identify data flow requirements between BAN elements such as sensors and control units (CUs) and parties involved in BANs such as the patient, doctors, nurses and relatives. Identified requirements are used to generate BAN data flow models. Petri Nets (PNs) are used as the formal modelling language. We check the validity of the models and compare them with the existing related work. Finally, using the models, we identify communication and security requirements based on the most common active and passive attack scenarios.
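For intuition about the Petri-net formalism used here, the following is a minimal, self-contained firing rule with a toy BAN flow (sensor to control unit to doctor). The places, transitions, and marking are invented for illustration; the paper's models are far more detailed:

```python
# Minimal Petri net sketch of a BAN data flow. Places hold tokens;
# a transition fires when every input place holds at least one token.
class PetriNet:
    def __init__(self, marking, transitions):
        self.m = dict(marking)      # place -> token count
        self.t = transitions        # name -> (input places, output places)

    def enabled(self, name):
        ins, _ = self.t[name]
        return all(self.m.get(p, 0) >= 1 for p in ins)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} not enabled")
        ins, outs = self.t[name]
        for p in ins:
            self.m[p] -= 1
        for p in outs:
            self.m[p] = self.m.get(p, 0) + 1

net = PetriNet(
    marking={"sensor_reading": 1},
    transitions={
        "send_to_cu":    (["sensor_reading"], ["cu_buffer"]),
        "forward_to_dr": (["cu_buffer"],      ["doctor_inbox"]),
    },
)
net.fire("send_to_cu")
net.fire("forward_to_dr")
print(net.m)  # {'sensor_reading': 0, 'cu_buffer': 0, 'doctor_inbox': 1}
```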
Abstract:
Deep convolutional neural networks (DCNNs) have been employed in many computer vision tasks with great success due to their robustness in feature learning. One advantage of DCNNs is that their representations are robust to object location, which is useful for object recognition. However, this invariance also discards spatial information, which matters for tasks that depend on the topology of the image (e.g. scene labeling, face recognition). In this paper, we propose a deeper and wider network architecture to tackle the scene labeling task. The depth is achieved by incorporating predictions from multiple early layers of the DCNN. The width is achieved by combining multiple outputs of the network. We then further refine the parsing task by adopting graphical models (GMs) as a post-processing step to incorporate spatial and contextual information into the network. The new strategy of a deeper, wider convolutional network coupled with graphical models has shown promising results on the PASCAL-Context dataset.
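The deeper-and-wider idea of fusing predictions from several stages can be sketched as a generic skip-fusion module in PyTorch. The backbone, chosen stages, class count, and fusion by summation are illustrative assumptions, not the paper's architecture:

```python
# Sketch: fuse per-stage predictions for dense labeling. Early stages
# keep spatial detail; later stages carry more semantics.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageFusion(nn.Module):
    def __init__(self, n_classes=59):  # PASCAL-Context is often run with 59 classes
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.head1 = nn.Conv2d(32, n_classes, 1)  # one classifier per stage
        self.head2 = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        # Upsample each stage's prediction to input size, then sum.
        p1 = F.interpolate(self.head1(f1), size=(h, w), mode="bilinear",
                           align_corners=False)
        p2 = F.interpolate(self.head2(f2), size=(h, w), mode="bilinear",
                           align_corners=False)
        return p1 + p2  # a graphical model (e.g. a CRF) could refine this map

logits = MultiStageFusion()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 59, 64, 64])
```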
Abstract:
This paper describes the types of support that teachers are accessing through the Social Network Site (SNS) 'Facebook'. It describes six ways in which teachers support one another within online groups. It presents evidence from a study of a large, open group of teachers online over a twelve-week period, repeated with multiple groups a year later over a one-week period. The findings suggest that large open groups in SNSs can be a useful source of pragmatic advice for teachers, but that these groups are rarely a place for reflection on, or feedback about, teaching practice.