861 results for Multi Domain Information Model
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust
using a logic-based declarative language. Logical trust systems authorize each
request by constructing a proof from a context---a set of authenticated logic
statements representing credentials and policies issued by various principals
in a networked system. A key barrier to practical use of logical trust systems
is the problem of managing proof contexts: identifying, validating, and
assembling the credentials and policies that are relevant to each trust
decision.
SAFE addresses this challenge by (i) proposing a distributed authenticated data
repository for storing the credentials and policies; (ii) introducing a
programmable credential discovery and assembly layer that generates the
appropriate tailored context for a given request. The authenticated data
repository is built upon a scalable key-value store with its contents named by
secure identifiers and certified by the issuing principal. The SAFE language
provides scripting primitives to generate and organize logic sets representing
credentials and policies, materialize the logic sets as certificates, and link
them to reflect delegation patterns in the application. The authorizer fetches
the logic sets on demand, then validates and caches them locally for further
use. Upon each request, the authorizer constructs the tailored proof context
and provides it to the SAFE inference engine for certified validation.
Delegation-driven credential linking with certified data distribution provides
flexible and dynamic policy control enabling security and trust infrastructure
to be agile, while addressing the perennial problems related to today's
certificate infrastructure: automated credential discovery, scalable
revocation, and issuing credentials without relying on centralized authority.
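The delegation-driven credential linking described above can be sketched as a toy context assembler; the repository contents, set names, and trust rule below are invented for illustration and are not the SAFE language or its API:

```python
# Toy sketch of SAFE-style proof-context assembly (hypothetical names).
# Each "logic set" is certified by its issuer and may link to other sets,
# mirroring delegation; the authorizer fetches sets transitively on demand.

REPO = {  # stands in for the shared authenticated key-value store
    "alice:policy": {"facts": {("alice", "trusts", "bob")},
                     "links": ["bob:creds"]},
    "bob:creds":    {"facts": {("bob", "mayRead", "obj1")},
                     "links": []},
}

def assemble_context(root_id):
    """Follow delegation links from a root set, collecting all facts."""
    seen, facts = set(), set()
    stack = [root_id]
    while stack:
        sid = stack.pop()
        if sid in seen:
            continue
        seen.add(sid)
        logic_set = REPO[sid]          # fetch; certificate validation elided
        facts |= logic_set["facts"]
        stack.extend(logic_set["links"])
    return facts

def authorize(root_id, subject, action, obj):
    """Approve only if the tailored context proves delegation + permission."""
    ctx = assemble_context(root_id)
    delegated = ("alice", "trusts", subject) in ctx
    permitted = (subject, action, obj) in ctx
    return delegated and permitted

print(authorize("alice:policy", "bob", "mayRead", "obj1"))  # True
```

The point of the sketch is only the shape of the mechanism: credentials live in a shared store, links encode delegation, and the context for a decision is assembled on demand rather than shipped with the request.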
We envision SAFE as a new foundation for building secure network systems. We
used SAFE to build secure services based on case studies drawn from practice:
(i) a secure name service resolver similar to DNS that resolves a name across
multi-domain federated systems; (ii) a secure proxy shim to delegate access
control decisions in a key-value store; (iii) an authorization module for a
networked infrastructure-as-a-service system with a federated trust structure
(NSF GENI initiative); and (iv) a secure cooperative data analytics service
that adheres to individual secrecy constraints while disclosing the data. We
present empirical evaluation based on these case studies and demonstrate that
SAFE supports a wide range of applications with low overhead.
Abstract:
Fibronectin (FN) is a large extracellular matrix (ECM) protein made up of
type I (FNI), type II (FNII), and type III (FNIII) domains. It assembles into an insoluble
supramolecular structure: the fibrillar FN matrix. FN fibrillogenesis is a cell-mediated process, initiated when FN binds to integrins on the cell surface. The FN matrix plays an important role in cell migration, proliferation, signaling and adhesion. Despite decades of research, the FN matrix remains one of the least understood supramolecular protein assemblies. Several attempts to elucidate the exact mechanism of matrix assembly have produced significant progress in the field, but it is still unclear which FN-FN interactions occur, what the nature of these interactions is, and which domains of FN
are in contact with each other. FN matrix fibrils are elastic in nature, and two models have been proposed to explain this elasticity. The first, the 'domain unfolding' model, postulates that unraveling of FNIII domains under tension explains fibril elasticity.
The second relies on the conformational change of FN from compact to extended. FN contains 15 FNIII domains, each a 7-strand beta sandwich. Earlier work from our lab used labeling of a buried Cys to study the 'domain unfolding' model: using mutant FNs containing a buried Cys in a single FNIII domain, it found that 6 of the 15 FNIII domains label in matrix fibrils. Tension-induced domain unfolding, matrix-associated conformational changes, and spontaneous folding and unfolding are all possible explanations for labeling of the buried Cys. The present study also uses labeling of a buried Cys, to address whether spontaneous folding and unfolding is what labels FNIII domains in cell culture. We used the thiol-reactive reagent DTNB to measure the kinetics of labeling of the buried Cys in eleven FNIII domains over a wide range of urea concentrations (0-9 M). The kinetics data were globally fit using Mathematica. The results are equivalent to those of H-D exchange, and
provide a comprehensive analysis of the stability and unfolding/folding kinetics of each
domain. For two of the six domains, spontaneous folding and unfolding is possibly the reason for labeling in cell culture; for the other four, the labeling is probably due to matrix-associated conformational changes or tension-induced unfolding.
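The kinetics analysis can be illustrated with a minimal two-state chevron fit; the rate constants, m-values, and noise below are synthetic, and the study's actual global fit (performed in Mathematica) is more elaborate:

```python
# Sketch of a two-state chevron fit (hypothetical rates, synthetic data).
# The observed rate is modeled as k_obs = k_f + k_u, with each rate
# depending exponentially on denaturant concentration.
import numpy as np
from scipy.optimize import curve_fit

def chevron(urea, ln_kf0, mf, ln_ku0, mu):
    kf = np.exp(ln_kf0 - mf * urea)   # folding slows with denaturant
    ku = np.exp(ln_ku0 + mu * urea)   # unfolding speeds up
    return np.log(kf + ku)

urea = np.linspace(0, 9, 19)
true_params = (4.0, 1.2, -6.0, 0.8)   # assumed, not from the study
rng = np.random.default_rng(0)
ln_kobs = chevron(urea, *true_params) + rng.normal(0, 0.05, urea.size)

popt, _ = curve_fit(chevron, urea, ln_kobs, p0=(3, 1, -5, 1))

# Stability in water follows from the rates: dG = RT * ln(kf0/ku0)
RT = 0.593  # kcal/mol at 25 C
dG = RT * (popt[0] - popt[2])
print(f"fitted dG of unfolding = {dG:.1f} kcal/mol")
```

This is the sense in which unfolding rates "correlate to stability": with folding rates roughly constant across domains, dG tracks the unfolding rate constant directly.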
A long-standing debate in the protein-folding field is whether unfolding rate
constants or folding rate constants correlate with the stability of a protein. FNIII domains all share the same β-sandwich structure but have very different stabilities and amino acid sequences. Our study analyzed the unfolding and folding kinetics and the stabilities of eleven FNIII domains; the results show that folding rate constants for FNIII domains are relatively similar, while unfolding rate constants vary widely and correlate with stability. FN forms a fibrillar matrix, and the FN-FN interactions during matrix fibril formation are not known. FNI 1-9, the N-terminal region, is indispensable for matrix formation, and its major binding partner has been shown to be FNIII 2. Earlier work from our lab using FRET analysis showed that the interaction of FNI 1-9 with a destabilized FNIII 2 (missing the G strand, FNIII 2ΔG) reduces the FRET efficiency. This efficiency is restored in the presence of FUD (a bacterial adhesin from S. pyogenes) known to interact with FNI 1-9 via a tandem β zipper. In the present study we
use FRET analysis and a series of deletion mutants of FNIII 2ΔG to identify the shortest fragment of FNIII 2ΔG that is required to bind FNI 1-9. Our results, though qualitative, show that FNIII 2ΔC’EFG is the shortest fragment required to bind FNI 1-9; deletion of one more strand abolishes the interaction with FNI 1-9.
Abstract:
As the Web evolves at an unexpectedly fast pace, information grows explosively, and useful resources become increasingly difficult to find because of their dynamic and unstructured nature. A vertical search engine is designed and implemented for a specific domain. Instead of processing the giant volume of miscellaneous information distributed across the Web, a vertical search engine targets relevant information in specific domains or topics and ultimately provides users with up-to-date information, highly focused insights, and actionable knowledge representations. As mobile devices become more popular, the nature of search is changing: acquiring information on a mobile device poses unique requirements on traditional search engines, potentially changing every feature they used to have. In short, users expect search engines that can satisfy their individual information needs, adapt to their current situation, and present highly personalized search results. In my research, the next-generation vertical search engine utilizes and enriches existing domain information to close the loop of the vertical search engine's system, mutually facilitating knowledge discovery, actionable information extraction, and user-interest modeling and recommendation. I investigate three problems in which domain taxonomy plays an important role: taxonomy generation using a vertical search engine, actionable information extraction based on domain taxonomy, and the use of ensemble taxonomy to capture users' interests. As the underlying theory, ultrametrics, dendrograms, and hierarchical clustering are discussed in depth. Methods for taxonomy generation building on my research on hierarchical clustering are developed. The related vertical search engine techniques are applied in practice to the disaster management domain; in particular, three disaster information management systems were developed and are presented as real use cases of my research work.
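The link between hierarchical clustering, dendrograms, and taxonomy generation can be sketched with toy data; the document vectors below are invented, and the dissertation's ultrametric machinery is considerably richer:

```python
# Minimal sketch of taxonomy generation via hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy term-frequency vectors for six domain documents (three topics)
docs = np.array([
    [5, 0, 0], [4, 1, 0],      # topic A
    [0, 5, 1], [0, 4, 0],      # topic B
    [0, 0, 5], [1, 0, 4],      # topic C
], dtype=float)

# Average-linkage clustering induces an ultrametric on the documents:
# the dendrogram height at which two docs merge is their ultrametric distance.
Z = linkage(docs, method="average", metric="euclidean")

# Cutting the dendrogram at a chosen level yields one layer of the taxonomy
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)  # three sibling categories under the root
```

Cutting the same dendrogram at several heights produces nested partitions, which is exactly the parent-child structure a taxonomy needs.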
Abstract:
Hysteresis measurements have been carried out on a suite of ocean-floor basalts with ages ranging from Quaternary to Cretaceous. Approximately linear, yet separate, relationships between coercivity (Bc) and the ratio of saturation remanence to saturation magnetization (Mrs/Ms) are observed for massive doleritic basalts with low-Ti magnetite and for pillow basalts with multi-domain titanomagnetites (x = 0.6). Even when the MORB has undergone low-temperature oxidation resulting in titanomaghemite, the parameters are still distinguishable, although offset from the trend for unoxidized multi-domain titanomagnetite. The parameters for these iron oxides with different titanium contents reveal contrasting trends that can be explained by the different saturation magnetizations of the mineral types. This plot provides a previously underutilized and non-destructive method to detect the presence of low-titanium magnetite in igneous rocks, notably MORB.
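The discrimination idea can be caricatured in a few lines: given a sample's (Mrs/Ms, Bc) pair, ask which linear trend it lies closer to. The slopes and intercepts below are placeholders, not the fitted relationships from this study:

```python
# Hypothetical sketch of trend-based mineral discrimination.
# The two assumed trends Bc = a + b*(Mrs/Ms) use placeholder coefficients.
def classify(mrs_ms, bc_mT,
             trend_magnetite=(0.0, 400.0),   # assumed low-Ti magnetite trend
             trend_tm60=(0.0, 150.0)):       # assumed TM60 trend
    """Return the assumed trend line the sample lies closer to."""
    pred_mag = trend_magnetite[0] + trend_magnetite[1] * mrs_ms
    pred_tm = trend_tm60[0] + trend_tm60[1] * mrs_ms
    return ("low-Ti magnetite" if abs(bc_mT - pred_mag) < abs(bc_mT - pred_tm)
            else "titanomagnetite (TM60)")

print(classify(0.05, 18.0))
```

The separation works because, for a given domain state, the lower saturation magnetization of Ti-rich titanomagnetite shifts its Bc versus Mrs/Ms trend relative to that of low-Ti magnetite.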
Abstract:
This paper considers a wirelessly powered wiretap channel, where an energy-constrained multi-antenna information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. Based on a simple time-switching protocol where power transfer and information transmission are separated in time, we investigate two popular multi-antenna transmission schemes at the information source, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability and average secrecy rate of both schemes. In addition, simple approximations are obtained in the high signal-to-noise ratio (SNR) regime. Our results demonstrate that exploiting full knowledge of the channel state information (CSI) achieves better secrecy performance; e.g., with full CSI of the main channel, the system can achieve substantial secrecy diversity gain, whereas without the CSI of the main channel, no diversity gain can be attained. Moreover, we show that the additional level of randomness induced by wireless power transfer does not affect the secrecy performance in the high-SNR regime. Finally, our theoretical claims are validated by numerical results.
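The ordering between the two schemes can be checked with a quick Monte Carlo sketch; the antenna count, average SNRs, and target secrecy rate are assumed values (the paper itself derives closed-form expressions rather than simulating):

```python
# Monte Carlo sketch of secrecy outage for MRT vs TAS over Rayleigh fading.
import numpy as np

rng = np.random.default_rng(1)
N, Nt = 200_000, 4          # trials, transmit antennas (assumed)
snr_m, snr_e = 100.0, 10.0  # average SNRs, main / eavesdropper (assumed)
Rs = 1.0                    # target secrecy rate, bits/s/Hz (assumed)

h = rng.exponential(1.0, (N, Nt))   # |h_i|^2 main-channel gains
g = rng.exponential(1.0, N)         # eavesdropper channel gain

# MRT coherently combines all antennas; TAS transmits on the best one
gain_mrt = h.sum(axis=1)
gain_tas = h.max(axis=1)

def outage(gain):
    """Fraction of trials whose secrecy capacity falls below Rs."""
    cs = np.log2(1 + snr_m * gain) - np.log2(1 + snr_e * g)
    return float(np.mean(cs < Rs))

print(f"MRT outage = {outage(gain_mrt):.4f}, TAS outage = {outage(gain_tas):.4f}")
```

Since the combined MRT gain always dominates the best single antenna, MRT's secrecy outage can never exceed that of TAS in this setup, matching the intuition that more CSI exploitation improves secrecy.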
Abstract:
Resource management policies are frequently designed and planned to target the specific needs of particular sectors, without taking into account the interests of other sectors that share the same resources. In a climate of resource depletion, population growth, increasing energy demand and climate change awareness, it is of great importance to promote the assessment of intersectoral linkages and, by doing so, understand their effects and implications. This need is further heightened when the common use of resources is relevant not solely at the national level, but also when the distribution of resources spans different nations. This dissertation focuses on the study of the energy systems of five southeastern European countries that share the Sava River Basin (SRB), using a water-food(agriculture)-energy nexus approach. In the electricity generation sector, the use of water is essential for the integrity of the energy systems, as electricity production in the riparian countries relies on two major technologies dependent on water resources: hydro and thermal power plants. For example, in 2012, an average of 37% of the electricity production in the SRB countries was generated by hydropower and 61% by thermal power plants. Within the SRB itself, in terms of existing installed capacities, the basin accommodates close to a tenth of all hydropower capacity while providing cooling water to 42% of the net thermal power capacity currently in operation in the basin. This energy-oriented nexus study explores the dependency of the region's energy systems on the basin's water resources for the period between 2015 and 2030. To do so, a multi-country electricity model was developed to provide a quantitative basis for the analysis, using the open-source modelling tool OSeMOSYS.
Three main areas are subject to analysis: first, the impact of energy efficiency and renewable energy strategies on the electricity generation mix; second, the potential impacts of climate change under a moderate climate change projection scenario; and finally, building on the latter, the cumulative impact of an increase in water demand for irrigation in the agriculture sector. Additionally, electricity trade dynamics are compared across the different scenarios under scrutiny, in an effort to investigate the implications of the aforementioned factors for the electricity markets in the region.
Abstract:
Supply chains have become an important focus for competitive advantage. The performance of a company increasingly depends on its ability to maintain effective and efficient relationships with its suppliers and customers. The extended enterprise (i.e. one composed of several partners) needs to be dynamically formed in order to be agile and adaptable. According to the Digital Manufacturing paradigm, companies have to be able to quickly share and disseminate information regarding the planning, design and manufacturing of products. Additionally, they must be responsive to all technical and business determinants, as well as be assessed and certified for guaranteed performance. The current research presents a solution for the dynamic composition of the extended enterprise, formed to take advantage of market opportunities quickly and efficiently. A construction model was developed, consisting of an information model, a protocol model and a process model. The information model is defined based on the concepts of the Supply Chain Operations Reference model (SCOR®). It defines the information for negotiating the participation of candidate companies in the dynamic establishment of a network responding to a given demand for developing and manufacturing products, in seven steps: request for information; request for qualification; alignment of strategy; request for proposal; request for quotation; compatibility of process; and compatibility of system. The protocol model, inspired by the OSI reference model, provides a framework for linking customers and suppliers and indicates the sequence to be followed in order to select companies to become suppliers. The process model has been implemented through process modeling according to the BPMN standard and, in turn, realized as a web-based application that runs the process through its several steps, using forms to gather data.
An application example in the context of the oil and gas industry is used for demonstrating the solution concept.
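The seven-step negotiation sequence can be sketched as a simple sequential state machine; this is illustrative only, since the actual model is a BPMN process run by a web application:

```python
# Sketch of the seven-step supplier-selection sequence as a state machine.
STEPS = [
    "request for information",
    "request for qualification",
    "alignment of strategy",
    "request for proposal",
    "request for quotation",
    "compatibility of process",
    "compatibility of system",
]

def run_negotiation(candidate, passes):
    """Advance through the steps in order; drop out at the first failure.
    `passes` maps step name -> bool for this candidate."""
    for step in STEPS:
        if not passes.get(step, False):
            return (candidate, "rejected at: " + step)
    return (candidate, "selected as supplier")

result = run_negotiation("ACME", {s: True for s in STEPS})
print(result)  # ('ACME', 'selected as supplier')
```

The strict ordering mirrors the protocol model's OSI-like layering: a candidate that fails an early step (say, qualification) is never evaluated for later, more expensive steps such as process compatibility.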
Abstract:
Different types of base fluids, such as water, engine oil, kerosene, ethanol, methanol and ethylene glycol, are commonly used to increase the heat transfer performance in many engineering applications. These conventional heat transfer fluids, however, have several limitations. A major one is that the thermal conductivity of each of these base fluids is very low, which results in a lower heat transfer rate in thermal engineering systems. This limitation also affects the performance of equipment used in various heat transfer process industries. To overcome this important drawback, researchers have over the years considered a new generation of heat transfer fluid, known as nanofluid, with higher thermal conductivity: a mixture of nanometre-size particles and a base fluid. Different researchers suggest that adding spherical or cylindrical uniform/non-uniform nanoparticles to a base fluid can remarkably increase the thermal conductivity of the nanofluid. Such augmentation of thermal conductivity could play a more significant role in enhancing the heat transfer rate than the base fluid alone. Nanoparticle diameters used in nanofluids are usually 100 nm or less, and the nanoparticle concentration usually varies from 5% to 10%. Several researchers have reported that smaller nanoparticle concentrations with a particle diameter of 100 nm can enhance the heat transfer rate significantly compared to base fluids. But it is not obvious what effect nanofluids containing nanoparticles smaller than 100 nm at different concentrations have on heat transfer performance. Moreover, the effect of static versus moving nanoparticles on the heat transfer of a nanofluid is also unknown. The idea of moving nanoparticles brings in the effect of Brownian motion of nanoparticles on heat transfer.
The aim of this work is, therefore, to investigate the heat transfer performance of nanofluids using a combination of smaller nanoparticle sizes and different concentrations while accounting for the Brownian motion of the nanoparticles. A horizontal pipe is considered as the physical system within which the above-mentioned nanofluid performance is investigated under transition-to-turbulent flow conditions. Three different types of numerical models, the single-phase model, the Eulerian-Eulerian multi-phase mixture model and the Eulerian-Lagrangian discrete phase model, are used to investigate the performance of nanofluids. The most commonly used is the single-phase model, which assumes that nanofluids behave like a conventional fluid. The other two models are used when the interaction between solid and fluid particles is considered. In the Eulerian-Eulerian multi-phase mixture model, the fluid and solid phases together form a fluid-solid mixture, whereas in the Eulerian-Lagrangian discrete phase model the two phases, one solid and one fluid, are treated independently. In addition, RANS (Reynolds-Averaged Navier-Stokes) based Standard κ-ω and SST κ-ω transitional models are used for the simulation of transitional flow, while the RANS-based Standard κ-ϵ, Realizable κ-ϵ and RNG κ-ϵ turbulence models are used for the simulation of turbulent flow. The hydrodynamic and temperature behaviour of transition-to-turbulent flows of nanofluids through the horizontal pipe is studied under a uniform wall heat flux boundary condition, with temperature-dependent thermo-physical properties for both water and nanofluids.
Numerical results characterising the velocity and temperature fields are presented in terms of velocity and temperature contours, turbulent kinetic energy contours, surface temperature, local and average Nusselt numbers, Darcy friction factor, thermal performance factor and total entropy generation. New correlations are also proposed for the calculation of the average Nusselt number for both the single- and multi-phase models. The results reveal that the combination of small nanoparticle size and higher nanoparticle concentration, together with the Brownian motion of nanoparticles, yields higher heat transfer enhancement and thermal performance factor than water. The literature suggests that nanofluid flow in an inclined pipe at transition-to-turbulent regimes has been neglected despite its significance in real-life applications. Therefore, a dedicated investigation has been carried out in this thesis to understand the heat transfer behaviour and performance of an inclined pipe under transition flow conditions. It is found that the heat transfer rate decreases as the pipe inclination angle increases. Also, a higher heat transfer rate is found for a horizontal pipe under forced convection than for an inclined pipe under mixed convection.
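As a baseline for the conductivity enhancement discussed above, the classical Maxwell model for a dilute suspension of spherical particles can be sketched. This is a standard textbook relation, not one of the thesis's CFD models, and the fluid and particle conductivities below are assumed illustrative values:

```python
# Maxwell effective-conductivity model for a dilute suspension of spheres:
# k_eff/k_f = (k_p + 2 k_f + 2 phi (k_p - k_f)) / (k_p + 2 k_f - phi (k_p - k_f))
def maxwell_k_eff(k_f, k_p, phi):
    """Effective conductivity of base fluid k_f with particle fraction phi."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

k_water = 0.613   # W/m-K near room temperature
k_al2o3 = 40.0    # W/m-K, alumina nanoparticles (assumed)
for phi in (0.01, 0.05, 0.10):
    ratio = maxwell_k_eff(k_water, k_al2o3, phi) / k_water
    print(f"phi = {phi:.2f}: k_eff/k_f = {ratio:.3f}")
```

Even this simple static model predicts a noticeable conductivity gain at a few percent loading; the thesis's point is that effects the model ignores, such as Brownian motion of the particles, can push the heat transfer enhancement further.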
Abstract:
Objectives: In contrast to other countries, surgery still represents the common invasive treatment for varicose veins in Germany. However, radiofrequency ablation, e.g. ClosureFast, is becoming increasingly popular in other countries due to potentially better results and reduced side effects. This treatment option may incur lower follow-up costs and is a more convenient procedure for patients, which could justify its introduction into the statutory benefits catalogue. We therefore aim to calculate the budget impact of a general reimbursement of ClosureFast in Germany. Methods: To assess the budget impact of including ClosureFast in the German statutory benefits catalogue, we developed a multi-cohort Markov model and compared the costs of a “World with ClosureFast” with a “World without ClosureFast” over a time horizon of five years. To address the uncertainty of input parameters, we conducted three different types of sensitivity analysis (one-way, scenario, probabilistic). Results: In the base-case scenario, the introduction of the ClosureFast system for the treatment of varicose veins saves about €19.1 million over a time horizon of five years in Germany. However, the results scatter in the sensitivity analyses due to the limited evidence for some key input parameters. Conclusions: The results of the budget impact analysis indicate that a general reimbursement of ClosureFast has the potential to be cost-saving for the German Statutory Health Insurance.
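The structure of such a budget-impact comparison can be sketched with a heavily simplified cohort model; every number below is a placeholder, not a parameter of the published analysis, and the sketch follows a single yearly cohort rather than the full multi-cohort Markov model:

```python
# Toy budget-impact sketch with placeholder costs and recurrence rates.
YEARS = 5
COHORT = 10_000  # new patients treated per year (assumed)

def five_year_cost(unit_cost, annual_recurrence, retreat_cost):
    """Cost of treating one cohort and its recurrences over the horizon."""
    total, at_risk = unit_cost * COHORT, COHORT
    for _ in range(YEARS - 1):
        recurrences = at_risk * annual_recurrence
        total += recurrences * retreat_cost
        at_risk -= recurrences
    return total

# "World without" vs "World with" ClosureFast (all figures assumed)
surgery = five_year_cost(unit_cost=2000, annual_recurrence=0.05, retreat_cost=2000)
closurefast = five_year_cost(unit_cost=1800, annual_recurrence=0.03, retreat_cost=1800)
print(f"budget impact per cohort = EUR {surgery - closurefast:,.0f} saved")
```

The actual analysis layers several such cohorts (one entering each year), tracks health states as a Markov chain, and varies the uncertain inputs in one-way, scenario, and probabilistic sensitivity analyses.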
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include the fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects in size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements in interconnect power and delay by moving the routing problem into the third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements in performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal-reliability aware physical and architectural 3D design techniques, high performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbates the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss the possible avenues for improvement of this work in the future.
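The heat-removal limitation can be illustrated with a back-of-envelope series thermal-resistance model; the per-layer and heat-sink resistances below are placeholders, not measured data:

```python
# Back-of-envelope sketch of why die stacking worsens heat removal:
# the die furthest from the heat sink sees all stack resistances in series.
def junction_temp(t_ambient, power_w, layer_resistances):
    """Worst-case junction temperature for resistances in series (K/W)."""
    return t_ambient + power_w * sum(layer_resistances)

R_per_layer = 0.8   # K/W per die + bonding interface (assumed)
R_sink = 0.5        # K/W heat-sink resistance (assumed)
for layers in (1, 2, 4):
    t = junction_temp(25.0, 50.0, [R_sink] + [R_per_layer] * layers)
    print(f"{layers} layer(s): {t:.1f} C")
```

Each added layer raises the worst-case junction temperature at fixed power, which is why embedded cooling such as micro-fluidic channels, rather than air cooling alone, is argued to be necessary for aggressive 3D stacks.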
3D Surveying and Data Management towards the Realization of a Knowledge System for Cultural Heritage
Abstract:
The research activities involved the application of Geomatic techniques in the Cultural Heritage field, following two themes. First, the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case concerns the activities for the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the monument, both geometric and radiometric aspects were crucial. The final product was the basis of a 3D information system, a shared tool through which the different figures involved in the restoration activities contributed in a multidisciplinary approach. Second, the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by integrating classical and modern Geomatic techniques, and the point cloud representing the church was used for the development of an HBIM model, where the relevant information connected to the building could be stored and georeferenced. A second application concerns the domus of Obellio Firmo in Pompeii, also surveyed by integrating classical and modern Geomatic techniques. A historical analysis permitted the definition of construction phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documentary, analytical and reconstructive.
Abstract:
Malignant Pleural Mesothelioma (MPM) is a very aggressive cancer whose incidence is growing worldwide. MPM escapes the classical models of carcinogenesis and lacks a distinctive genetic fingerprint, leaving obscure the molecular events that lead to tumorigenesis. This severely limits the therapeutic options and the availability of specific biomarkers, together making MPM one of the deadliest cancers. Here we combined a functional genome-wide loss-of-function CRISPR/Cas9 screen with patients' transcriptomic and clinical data to identify genes essential for MPM progression. In addition, we explored the contribution of non-coding RNAs to MPM progression by analysing gene expression profiles and clinical data from the MESO-TCGA dataset. We identified TRIM28 and the lncRNA LINC00941 as new vulnerabilities of MPM, associated with disease aggressiveness and bad patient outcome. TRIM28 is a multi-domain protein involved in many processes, including transcription regulation. We showed that TRIM28 silencing impairs MPM cells' growth and clonogenicity by blocking cells in mitosis. RNA-seq profiling showed that TRIM28 loss abolished the expression of major mitotic players. Our data suggest that TRIM28 is part of the B-MYB/FOXM1-MuvB complex that specifically drives the activation of mitotic genes, controlling the timing of mitosis. In parallel, we found LINC00941 to be strongly associated with reduced survival probability in MPM patients. LINC00941 knockdown profoundly reduced MPM cells' growth, migration and invasion, accompanied by changes in morphology, cytoskeleton organization and cell-cell adhesion properties. RNA-seq profiling showed that LINC00941 knockdown impacts crucial functions of MPM, including HIF1α signalling.
Collectively, these data provide new insights into MPM biology and demonstrate that the integration of functional screening with patients' clinical data is a powerful tool to highlight new non-genetic cancer dependencies associated with a bad outcome in vivo, paving the way to new MPM-oriented targeted strategies and prognostic tools to improve patients' risk-based stratification.
Abstract:
Deep learning methods are extremely promising machine learning tools for analyzing neuroimaging data. However, their potential use in clinical settings is limited by the challenges of applying these methods to neuroimaging data. In this study, first, a type of data leakage caused by slice-level data splitting during the training and validation of a 2D CNN is surveyed, and a quantitative assessment of the model's performance overestimation is presented. Second, an interpretable, leakage-free deep learning software package, written in Python with a wide range of options, has been developed to conduct both classification and regression analyses. The software was applied to the study of mild cognitive impairment (MCI) in patients with small vessel disease (SVD) using multi-parametric MRI data, where the cognitive performance of 58 patients, measured by five neuropsychological tests, is predicted using a multi-input CNN model taking brain images and demographic data. Each of the cognitive test scores was predicted using different MRI-derived features. As MCI due to SVD has been hypothesized to be the effect of white matter damage, the DTI-derived features MD and FA produced the best prediction of the TMT-A score, which is consistent with the existing literature. In a second study, an interpretable deep learning system is developed, aimed at 1) classifying Alzheimer's disease patients and healthy subjects, 2) examining the neural correlates of the disease that cause cognitive decline in AD patients using CNN visualization tools, and 3) highlighting the potential of interpretability techniques to detect a biased deep learning model. Structural magnetic resonance imaging (MRI) data of 200 subjects was used by the proposed CNN model, which was trained using a transfer learning-based approach and produced a balanced accuracy of 71.6%. Brain regions in the frontal and parietal lobes showing cerebral cortex atrophy were highlighted by the visualization tools.
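The slice-level leakage problem generalizes beyond this study; a minimal scikit-learn sketch with synthetic subject IDs shows how a group-aware split removes subject overlap between training and validation sets:

```python
# Sketch of slice-level data leakage: if 2D slices from one subject land in
# both train and validation sets, validation performance is overestimated.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

n_subjects, slices_per_subject = 20, 10
subject_id = np.repeat(np.arange(n_subjects), slices_per_subject)
X = np.arange(subject_id.size)   # stand-ins for slice indices

# Leaky: slices shuffled independently of their subject
tr, va = train_test_split(X, test_size=0.3, random_state=0)
leaky_overlap = set(subject_id[tr]) & set(subject_id[va])

# Correct: split at the subject (group) level instead
gss = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
tr_g, va_g = next(gss.split(X, groups=subject_id))
clean_overlap = set(subject_id[tr_g]) & set(subject_id[va_g])

print(len(leaky_overlap), len(clean_overlap))  # many shared subjects vs 0
```

With a group-aware split, every subject's slices stay on one side of the split, so validation scores reflect generalization to unseen subjects rather than memorization of seen ones.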
Abstract:
The 30th ACM/SIGAPP Symposium on Applied Computing (SAC 2015), 13 to 17 April 2015, Embedded Systems track, Salamanca, Spain.