105 results for collaborative framework
Abstract:
An electron-rich porous metal-organic framework (MOF) has been synthesized that acts as an effective heterogeneous catalyst for Diels-Alder reactions through encapsulation of the reactants in the confined nano-channels of the framework.
Abstract:
The problem of semantic interoperability arises when integrating applications across different task domains of the product life cycle. A new shape-function-relationship (SFR) framework is proposed as a taxonomy, based on which an ontology is developed. An ontology based on the SFR framework, which captures explicit definitions of terminology and knowledge relationships in terms of shape, function and relationship descriptors, offers an attractive approach to resolving the semantic interoperability issue. Since all instances of terms are based on a single taxonomy with a formal classification, mapping of terms requires only a simple check on the attributes used in the classification. As a preliminary study, the framework is used to develop an ontology of terms used in the aero-engine domain, and this ontology is used to resolve the semantic interoperability problem in the integration of design and maintenance. Since the framework allows a single term to have multiple classifications, context-dependent usage of terms can also be handled. Automating the classification of terms and establishing the completeness of the classification scheme are currently being addressed.
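As an illustration only of how a single-taxonomy classification reduces term mapping to an attribute comparison (all class, attribute and term names below are hypothetical, not taken from the paper), a minimal sketch in Python:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SFRTerm:
        """A term classified by shape, function and relationship descriptors."""
        name: str
        shape: frozenset
        function: frozenset
        relationship: frozenset

    def same_concept(a: SFRTerm, b: SFRTerm) -> bool:
        # Because both terms are classified against one taxonomy, mapping a
        # design-domain term to a maintenance-domain term is just a check on
        # the descriptor attributes, not a free-text comparison of names.
        return (a.shape == b.shape and a.function == b.function
                and a.relationship == b.relationship)

    # Hypothetical example: the same part named differently in two domains.
    design_term = SFRTerm("combustor liner", frozenset({"annular"}),
                          frozenset({"contain-flame"}), frozenset({"inside:casing"}))
    maint_term = SFRTerm("flame tube", frozenset({"annular"}),
                         frozenset({"contain-flame"}), frozenset({"inside:casing"}))
    assert same_concept(design_term, maint_term)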
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is a matroidal network associated with a representable matroid. The current work attempts to establish a connection between matroid theory and network-error correcting codes. In a vein similar to the theory connecting matroids and network coding, we abstract the essential aspects of network-error correcting codes to arrive at the definition of a matroidal error correcting network. An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error correcting network code if and only if it is a matroidal error correcting network associated with a representable matroid. Therefore, constructing such network-error correcting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa.
Abstract:
We describe a framework to explore and visualize the movement of cloud systems. Using techniques from computational topology and computer vision, our framework allows the user to study this movement at various scales in space and time. Such movements can have large temporal and spatial scales, such as the Madden-Julian Oscillation (MJO), which has a spatial scale ranging from 1000 km to 10000 km and an oscillation period of around 40 days. Embedded within these larger-scale oscillations is a hierarchy of cloud clusters with smaller spatial and temporal scales, such as the Nakazawa cloud clusters. These smaller cloud clusters, while being part of the equatorial MJO, sometimes move at speeds different from the larger scale and in a direction opposite to that of the MJO envelope. Hitherto, one could only speculate about such movements by selectively analysing data and using a priori knowledge of such systems. Our framework automatically delineates such cloud clusters and does not depend on the prior experience of the user to define them. Analysis using our framework also shows that most tropical systems, such as cyclones, contain multi-scale interactions between clouds and cloud systems. We show the effectiveness of our framework by tracking organized cloud systems during a rainfall event that occurred at Mumbai, India, in July 2005 and during cyclone Aila, which occurred in the Bay of Bengal in May 2009.
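For orientation only: the paper's delineation is topology-based, but the general idea of linking cloud clusters across time steps can be sketched with a much simpler threshold-and-overlap scheme (the threshold, overlap criterion and array names below are illustrative assumptions, not the paper's method):

    import numpy as np
    from scipy import ndimage  # connected-component labelling

    def track_clusters(frames, tb_threshold=240.0, min_overlap=0.3):
        """Very simplified stand-in for cluster tracking: threshold brightness
        temperature, label connected regions, and link regions in consecutive
        frames by area overlap."""
        prev_labels = None
        tracks = []
        for t, frame in enumerate(frames):
            mask = frame < tb_threshold              # cold cloud tops
            labels, n = ndimage.label(mask)
            if prev_labels is not None:
                for k in range(1, n + 1):
                    region = labels == k
                    overlap = np.bincount(prev_labels[region].ravel())
                    overlap[0] = 0                   # ignore background
                    if overlap.size > 1 and overlap.max() >= min_overlap * region.sum():
                        tracks.append((t - 1, int(overlap.argmax()), t, k))
            prev_labels = labels
        return tracks  # list of (prev_time, prev_cluster, time, cluster) links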
Abstract:
Granger causality is increasingly being applied to multi-electrode neurophysiological and functional imaging data to characterize directional interactions between neurons and brain regions. For a multivariate dataset, one might be interested in different subsets of the recorded neurons or brain regions. According to the current estimation framework, for each subset, one conducts a separate autoregressive model fitting process, introducing the potential for unwanted variability and uncertainty. In this paper, we propose a multivariate framework for estimating Granger causality. It is based on spectral density matrix factorization and offers the advantage that the estimation of such a matrix needs to be done only once for the entire multivariate dataset. For any subset of recorded data, Granger causality can be calculated through factorizing the appropriate submatrix of the overall spectral density matrix.
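For reference, the factorization referred to above and the standard Geweke-type spectral measure that can be computed from it (generic notation, not copied from the paper): the estimated spectral density matrix is factorized as

    S(f) = H(f)\,\Sigma\,H^{*}(f),

and the Granger causality from channel y to channel x at frequency f is

    I_{y \to x}(f) = \ln \frac{S_{xx}(f)}{S_{xx}(f) - \left(\Sigma_{yy} - \Sigma_{xy}^{2}/\Sigma_{xx}\right)\left|H_{xy}(f)\right|^{2}},

where H(f) is the transfer function and \Sigma the noise covariance returned by the factorization; for any subset of channels, the corresponding submatrix of S(f) is refactorized and the same expression is applied.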
Abstract:
Elasticity in cloud systems provides the flexibility to acquire and relinquish computing resources on demand. However, in current virtualized systems resource allocation is mostly static: resources are allocated during VM instantiation, and any change in workload that requires a significant increase or decrease in resources is handled by VM migration. Hence, cloud users tend to characterize their workloads at a coarse-grained level, which potentially leads to under-utilized VM resources or an under-performing application. A more flexible and adaptive resource allocation mechanism would benefit variable workloads, such as those seen by web servers. In this paper, we present an elastic resource framework for the IaaS cloud layer that addresses this need. The framework provides an application workload forecasting engine that predicts the expected demand at run time; the prediction is input to the resource manager, which modulates resource allocation accordingly. Depending on the prediction error, resources can be over-allocated or under-allocated relative to the actual demand made by the application. Over-allocation leads to unused resources, while under-allocation can degrade performance. To strike a good trade-off between the two, we derive an excess cost model in which excess allocated resources are captured as an over-allocation cost and under-allocation is captured as a penalty cost for violating the application's service level agreement (SLA). The confidence interval of the predicted workload is used to minimize this excess cost with minimal effect on SLA violations. An example case study of an academic institute's web server workload is presented. Using the confidence interval to minimize excess cost, we achieve a significant reduction in the resource allocation requirement while restricting application SLA violations to below 2-3%.
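A minimal sketch of the kind of excess cost trade-off described above (the cost constants, function names and the use of an upper confidence bound are assumptions for illustration, not the paper's exact model):

    def excess_cost(allocated, demand, c_over, c_sla):
        """Over-allocation is billed per unused unit; under-allocation is
        penalized per unit of unmet demand (an SLA-violation proxy)."""
        over = max(allocated - demand, 0.0)
        under = max(demand - allocated, 0.0)
        return c_over * over + c_sla * under

    def allocation_from_forecast(mean_forecast, ci_halfwidth, risk_factor=1.0):
        # Allocating at an upper confidence bound of the forecast trades a small,
        # bounded over-allocation cost against a lower chance of SLA penalties.
        return mean_forecast + risk_factor * ci_halfwidth

    # Hypothetical numbers: forecast 80 req/s with +/- 10 confidence half-width,
    # while the actual demand turns out to be 95 req/s.
    alloc = allocation_from_forecast(80.0, 10.0)
    print(excess_cost(alloc, 95.0, c_over=1.0, c_sla=20.0))  # penalty term dominates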
Abstract:
We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel in terms of several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio to maximize SNR. The read channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer-detector indicates approximately 5.5 dB of SNR gain over uncoded data.
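As a pointer to the geometric relations behind the parameters named above (generic notation, not the paper's full SNR definition): with user areal bit density D_u and code rate R, the channel bit density is D_c = D_u / R; with bit aspect ratio BAR = T_w / B_l (track width over down-track bit length) and D_c = 1/(B_l T_w), it follows that

    B_l = \sqrt{\frac{1}{D_c\,\mathrm{BAR}}}, \qquad T_w = \mathrm{BAR}\,B_l = \sqrt{\frac{\mathrm{BAR}}{D_c}},

so varying BAR at a fixed density redistributes the bit area between the down-track and cross-track directions, which is the degree of freedom optimized for SNR in the abstract above.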
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to account for the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value receives contributions from earthquakes of different magnitudes with varying probabilities. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, followed by the performance-based method to evaluate the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. In an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out based on 450 borehole records obtained in the study area. The results from the CPT data match well with those obtained from the corresponding analysis with SPT data.
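For context, a performance-based return-period calculation of the kind described above typically takes the following standard form (generic symbols, not necessarily the article's notation):

    \Lambda_{FS^{*}} = \sum_{i}\sum_{j} P\left[FS_L < FS^{*} \mid a_i, m_j\right]\,\Delta\lambda_{a_i,m_j}, \qquad T_R = \frac{1}{\Lambda_{FS^{*}}},

where \Delta\lambda_{a_i,m_j} is the incremental annual rate of ground motions with amplitude a_i and magnitude m_j obtained from the PSHA deaggregation, P[\cdot] is the conditional probability (from the SPT- or CPT-based model) that the factor of safety against liquefaction falls below the chosen threshold FS^{*}, and T_R is the resulting return period of liquefaction.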
Abstract:
Arranging tin nanoparticles in a porous matrix to achieve an anode with high specific capacity and high rate capability for lithium-ion batteries is a formidable challenge. This article discusses a simple and novel synthesis that arranges tin nanoparticles with carbon in a porous configuration for application as an anode in lithium-ion batteries. Direct carbonization of the synthesized three-dimensional Sn-based MOF, [K2Sn2(1,4-bdc)3](H2O) (1) (bdc = benzenedicarboxylate), resulted in stabilization of tin nanoparticles in a porous carbon matrix (abbreviated as Sn@C). Sn@C exhibited remarkably high electrochemical stability towards lithium (tested over 100 charge and discharge cycles) and high specific capacities over a wide range of operating currents (0.2-5 A g-1). The novel synthesis strategy for obtaining Sn@C from a single precursor, as discussed herein, provides an optimal combination of particle size and dispersion for buffering the severe volume changes due to the Li-Sn alloying reaction and provides fast pathways for lithium and electron transport.
Abstract:
Using first-principles calculations, we show that both the storage capacity and the desorption temperature of MOFs can be significantly enhanced by decorating pyridine (a common linker in MOFs) with metal atoms. The storage capacity of the metal-pyridine complexes is found to depend on the type of decorating metal atom. Among the 3d transition metal atoms, Sc turns out to be the most efficient, storing up to four H2 molecules. Most importantly, Sc does not suffer dimerization on the surface of pyridine, keeping the storage capacity of every metal atom intact. Based on these findings, we propose a metal-decorated pyridine-based MOF that has the potential to meet the required H2 storage capacity for vehicular usage.
Pressure-Induced Bond Rearrangement and Reversible Phase Transformation in a Metal-Organic Framework
Abstract:
Pressure-induced phase transformations (PIPTs) occur in a wide range of materials. In general, the bonding characteristics before and after the PIPT remain invariant in most materials, and any bond rearrangement is usually irreversible due to the strain induced under pressure. A reversible PIPT associated with a substantial bond rearrangement has been found in a metal-organic framework material, namely [tmenH2][Er(HCOO)4]2 (tmenH2^2+ = N,N,N',N'-tetramethylethylenediammonium). The transition is first-order and is accompanied by a unit cell volume change of about 10%. High-pressure single-crystal X-ray diffraction studies reveal the complex bond rearrangement through the transition. The reversible nature of the transition is confirmed by means of independent nanoindentation measurements on single crystals.
Abstract:
In this paper we present a framework for realizing arbitrary instruction set extensions (IEs) that are identified post-silicon. The proposed framework has two components, viz. an IE synthesis methodology and the architecture of a reconfigurable data-path for realization of such IEs. The IE synthesis methodology ensures maximal utilization of resources on the reconfigurable data-path. In this context we present the techniques used to realize IEs for applications that demand high throughput or that must process data streams. The reconfigurable hardware, called HyperCell, comprises a reconfigurable execution fabric, which is a collection of interconnected compute units. A typical use case of HyperCell is one in which it acts as a co-processor with a host and accelerates the execution of IEs that are defined post-silicon. We demonstrate the effectiveness of our approach by evaluating the performance of some well-known integer kernels realized as IEs on HyperCell. Our methodology for realizing IEs through HyperCells permits overlapping of potentially all memory transactions with computations. By fully pipelining the data-path, we show significant performance improvements for streaming applications over general-purpose processor based solutions.
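The idea of overlapping memory transactions with computation can be illustrated, in a generic way unrelated to HyperCell's actual micro-architecture, by a double-buffered pipeline (all names below are illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def process_stream(blocks, load, compute):
        """Generic double-buffering: while block i is being computed,
        block i+1 is already being fetched, hiding memory latency behind
        computation (the effect attributed above to a fully pipelined data-path)."""
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            pending = io.submit(load, blocks[0])
            for nxt in blocks[1:]:
                data = pending.result()          # wait for the prefetched block
                pending = io.submit(load, nxt)   # start fetching the next block
                results.append(compute(data))    # compute overlaps with the fetch
            results.append(compute(pending.result()))
        return results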
Abstract:
Human Leukocyte Antigen (HLA) plays an important role in presenting foreign pathogens to our immune system, thereby eliciting early immune responses. HLA genes are highly polymorphic, giving rise to diverse antigen presentation capability. An important factor contributing to the enormous variation in individual responses to diseases is the difference in their HLA profiles, and this heterogeneity in allele-specific disease responses shapes the overall epidemiological outcome of a disease. Here we propose an agent-based computational framework, capable of incorporating allele-specific information, to analyze disease epidemiology. The framework assumes an SIR model to estimate average disease transmission and recovery rates. Using an epitope prediction tool, it performs sequence-based epitope detection for a given pathogenic genome and derives an allele-specific disease susceptibility index from the epitope detection efficiency. The resulting allele-specific disease transmission rate is then fed to the agent-based epidemiology model to analyze the disease outcome. The methodology presented here is potentially useful for understanding how a disease spreads and for identifying effective measures to control it.
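A compressed sketch of the pipeline described above, from epitope-detection efficiency to an allele-scaled SIR-style transmission; the allele names, numbers, population structure and scaling rule are illustrative assumptions, not the paper's calibration:

    import random

    # Hypothetical allele-specific epitope detection efficiencies: better epitope
    # presentation is assumed here to translate into lower susceptibility.
    epitope_efficiency = {"HLA-A*02:01": 0.8, "HLA-B*07:02": 0.5, "HLA-C*04:01": 0.2}
    susceptibility = {a: 1.0 - e for a, e in epitope_efficiency.items()}

    def simulate(n=1000, base_beta=0.3, gamma=0.1, days=120, seed=1):
        """Toy agent-based SIR: each agent carries one HLA allele that scales
        its probability of acquiring infection on contact."""
        rng = random.Random(seed)
        alleles = [rng.choice(list(susceptibility)) for _ in range(n)]
        state = ["S"] * n
        state[0] = "I"
        for _ in range(days):
            infected = sum(1 for s in state if s == "I")
            force = base_beta * infected / n  # homogeneous mixing, start-of-day count
            for i, s in enumerate(state):
                if s == "S" and rng.random() < force * susceptibility[alleles[i]]:
                    state[i] = "I"
                elif s == "I" and rng.random() < gamma:
                    state[i] = "R"
        return state.count("I") + state.count("R")  # final epidemic size

    print(simulate())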