37 results for Apple Store
at Indian Institute of Science - Bangalore - India
Abstract:
Suppression of the aggregation of proteins has tremendous implications in biology and medicine. In the pharmaceutical industry, aggregation of therapeutically important proteins and peptides during storage reduces their efficacy and speed of action and, in many instances, leads to intoxication of the patient by the aggregate. Here we report the effect of gold nanoparticles (Au-NPs) in preventing the thermal and chemical aggregation of two unrelated proteins of different sizes, alcohol dehydrogenase (ADH, 84 kDa) and insulin (6 kDa), respectively, at physiological pH. Our principal observation is that there is a significant reduction (up to 95%) in the extent of aggregation of ADH and insulin in the presence of Au-NPs. Aggregation of these proteins at micromolar concentration is prevented using nanomolar or smaller amounts of gold nanoparticles, which is remarkable since the chaperones that prevent such aggregation in vivo are required in micromolar quantities. The prevention of aggregation of these two different proteins under two different denaturing environments establishes the role of Au-NPs as a protein aggregation prevention agent. The extent of prevention increases rapidly with the size of the gold nanoparticles. Protein molecules are physisorbed on the gold nanoparticle surface and thus become inaccessible to the denaturing agent in solution. This adsorption of proteins on Au-NPs has been established by a variety of techniques and assays.
Abstract:
Database management systems offer a reliable and attractive form of data organization for fast and economical information storage and processing in diverse applications. Even more important is that the information be easily accessible, through a suitable data sublanguage, to users with varied backgrounds, professional as well as casual. The language presented here, APPLE, is one such language for relational database systems; it is completely nonprocedural and well suited to users with little or no programming background. It is supported by an access path model which permits the user to formulate completely nonprocedural queries expressed solely in terms of attribute names. The data description language (DDL) and data manipulation language (DML) features of APPLE are also discussed. The underlying relational database has been implemented with the help of the DATATRIEVE-11 utility for record and domain definition, which is available on the PDP-11/35. The package is coded in Pascal and MACRO-11. Further, most of the limitations of the DATATRIEVE-11 utility have been eliminated in the interface package.
Abstract:
The benefits that accrue from the use of a design database include (i) reduced costs of preparing data for application programs and of producing the final specification, and (ii) the possibility of later reuse of the data stored in the database for other applications related to Computer Aided Engineering (CAE). An INTEractive Relational GRAphics Database (INTERGRAD) based on the relational model has been developed to create, store, retrieve and update the data related to two-dimensional drawings. INTERGRAD provides two languages, a Picture Definition Language (PDL) and a Picture Manipulation Language (PML). The software package has been implemented on a PDP 11/35 system under the RSX-11M version 3.1 operating system and uses a graphics facility consisting of a VT-11 graphics terminal, the DECgraphic 11 software and a light pen as input device.
Abstract:
Contention-based multiple access is a crucial component of many wireless systems. Multiple-packet reception (MPR) schemes that use interference cancellation techniques to receive and decode multiple packets that arrive simultaneously are known to be very efficient. However, the MPR schemes proposed in the literature require complex receivers capable of performing advanced signal processing over significant amounts of soft undecodable information received over multiple contention steps. In this paper, we show that local channel knowledge and elementary received signal strength measurements, which are available to many receivers today, can actively facilitate multiple-packet reception and even simplify the design of the interference-canceling receiver. We introduce two variants of a simple algorithm called Dual Power Multiple Access (DPMA) that use local channel knowledge to limit the receive power levels to two values that facilitate successive interference cancellation. The resulting receiver structure is markedly simpler, as it needs to process only the immediate received signal without having to store and process signals received previously. Remarkably, using a set of three feedback messages, the first variant, DPMA-Lite, achieves a stable throughput of 0.6865 packets per slot. Using four possible feedback messages, the second variant, Turbo-DPMA, achieves a stable throughput of 0.793 packets per slot, which is better than that of all contention algorithms known to date.
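To make the two-power-level idea concrete, the sketch below classifies a single contention slot under successive interference cancellation when every transmission arrives at one of two target receive power levels. It is an illustrative simplification, not the DPMA algorithm itself; the function name and the rule that exactly one high-power and one low-power packet can both be recovered are assumptions made for illustration.

```python
# Illustrative sketch (not the actual DPMA protocol): a slot is resolvable
# under two-level successive interference cancellation (SIC) when the
# receiver can decode the high-power transmission treating the low-power
# one as noise, subtract it, and then decode the low-power transmission.
def slot_outcome(n_high, n_low):
    """Classify a contention slot given how many nodes arrived at the
    high and low receive power levels (power control is assumed to put
    every transmission on one of the two target levels)."""
    if n_high == 0 and n_low == 0:
        return "idle"
    if n_high + n_low == 1:
        return "one packet decoded"
    if n_high == 1 and n_low == 1:
        return "two packets decoded via SIC"   # decode high, cancel it, decode low
    return "collision"                          # same-level packets cannot be separated

if __name__ == "__main__":
    for n_high, n_low in [(0, 0), (1, 0), (1, 1), (2, 1)]:
        print(n_high, n_low, "->", slot_outcome(n_high, n_low))
```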
Abstract:
This paper presents a simple hybrid computer technique for studying the transient behaviour of queueing systems. This method is superior to a stand-alone analog or digital solution: the hardware requirement of a purely analog technique is excessive, whereas the computation time of a purely digital one is appreciable. By using a hybrid computer, one can share the analog hardware and thus require fewer integrators. The digital processor can store the values, play them back at the required time instants and change the coefficients of the differential equations. By speeding up the integration on the analog computer, it is feasible to solve a large number of these equations very quickly. Hybrid simulation is superior even to the analytic technique, since in the latter case it is difficult to solve time-varying differential equations.
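As a rough illustration of the kind of differential equations involved, the sketch below integrates the transient state probabilities of a finite M/M/1/K queue (the forward Kolmogorov equations) with a simple Euler step; on the hybrid machine these integrations would run on the analog side. The arrival rate, service rate and buffer size are illustrative choices, not values from the paper.

```python
# Transient state probabilities p_n(t) of an M/M/1/K queue obey
#   dp_0/dt = -lam*p_0 + mu*p_1
#   dp_n/dt =  lam*p_{n-1} - (lam+mu)*p_n + mu*p_{n+1},  0 < n < K
#   dp_K/dt =  lam*p_{K-1} - mu*p_K
# Integrated here digitally with a simple Euler step for illustration.
def transient_mm1k(lam=0.8, mu=1.0, K=10, dt=1e-3, t_end=5.0):
    p = [1.0] + [0.0] * K                     # system starts empty
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (K + 1)
        dp[0] = -lam * p[0] + mu * p[1]
        for n in range(1, K):
            dp[n] = lam * p[n - 1] - (lam + mu) * p[n] + mu * p[n + 1]
        dp[K] = lam * p[K - 1] - mu * p[K]
        p = [p[n] + dt * dp[n] for n in range(K + 1)]
    return p

print(transient_mm1k())                       # state probabilities at t = 5
```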
Abstract:
We study a scheduling problem in a wireless network where vehicles are used as store-and-forward relays, a situation that might arise, for example, in practical rural communication networks. A fixed source node wants to transfer a file to a fixed destination node located beyond its communication range. In the absence of any infrastructure connecting the two nodes, we consider the possibility of communication using vehicles passing by. Vehicles arrive at the source node at renewal instants and are known to travel towards the destination node with average speed v sampled from a given probability distribution. The source node communicates data packets (or fragments) of the file to the destination node using these vehicles as relays. We assume that the vehicles communicate with the source node and the destination node only, and hence every packet communication involves two hops. In this setup, we study the source node's sequential decision problem of transferring packets of the file to vehicles as they pass by, with the objective of minimizing delay in the network. We study both the finite file size case and the infinite file size case. In the finite file size case, we aim to minimize the expected file transfer delay, i.e., the expected value of the maximum of the packet sojourn times. In the infinite file size case, we study the average packet delay minimization problem as well as the optimal tradeoff achievable between the average queueing delay at the source node buffer and the average transit delay in the relay vehicle.
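The following Monte Carlo sketch illustrates the finite-file-size objective, the expected value of the maximum of the packet sojourn times. The vehicle arrival process, speed distribution, one-packet-per-vehicle rule and the assumption that the whole file is at the source at time zero are all illustrative assumptions, not the paper's model.

```python
# Hedged illustration: the file transfer delay is the maximum of the packet
# sojourn times (waiting at the source plus transit on the relay vehicle),
# with the whole file assumed present at the source at time 0.
import random

def file_transfer_delay(n_packets, distance=10.0, mean_interarrival=5.0):
    t, sojourn_times = 0.0, []
    for _ in range(n_packets):                            # one packet per passing vehicle
        t += random.expovariate(1.0 / mean_interarrival)  # next vehicle arrival
        speed = random.uniform(10.0, 20.0)                # sampled average speed v
        sojourn_times.append(t + distance / speed)        # waiting time + transit delay
    return max(sojourn_times)                             # delay of the whole file

random.seed(1)
samples = [file_transfer_delay(n_packets=8) for _ in range(10_000)]
print("estimated expected file transfer delay:", sum(samples) / len(samples))
```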
Abstract:
Context sensitive pointer analyses based on Whaley and Lam’s bddbddb system have been shown to scale to large Java programs. We provide a technique to incorporate flow sensitivity for Java fields into one such analysis and obtain an escape analysis based on it. First, we express an intraprocedural field flow sensitive analysis, using Fink et al.’s Heap Array SSA form in Datalog. We then extend this analysis interprocedurally by introducing two new φ functions for Heap Array SSA Form and adding deduction rules corresponding to them. Adding a few more rules gives us an escape analysis. We describe two types of field flow sensitivity: partial (PFFS) and full (FFFS), the former without strong updates to fields and the latter with strong updates. We compare these analyses with two different (field flow insensitive) versions of Whaley-Lam analysis: one of which is flow sensitive for locals (FS) and the other, flow insensitive for locals (FIS). We have implemented this analysis on the bddbddb system while using the SOOT open source framework as a front end. We have run our analysis on a set of 15 Java programs. Our experimental results show that the time taken by our field flow sensitive analyses is comparable to that of the field flow insensitive versions while doing much better in some cases. Our PFFS analysis achieves average reductions of about 23% and 30% in the size of the points-to sets at load and store statements respectively and discovers 71% more “caller-captured” objects than FIS.
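A minimal sketch of the distinction between the two field flow sensitive variants: a weak update (as in PFFS) only adds to a field's points-to set, while a strong update (as in FFFS) may replace it when the updated object is known to be a single concrete object. The dictionary representation below is purely illustrative and is not the bddbddb/Datalog encoding used in the paper.

```python
# Weak vs. strong updates to field points-to sets (illustrative only).
from collections import defaultdict

field_pts = defaultdict(set)   # (abstract object, field) -> points-to set

def weak_update(obj, field, targets):
    field_pts[(obj, field)] |= targets            # old targets are kept

def strong_update(obj, field, targets, is_singleton):
    if is_singleton:                              # receiver is exactly one runtime object
        field_pts[(obj, field)] = set(targets)    # old targets are killed
    else:
        weak_update(obj, field, targets)          # must fall back to a weak update

weak_update("o1", "f", {"a"})
strong_update("o1", "f", {"b"}, is_singleton=True)
print(field_pts[("o1", "f")])                     # {'b'}: the store killed 'a'
```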
Abstract:
When the male is the heterogametic sex (XX♀-XY♂ or XX♀-XO♂), as in Drosophila, orthopteran insects, mammals and Caenorhabditis elegans, X-linked genes are subject to dosage compensation: the single X in the male is functionally equivalent to the two Xs in the female. However, when the female is heterogametic (ZZ♂-ZW♀), as in birds, butterflies and moths, Z-linked genes are apparently not dosage-compensated. This difference between X-linked and Z-linked genes raises fundamental questions about the role of dosage compensation. It is argued that (i) genes which require dosage compensation are primarily those that control morphogenesis and the prospective body plan; (ii) the products of these genes are required in disomic doses especially during oogenesis and early embryonic development; (iii) heterogametic females synthesize and store during oogenesis itself morphogenetically essential gene products, including those encoded by Z-linked genes, in large quantities; (iv) the abundance of these gene products in the egg and their persistence relatively late into embryogenesis enables heterogametic females to overcome the monosomic state of the Z chromosome in ZW embryos. Female heterogamety is predominant in birds, reptiles and amphibians, all of which have megalecithal eggs containing several thousand times more maternal RNA and other maternal messages than the eggs of mammals, Caenorhabditis elegans, or Drosophila. This increase in egg size, yolk content and, concomitantly, the size of the maternal legacy to the embryo, may have facilitated female heterogamety and the absence of dosage compensation.
Abstract:
An associative memory with parallel architecture is presented. The neurons are modelled by perceptrons having only binary, rather than continuous-valued, input. To store m elements, each having n features, m neurons each with n connections are needed. The n features are coded as an n-bit binary vector. The weights of the n connections that store the n features of an element take only two values, -1 and 1, corresponding to the absence or presence of a feature. This makes the learning very simple and straightforward. For an input corrupted by binary noise, the associative memory indicates the element that is closest (in terms of Hamming distance) to the noisy input. In the case where the noisy input is equidistant from two or more stored vectors, the associative memory indicates two or more elements simultaneously. From some simple experiments performed on the human memory and also on the associative memory, it can be concluded that the associative memory presented in this paper is in some respects more akin to human memory than the Hopfield model.
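A minimal sketch of the recall rule described above: each stored element is one perceptron whose n weights are +1 or -1 according to the presence or absence of a feature, and recall returns every stored element at minimum Hamming distance from the (possibly noisy) binary input, so ties yield several elements. The feature vectors below are illustrative.

```python
# Perceptron-style associative memory with binary +/-1 weights.
import numpy as np

def store(patterns):
    """patterns: m x n array of 0/1 feature vectors -> m x n weights in {-1, +1}."""
    return 2 * np.asarray(patterns) - 1

def recall(weights, x):
    x = 2 * np.asarray(x) - 1                   # map 0/1 input to -1/+1
    scores = weights @ x                        # highest score = smallest Hamming distance
    return np.flatnonzero(scores == scores.max())   # ties -> several elements indicated

W = store([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
print(recall(W, [1, 0, 1, 0]))                  # index of the nearest stored element(s)
```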
Abstract:
In connection with the construction of a prototype thermochemical energy conversion system to store solar heat, the thermal dissociation of Ca(OH)2 pellets and the hydration of CaO have been investigated in some detail for application to the system. The inorganic substance is very attractive as a material for long-term heat storage, but the molar density changes associated with the reaction are fairly large. This factor has therefore been taken into account in the kinetic equation. The importance of additives and pellet size has been discussed in terms of the reactivity and strength of the pellets. An analysis has been attempted for the case in which the chemical reaction is important. The deformation of the pellets was observed during hydration.
Explicit and Optimal Exact-Regenerating Codes for the Minimum-Bandwidth Point in Distributed Storage
Abstract:
In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading β units of data from each. Dimakis et al. show that the repair bandwidth dβ can be considerably reduced if each node stores slightly more than the minimum required, and they characterize the tradeoff between the amount of storage per node and the repair bandwidth. In the exact regeneration variation, unlike functional regeneration, the replacement for a failed node is required to store data identical to that in the failed node. This greatly reduces the complexity of system maintenance. The main result of this paper is an explicit construction of codes, for all values of the system parameters, at one of the two most important and extreme points of the tradeoff, the Minimum Bandwidth Regenerating (MBR) point, which performs optimal exact regeneration of any failed node. A second result is a non-existence proof showing that, with one possible exception, no other point on the tradeoff can be achieved for exact regeneration.
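For reference, the Minimum Bandwidth Regenerating (MBR) point of the storage-bandwidth tradeoff of Dimakis et al. is commonly written as below, where B is the total file size, α the storage per node and β the download per helper node; this is the standard characterization from the prior work cited above, not a new result of this paper.

```latex
% MBR point of the Dimakis et al. tradeoff: storage per node equals the
% total repair bandwidth, and both are determined by B, k and d.
\[
  \alpha_{\mathrm{MBR}} \;=\; d\,\beta_{\mathrm{MBR}}
  \;=\; \frac{2Bd}{k\,(2d - k + 1)},
  \qquad
  \beta_{\mathrm{MBR}} \;=\; \frac{2B}{k\,(2d - k + 1)} .
\]
```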
Abstract:
CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted at a single-user environment) such as encyclopedias, magazines and games. With CD-ROM capacities of up to 3 GB becoming available in the near future, they will form an integral part of Video on Demand (VoD) servers to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and make a detailed study of multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem, by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory-hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROMs in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the existing request streams. The algorithm also supports operations such as fast forward and replay. Finally, in the third section, we discuss the problem of optimal placement of MPEG streams on CD-ROMs.
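A minimal sketch of the C-SCAN ordering on which the proposed scheduler is based: pending requests ahead of the read head are served in increasing position order, after which the head wraps around to the lowest outstanding position. The block addresses are illustrative, and the sketch omits the CD-ROM-specific adaptations and buffer-management relations developed in the paper.

```python
# C-SCAN (Circular SCAN) request ordering for a single read head.
def c_scan_order(head, requests):
    ahead  = sorted(r for r in requests if r >= head)   # served on the current sweep
    behind = sorted(r for r in requests if r < head)    # served after the wrap-around
    return ahead + behind

print(c_scan_order(head=500, requests=[90, 1200, 450, 800, 2000, 300]))
# -> [800, 1200, 2000, 90, 300, 450]
```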
Abstract:
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
Abstract:
A Wireless Sensor Network (WSN) powered using harvested energy is limited in its operation by the instantaneous power available. Since energy availability can differ across nodes in the network, network setup and collaboration are non-trivial tasks. At the same time, when there is excess energy, exciting node collaboration possibilities exist that are often not feasible with battery-driven sensor networks. Operations such as sensing, computation, storage and communication are required to achieve the common goal of any sensor network. In this paper, we design and implement a smart application that uses a Decision Engine and morphs itself into an energy-matched application. The results are based on measurements using IRIS motes running on solar energy. We have done away with batteries; instead, low-leakage supercapacitors are used to store harvested energy. The Decision Engine utilizes two pieces of data to provide its recommendations. First, a history-based energy prediction model assists the engine with information about incoming energy. The second input is the energy cost database for operations. The energy-driven Decision Engine calculates the energy budgets and recommends the best possible set of operations. Under excess energy conditions, the Decision Engine promiscuously sniffs the neighborhood looking for all possible data from neighbors. This data includes the neighbors' energy levels and sensor data. Equipped with this data, nodes establish detailed data correlations and thus enhance collaboration, for example by filling in data gaps on behalf of nodes hibernating under low energy conditions. The results are encouraging: the node and network lifetimes of sensor nodes running the smart application are found to be significantly higher than those of the base application.
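A hedged sketch of the energy-matched decision step: a predicted energy budget from the history-based prediction model is compared against a per-operation energy cost database, and a set of operations that fits the budget is recommended. The operation names, costs, values and the greedy policy below are illustrative assumptions, not the actual Decision Engine.

```python
# Illustrative energy-budget matching; costs and values are made-up numbers.
ENERGY_COST = {"sense": 2.0, "compute": 1.5, "store": 0.5, "transmit": 5.0, "sniff_neighbors": 3.0}
VALUE       = {"sense": 5,   "compute": 3,   "store": 2,   "transmit": 8,   "sniff_neighbors": 4}

def recommend(predicted_energy_budget):
    chosen, remaining = [], predicted_energy_budget
    # greedy by value per unit energy; a real engine could use other policies
    for op in sorted(ENERGY_COST, key=lambda o: VALUE[o] / ENERGY_COST[o], reverse=True):
        if ENERGY_COST[op] <= remaining:
            chosen.append(op)
            remaining -= ENERGY_COST[op]
    return chosen

print(recommend(predicted_energy_budget=7.0))   # operations that fit a moderate budget
```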