Abstract:
A hybrid computer for structure-factor calculations in X-ray crystallography is described. The computer can calculate three-dimensional structure factors for up to 24 atoms in a single run and can generate the scattering functions of well over 100 atoms using the Vand et al. or the Forsyth and Wells approximations. The machine is essentially a digital computer with analog function generators, combining the economical data storage of digital systems with the simple computing circuitry of analog systems. The digital part serially selects the data, computes the arguments, and feeds them into specially developed high-precision digital-analog function generators; their outputs, d.c. voltages, are further processed by analog circuits, and finally a sequential adder, which employs a novel digital-voltmeter circuit, converts them back into digital form and accumulates them in a dekatron counter that displays the final result. The computer can also carry out one-, two-, or three-dimensional Fourier summation, although here the lack of storage space for the large number of coefficients involved is at present a serious limitation.
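For context, the quantity such a machine evaluates term by term is the standard crystallographic structure-factor sum (textbook background, not quoted from the abstract):

    F(hkl) = \sum_{j=1}^{N} f_j \exp\bigl[ 2\pi i \, ( h x_j + k y_j + l z_j ) \bigr]

Here f_j is the (approximated) scattering function of atom j, (x_j, y_j, z_j) are its fractional coordinates, and (h, k, l) are the Miller indices of the reflection. On this reading, the digital section computes the arguments 2\pi ( h x_j + k y_j + l z_j ) while the analog function generators supply the f_j-weighted sine and cosine contributions that the sequential adder then accumulates.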
Abstract:
Mass spectrometry (MS) has become a standard tool for identifying metabolites in biological tissues, and metabolomics is slowly being acknowledged as a legitimate research discipline for characterizing biological conditions. The computational analysis of metabolomics, however, lags behind the rapid advances in the analytical aspects, for two reasons. The first is the lack of a standardized data repository for mass spectra: each research institution is flooded with gigabytes of mass-spectral data from its own analytical groups and cannot host a world-class repository for mass spectra. The second is the lack of informatics experts fully experienced in spectral analysis. Both barriers must be overcome to establish a publicly free data server for MS analysis in metabolomics, as GenBank does for genomics and UniProt for proteomics. The workshop brought together bioinformaticians working on mass-spectral analysis in Finland and Japan with the goal of establishing a consortium to freely exchange and publicize mass spectra of metabolites measured on various platforms, computational tools to analyze spectra, and spectral knowledge computationally predicted from standardized data. This book contains the abstracts of the presentations given at the workshop. The programme consisted of oral presentations from Japan and Finland, invited lectures from Steffen Neumann (Leibniz Institute of Plant Biochemistry), Matej Oresic (VTT), Merja Penttila (VTT) and Nicola Zamboni (ETH Zurich), as well as free-form discussion among the participants. The event was funded by the Academy of Finland (grants 139203 and 118653), the Japan Society for the Promotion of Science (JSPS Japan-Finland Bilateral Seminar Program 2010) and the Department of Computer Science, University of Helsinki. We would like to thank all the people contributing to the technical programme and the sponsors for making the workshop possible. Helsinki, October 2010. Masanori Arita, Markus Heinonen and Juho Rousu
Abstract:
As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate it. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure: the data will not disappear with one computer breaking or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to store the same data at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and monitoring. All of them use cryptography to secure the names used for content and to protect the data from outsiders. Based on the knowledge gained, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends rather than letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports the development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we concluded that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
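The abstract does not spell out how Peerscape makes references verifiable; the Python sketch below shows one common approach consistent with the description, content addressing via a cryptographic hash, and is purely illustrative:

    import hashlib

    def content_ref(data: bytes) -> str:
        # A self-verifying reference: anyone holding the bytes can
        # recompute the digest and check it against the reference.
        return "sha256:" + hashlib.sha256(data).hexdigest()

    def verify(ref: str, data: bytes) -> bool:
        return ref == content_ref(data)

    album = b"family photo album bytes"          # placeholder payload
    ref = content_ref(album)
    assert verify(ref, album)                    # identity preserved when passed around
    assert not verify(ref, album + b"tampered")  # any modification breaks the reference

Because the reference is derived from the content itself, every replica, whether on a relative's machine or at an online service, can be checked independently, which is what lets one logical copy live in many places.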
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies, etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be a union, rather than the least common denominator, of the capabilities of existing guidelines. A number of principles were followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, and whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. Two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
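ISO 13567 encodes such categories as fields within the layer name itself. As a rough illustration only, the Python sketch below parses a name into the three categories the article treats as central (responsible party, building element, presentation); the field widths and the example layer name are assumptions for illustration, not quoted from the standard:

    from dataclasses import dataclass

    @dataclass
    class LayerName:
        agent: str          # party responsible for the information
        element: str        # type of building element shown
        presentation: str   # direct graphics vs. additional drawing information

    # Hypothetical fixed field widths; the real standard defines its own.
    FIELDS = (("agent", 2), ("element", 6), ("presentation", 2))

    def parse_layer(name: str) -> LayerName:
        values, pos = {}, 0
        for field, width in FIELDS:
            values[field] = name[pos:pos + width]
            pos += width
        return LayerName(**values)

    print(parse_layer("A-21----E-"))  # hypothetical architect / wall-element / graphics layer

A fixed-position syntax like this is what makes the orthogonality principle workable: each classification occupies its own field, so categories can be combined or subset independently.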
Abstract:
In smaller countries, where the key players in construction IT development tend to know each other personally and public R&D funding is concentrated in a few channels, IT roadmaps and strategies would seem to have a better chance of influencing development than in the bigger industrial countries. In this paper Finland and the RATAS project are presented as a historical case illustrating such impact. RATAS was initiated as a construction IT roadmap project in 1985, involving many of the key organisations and companies active in construction-sector development. Several of the individuals who took an active part in the project have played an important role in later developments, both in Finland and on the international scene. The central result of RATAS was the identification of what is nowadays called Building Information Modelling (BIM) technology as the key issue in getting IT into efficient use in the construction sector. BIM, earlier referred to as building product modelling, has been a key ingredient in many roadmaps since, and the subject of international standardisation efforts such as STEP and the IAI/IFCs. In hindsight the RATAS project can be seen as a forerunner whose impact also transcended national borders.
Abstract:
In recent times CFD tools have become increasingly useful in engineering design studies, especially in the area of aerospace vehicles. This is largely due to the advent of high-speed computing platforms together with the development of new, efficient algorithms. Algorithms based on kinetic schemes have been shown to be very robust, and meshless methods offer certain advantages over other methods. Preliminary investigations of blood-flow visualization through an artery using a CFD tool have shown encouraging results, which still need to be verified and validated.
Abstract:
The modes of binding of the alpha- and beta-anomers of D-galactose, D-fucose and D-glucose to the L-arabinose-binding protein (ABP) have been studied by energy minimization using the low-resolution (2.4 Å) X-ray data of the protein. These studies suggest that these sugars preferentially bind to ABP in the alpha-form, unlike L-arabinose, where both alpha- and beta-anomers bind almost equally. The best modes of binding of the alpha- and beta-anomers of D-galactose and D-fucose differ slightly in the nature of the possible hydrogen bonds with the protein. The residues Arg 151 and Asn 232 of ABP form bidentate hydrogen bonds with both L-arabinose and D-galactose, but not with D-fucose or D-glucose. However, in the case of L-arabinose, Arg 151 forms hydrogen bonds with the hydroxyl group at the C-4 atom and the ring oxygen, whereas in the case of D-galactose it forms bonds with the hydroxyl groups at the C-4 and C-6 atoms of the pyranose ring. The calculated conformational energies also predict that D-galactose is a better inhibitor than D-fucose and D-glucose, in agreement with kinetic studies. The weak inhibitor D-glucose binds preferentially to one domain of ABP, leading to the formation of a weaker complex. Thus these studies provide information about the most probable binding modes of these sugars and a theoretical explanation for the observed differences in their binding affinities.
Abstract:
The CCEM method (Contact Criteria and Energy Minimisation) has been developed and applied to study protein-carbohydrate interactions. The method uses available X-ray data, even on the native protein at low resolution (above 2.4 Å), to generate realistic models of a variety of proteins with various ligands. The two examples discussed in this paper are the arabinose-binding protein (ABP) and pea lectin. The X-ray crystal structure data reported for the ABP-β-L-arabinose complex at 2.8, 2.4 and 1.7 Å resolution differ drastically in the nature of the interactions they predict between the protein and the ligand. It is shown that, using the data at 2.4 Å resolution, the CCEM method generates complexes that are as good as those from the higher-resolution (1.7 Å) data. The method predicts some of the important hydrogen bonds between ligand and protein that are missing in the interpretation of the X-ray data at 2.4 Å resolution; the theoretically predicted hydrogen bonds are in good agreement with those reported at 1.7 Å resolution. Pea lectin has been solved only in the native form, at 3 Å resolution. Application of the CCEM method nevertheless enables us to generate complexes of pea lectin with methyl-α-D-glucopyranoside and methyl-2,3-dimethyl-α-D-glucopyranoside which explain well the available experimental data in solution.
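The paper's actual contact criteria and force field are not reproduced in the abstract; the Python sketch below only schematises the two-stage workflow the method's name describes (filter candidate ligand placements by contact criteria, then rank the survivors by energy), with toy geometry, placeholder thresholds, and a toy energy function:

    import math

    MIN_CONTACT, MAX_CONTACT = 2.4, 4.0   # placeholder distance thresholds in angstroms

    def passes_contact_criteria(ligand, site):
        # Reject steric clashes, but require at least one close contact.
        closest = min(math.dist(a, b) for a in ligand for b in site)
        return MIN_CONTACT <= closest <= MAX_CONTACT

    def energy(ligand, site):
        # Toy Lennard-Jones-like score standing in for the real force field.
        e = 0.0
        for a in ligand:
            for b in site:
                r = math.dist(a, b)
                e += (3.4 / r) ** 12 - (3.4 / r) ** 6
        return e

    def ccem(site, candidate_placements):
        # Stage 1: contact-criteria filter; stage 2: energy ranking
        # (full minimisation reduced here to picking the best survivor).
        survivors = [p for p in candidate_placements
                     if passes_contact_criteria(p, site)]
        return min(survivors, key=lambda p: energy(p, site), default=None)

    site = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
    candidates = [[(0.0, 0.0, d)] for d in (1.0, 3.0, 5.0)]
    print(ccem(site, candidates))  # only the middle placement survives the filter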
Abstract:
Let $G = (V, E)$ be a weighted undirected graph with nonnegative edge weights. An estimate $\hat{\delta}(u, v)$ of the actual distance $d(u, v)$ between $u, v \in V$ is said to be of stretch $t$ if and only if $d(u, v) \le \hat{\delta}(u, v) \le t \cdot d(u, v)$. Computing all-pairs small-stretch distances efficiently, both in terms of time and space, is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs $t$-stretch distances for a whole range of stretch $t$, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
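As a concrete illustration of the stretch definition only (not of the paper's algorithm), the Python sketch below computes exact distances by BFS on a small unweighted graph, forms estimates by routing every query through a single pivot vertex, and measures the realised stretch; the triangle inequality guarantees the estimate never undershoots:

    from collections import deque

    def bfs_dist(adj, s):
        # Exact single-source distances in an unweighted graph.
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # a 4-cycle
    exact = {u: bfs_dist(adj, u) for u in adj}

    pivot = 0
    def estimate(u, v):
        # d(u, pivot) + d(pivot, v): always an upper bound on d(u, v).
        return exact[u][pivot] + exact[pivot][v]

    worst = max(estimate(u, v) / exact[u][v]
                for u in adj for v in adj if u != v)
    print(worst)   # 3.0: this naive scheme realises stretch 3 on the 4-cycle

The schemes studied in the paper achieve provable stretch bounds with far better time-space trade-offs than such a single-pivot shortcut; the snippet is only meant to make the definition of stretch tangible.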
Abstract:
Metallic glasses are of interest because of their mechanical properties: they can be ductile as well as brittle. This is true of Pd77.5Cu6Si16.5, a ternary glassy alloy. The most stable metallic glasses are alloys of noble or transition metals. A general formula is postulated as T70-80G30-20, where T stands for one or several 3d transition elements and G includes the metalloid glass formers. Another general formula is A3B to A5B, where B is a metalloid. A computer method utilising the MIGAP computer program of Kaufman is used to calculate the miscibility gap over a range of temperatures. The precipitation of a secondary crystalline phase is postulated around 1500 K. This could produce a dispersed-phase composite with interesting high-temperature strength properties.
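The abstract does not state MIGAP's internal model; for context, the textbook basis for a miscibility-gap calculation is the regular-solution free energy of mixing (standard background, not taken from the paper):

    \Delta G_{\mathrm{mix}} = \Omega \, x_A x_B + RT \left( x_A \ln x_A + x_B \ln x_B \right)

A gap opens wherever $\partial^2 \Delta G_{\mathrm{mix}} / \partial x^2 < 0$; for this model that first happens at the critical temperature $T_c = \Omega / 2R$, so scanning $\Delta G_{\mathrm{mix}}$ over a range of temperatures, as the abstract describes, locates the two-phase region.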
Abstract:
The quaternary system Sb-Te-Bi-Se, with small amounts of suitable dopants, is of interest for the manufacture of thermoelectric modules which exhibit the Peltier and Seebeck effects; this property could be useful for producing energy from the thermoelectric effect. Other substances of interest are bismuth telluride (Bi2Te3), the ternary Sb-Te-Bi, and compounds such as ZnIn2Se4. In the present paper computer programs such as Kaufman's MIGAP are applied to indicate the stability of the ternary limits of Sb-Te-Bi within the temperature range of interest, namely 273 K to 300 K.
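For context on why these materials matter, the standard measure of thermoelectric performance (textbook background, not from the abstract) is the dimensionless figure of merit:

    ZT = \frac{S^2 \sigma T}{\kappa}

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity and $T$ the absolute temperature; Bi2Te3-based alloys such as those considered here are the classic materials for operation near room temperature, i.e. in the 273 K to 300 K range the paper targets.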
Abstract:
The compound CdHgTe and its constituent binaries CdTe, HgTe, and CdHg are semiconductors used in thermal, infrared, nuclear, thermoelectric and other photosensitive devices. CdHgTe has a sphaleritic structure of possible type $A^{II}_1 B^{II}_1 C^{VI}_6$. The TERCP program of Kaufman is used to estimate the stable regions of the ternary phase diagram using available thermodynamic data. Little variation in stoichiometry with temperature was found. Compositions were calculated for temperatures ranging from 325 K down to 100 K, and the compositional limits were Cd13-20Hg12-01Te75-79, with Hg varying most. By comparison with a similar compound, CdIn2Te4, of forbidden band width 0.88 to 0.90 eV, similar properties are postulated for Cd1Hg1Te6, with applications in the infrared region of the spectrum at 300 K, where this composition is given by TERCP at the limit of stability.