921 results for Bookkeeping machines.
Abstract:
The study of molecular machines, and of protein complexes in general, is a growth area of biology. Is there a computational method for inferring which combinations of proteins in an organism are likely to form a crystallizable complex? We use the Protein Data Bank (PDB) to assess the usefulness of inferred functional protein linkages for this task. We find that of 242 nonredundant prokaryotic protein complexes (complexes excluding structural variants of the same protein) from organisms shared between the current PDB and the Prolinks functional linkage database, 44% (107/242) contain proteins that are linked with high confidence by one or more methods of computing functional linkages. This suggests that computing functional linkages will be useful in defining protein complexes for structural studies. We offer a database of such inferred linkages, corresponding to likely protein complexes, for some 629,952 pairs of proteins in 154 prokaryotes and archaea.
Abstract:
This paper presents a detailed dynamic digital simulation for the study of the phenomenon of torsional interaction between an HVDC system and turbine-generator shaft dynamics, using the novel converter model presented in [1]. The system model includes a detailed representation of the synchronous generator and the shaft dynamics, as well as the ac and dc network transients. The results of a case study indicate the various factors that influence the torsional interaction.
Abstract:
Physics at the Large Hadron Collider (LHC) and the International e+e- Linear Collider (ILC) will be complementary in many respects, as has been demonstrated at previous generations of hadron and lepton colliders. This report addresses the possible interplay between the LHC and the ILC in testing the Standard Model and in discovering and determining the origin of new physics. Mutual benefits for the physics programme at both machines can occur both at the level of a combined interpretation of hadron collider and linear collider data and at the level of combined analyses of the data, where results obtained at one machine can directly influence the way analyses are carried out at the other machine. Topics under study comprise the physics of weak and strong electroweak symmetry breaking, supersymmetric models, new gauge theories, models with extra dimensions, and electroweak and QCD precision physics. The status of the work carried out within the LHC/ILC Study Group so far is summarized in this report. Possible topics for future studies are outlined.
Abstract:
A linear optimization model was used to calculate seven wood procurement scenarios for the years 1990, 2000 and 2010. Productivity and cost functions for seven cutting methods, five terrain transport methods, three long-distance transport methods and various work supervision and scaling methods were calculated from available work study reports. All methods are based on the Nordic cut-to-length system. Finland was divided into three parts to describe the harvesting conditions. Twenty imaginary wood processing points and their wood procurement areas were created for these areas. The procurement systems, which consist of the harvesting conditions and work productivity functions, were described as a simulation model. In the LP model, the wood procurement system has to fulfil the volume and wood assortment requirements of the processing points while minimizing the procurement cost. The model consists of 862 variables and 560 restrictions. The results show that it is economical to increase the share of mechanized work in harvesting. Cost increment alternatives have only a small effect on the profitability of manual work. The areas of later thinnings and of seed tree and shelterwood cuttings increase at the cost of first thinnings. In mechanized work, one method, the 10-tonne single-grip harvester with forwarder, is gaining an advantage over other methods. The working hours of the forwarder are decreasing, in contrast to those of the harvester. There is only little need to increase the number of harvesters and trucks or their drivers from today's level. Quite large fluctuations in the level of procurement and cost can be handled with a constant number of machines, by varying the number of seasonal workers and by running machines in two shifts. This is possible provided that some environmental problems of large-scale summertime harvesting can be solved.
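The LP formulation described above, minimizing procurement cost subject to the volume requirements of the processing points, can be sketched at toy scale with a generic solver. The sketch below uses SciPy's `linprog`; the two procurement methods, their costs and the capacity figure are invented for illustration, not taken from the study:

```python
# Toy sketch of a wood procurement LP: two procurement methods must
# together supply one processing point at minimum cost. All numbers
# are invented for illustration.
from scipy.optimize import linprog

costs = [10.0, 14.0]          # EUR/m3 for method 1 and method 2

# Constraints in linprog's A_ub @ x <= b_ub form:
#   x1 + x2 >= 100  (volume requirement of the point, m3)  ->  -x1 - x2 <= -100
#   x1      <= 60   (capacity limit of the cheaper method)
A_ub = [[-1.0, -1.0],
        [ 1.0,  0.0]]
b_ub = [-100.0, 60.0]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, res.fun)   # cheapest plan uses the cheaper method to capacity
```

The real model simply scales this structure up to 862 variables and 560 restrictions.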
Abstract:
We present an implementation of a multicast network of processors. The processors are connected in a fully connected network, and it is possible to broadcast data in a single instruction. The network works at processor-memory speed and therefore provides a fast communication link among processors. A number of interesting architectures are possible using such a network. We show some of these architectures, which have been implemented and are functional. We also describe the system software calls that allow these machines to be programmed in parallel mode.
Abstract:
The study compared the profitability of selection harvesting and small-gap harvesting, which promote uneven-aged forest structure, with forest management according to silvicultural recommendations in Central Finland. Selection harvesting and gap harvesting are methods that can increase the fine-scale variation of habitats consistent with the disturbance dynamics of natural forests, and they are therefore especially suited to special sites managed for biodiversity, landscape or multiple-use values. They generally lead gradually to an uneven-aged forest in which the diameter-class distribution of the growing stock resembles an inverted J. The economic profitability of uneven-aged management is supported by the absence of regeneration costs and by regularly recurring harvests concentrated on sawlog-sized trees. The suitability of the method to Finnish conditions is, however, considered uncertain. This study examined the conversion of an even-aged forest to an uneven-aged structure over a 40-year transition period in the Isojärvi environmental value forest in Kuhmoinen, administered by Metsähallitus. The study material consisted of 405 spruce-dominated even-aged stands comprising 636 hectares of forest land. Forest development was simulated using tree-level growth models, and treatments were simulated in five-year periods with the SIMO forest planning software. The simulations yielded, for each treatment scenario, the harvest removals by timber assortment, the discounted cash flows and the change in growing stock capital over the study period. The unit costs of harvesting were calculated with the aid of an automated monitoring system in which mobile phones installed in the forest machines collected acceleration data, GPS position data and input data on machine use via the MobiDoc2 application. Finally, a net present value describing the timber production value of the forest was calculated for each treatment scenario using a valuation formula, from which the discounted harvesting costs were subtracted.
According to the results, the NPV of selection harvesting at a 3% interest rate was about 91% (7420 €/ha), and that of gap harvesting about 99% (8076 €/ha), of the NPV of management according to silvicultural recommendations (8176 €/ha). Comparative statics showed that raising the interest rate to 5% did not substantially increase the differences in net present values. The unit harvesting costs of selection harvesting were 0.8 €/m3 lower than those of thinnings and 7.2 €/m3 higher than those of regeneration fellings. The unit costs of gap harvesting were 0.7 €/m3 higher than those of regeneration fellings. The results show that the transition from an even-aged to an uneven-aged forest inevitably causes economic losses, even though the harvests are heavy and carried out in an advanced thinning-stage stand. The loss is the opportunity cost of maintaining continuous forest cover.
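The NPV comparison underlying these figures rests on ordinary discounting of cash flows, NPV = Σ c_t / (1 + r)^t. A minimal sketch (the cash flows below are invented for illustration, not the study's data):

```python
# Net present value of a cash-flow stream: NPV = sum c_t / (1 + r)^t,
# where cash_flows[t] is the net cash flow at the end of year t.
# The figures below are invented for illustration only.
def npv(cash_flows, rate):
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(cash_flows))

# A harvest income of 1000 EUR/ha received in year 10, discounted at 3%:
print(round(npv([0.0] * 10 + [1000.0], 0.03), 2))
# The same income is worth less at 5%, which is why the interest rate
# matters when scenarios deliver their incomes at different times.
print(round(npv([0.0] * 10 + [1000.0], 0.05), 2))
```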
Abstract:
In past decades, agricultural work was first heavily mechanized, and automation has since followed. Today, increasing machine size no longer yields significant productivity gains; instead, work must be made more efficient by better use of existing resources. This study examines the self-propelled forage harvester chain in grass silage harvesting. The intensity of silage harvesting and the large number of machine units form a demanding combination for work management. The aim of the study was to determine the requirements for a data management system to be developed in support of agricultural contracting. A total of 12 contractors or cooperating farmers were interviewed for the study. Based on the study, contractors have a need for information systems; naturally, the scale and organization of the contracting business affect this. According to the study, the most central requirements for data management are:
• collection of data on the work performed that is as comprehensive, detailed and automatic as possible
• map-based operation and guidance of drivers to sites
• a customer register and electronic ordering of work
• templates for quotations and price calculators
• reliability and persistence of data
• applicability to many kinds of work
• compatibility with other systems
Based on the study, the system to be developed should thus contain the following parts: an easy-to-use planning and customer-register tool; functions for monitoring, guiding and managing machines; data collection during work; and functions for processing the collected data. Not all users need all functions, however, so the contractor must be able to choose the parts needed and possibly add functions later. Contractors operating within tight economic and time constraints are demanding customers whose technology must work and be reliable. On the other hand, even experienced operators make human errors, so a good information system makes the work easier and more efficient.
Abstract:
This paper is concerned with the influence of different levels of complexity in modelling various constituent subsystems on the dynamic stability of power systems compensated by static var systems (SVS) operating on pure voltage control. The system components investigated include thyristor controlled reactor (TCR) transients, SVS delays, network transients, the synchronous generator and automatic voltage regulator (AVR). An overall model is proposed which adequately describes the system performance for small signal perturbations. The SVS performance is validated through detailed nonlinear simulation on a physical simulator.
Abstract:
Torsional interactions can occur due to speed-input Power System Stabilizers (PSS), which are primarily used to damp low-frequency oscillations. The solution to this problem can take the form either of providing a torsional filter or of developing an alternative stabilizing signal for the PSS. This paper deals with the formulation of a linearized state-space model of the system and a study of the interactions using eigenvalue analysis. The effects of the PSS parameters and control signals on the damping of the torsional modes are investigated.
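The eigenvalue analysis referred to above can be illustrated on a single oscillatory mode: from a linearized state matrix, the damping of a mode is read off its complex eigenvalue pair. A minimal sketch with NumPy (the frequency and damping figures are illustrative, not a machine model):

```python
# Damping of one oscillatory mode from the eigenvalues of a linearized
# state matrix. For  x'' + 2*zeta*w*x' + w**2*x = 0  the state matrix is
#   A = [[0, 1], [-w**2, -2*zeta*w]]
# with eigenvalues  -zeta*w +/- j*w*sqrt(1 - zeta**2), so the damping
# ratio of the mode is  -Re(lambda) / |lambda|.
import numpy as np

w, zeta = 2 * np.pi * 15.0, 0.02      # a 15 Hz torsional mode, 2% damping
A = np.array([[0.0, 1.0],
              [-w**2, -2.0 * zeta * w]])

lam = np.linalg.eigvals(A)[0]         # one of the conjugate pair
damping_ratio = -lam.real / abs(lam)  # recovers zeta
print(damping_ratio)
```

A PSS design then amounts to shifting such eigenvalues leftward without driving the torsional modes unstable.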
Abstract:
Clustered VLIW architectures solve the scalability problem associated with flat VLIW architectures by partitioning the register file and connecting only a subset of the functional units to each register file. However, inter-cluster communication in clustered architectures leads to increased leakage in functional components and a high number of register accesses. In this paper, we propose compiler scheduling algorithms targeting two previously ignored power-hungry components of clustered VLIW architectures, viz., the instruction decoder and the register file. We consider a split decoder design and propose a new energy-aware instruction scheduling algorithm that provides 14.5% and 17.3% reductions in decoder power consumption on average over a purely hardware-based scheme in the context of 2-clustered and 4-clustered VLIW machines. In the case of register files, we propose two new scheduling algorithms that exploit a limited register snooping capability to reduce extra register file accesses. The proposed algorithms reduce register file power consumption on average by 6.85% and 11.90% (10.39% and 17.78%), respectively, along with performance improvements of 4.81% and 5.34% (9.39% and 11.16%) over a traditional greedy algorithm for a 2-clustered (4-clustered) VLIW machine.
Abstract:
Presented here, in a vector formulation, is an O(mn^2) direct concise algorithm that prunes/identifies the linearly dependent (ld) rows of an arbitrary m × n matrix A and computes its reflexive-type minimum norm inverse A_mr^-, which will be the true inverse A^-1 if A is nonsingular and the Moore-Penrose inverse A^+ if A is of full row rank. The algorithm, without any additional computation, produces the projection operator P = (I - A_mr^- A), which provides a means to compute any of the solutions of the consistent linear equation Ax = b, since the general solution may be expressed as x = A_mr^- b + Pz, where z is an arbitrary vector. The rank r of A is also produced in the process. Salient features of this algorithm are that (i) the algorithm is concise, (ii) the minimum norm least squares solution for consistent/inconsistent equations is readily computable when A is of full row rank (otherwise, a minimum norm solution for consistent equations is obtainable), (iii) the algorithm identifies ld rows, if any, which reduces the computation concerned and improves the accuracy of the result, (iv) error bounds for the inverse as well as for the solution x of Ax = b are readily computable, (v) error-free computation of the inverse, solution vector, rank and projection operator, and their inherent parallel implementation, are straightforward, (vi) it is suitable for vector (pipeline) machines, and (vii) the inverse produced by the algorithm can be used to solve under-/overdetermined linear systems.
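The roles of the minimum norm inverse and the projection operator can be checked numerically. The sketch below substitutes NumPy's Moore-Penrose pseudoinverse for the paper's A_mr^- (the two coincide here because the example matrix has full row rank); the matrix and vectors are invented for illustration:

```python
# General solution of a consistent underdetermined system Ax = b:
#   x = A^+ b + P z,  with  P = I - A^+ A  and z arbitrary.
# np.linalg.pinv stands in for the paper's A_mr^-; they agree here
# because A has full row rank.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])      # 2 x 3, full row rank
b = np.array([3.0, 2.0])

A_pinv = np.linalg.pinv(A)
P = np.eye(3) - A_pinv @ A           # projector onto the null space of A

x_min = A_pinv @ b                   # the minimum norm solution
z = np.array([1.0, -2.0, 0.5])       # any vector at all
x_general = x_min + P @ z            # still satisfies Ax = b

print(np.allclose(A @ x_min, b), np.allclose(A @ x_general, b))
```

Varying z sweeps out every solution of the system, exactly as the expression x = A_mr^- b + Pz in the abstract describes.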
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. We study in this paper the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method is developed for generating test sequences for such finite state machines, based on the idea of synchronizable test sequences. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to produce synchronizable test sequences. However, none of this work studies the fault coverage provided by the resulting methods. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
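To make the notion of synchronizable test sequences concrete, the sketch below checks the usual pairwise condition: two consecutive transitions are synchronizable when the tester that must apply the next input took part in the previous transition, either by applying its input or by receiving one of its outputs. The port names and transitions are invented for illustration:

```python
# A transition of a multi-port FSM is modeled by the port that supplies
# its input and the set of ports that observe one of its outputs.
# A pair (t1, t2) is synchronizable iff the tester at t2's input port
# participated in t1 (applied t1's input or received one of t1's
# outputs), so that it knows when to apply the next input.
from dataclasses import dataclass, field

@dataclass
class Transition:
    in_port: str
    out_ports: frozenset = field(default_factory=frozenset)

def synchronizable(seq):
    """True if every consecutive pair in the sequence is synchronizable."""
    return all(t2.in_port == t1.in_port or t2.in_port in t1.out_ports
               for t1, t2 in zip(seq, seq[1:]))

# Two ports U (upper tester) and L (lower tester); invented transitions:
t1 = Transition("U", frozenset({"U", "L"}))  # L observes an output of t1
t2 = Transition("L", frozenset({"U"}))
t3 = Transition("L", frozenset({"L"}))
print(synchronizable([t1, t2]))   # L saw t1's output, so it can proceed
print(synchronizable([t3, t1]))   # U took no part in t3: not synchronizable
```

Test generation for the distributed architecture then amounts to finding covering sequences in which every consecutive pair passes this check.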
Abstract:
The application of differential geometry by Gabriel Kron to study the dynamics of electrical machines evoked only theoretical interest among power system engineers and was considered hardly suitable for any practical use. Extension of Kron's work led to a physical understanding of the processes governing small-oscillation instability in power systems. This in turn has made it possible to design a self-tuning Power System Stabilizer to contain the oscillatory instability over an extended range of system and operating conditions. This paper briefly recounts the history of this development and touches upon the essential design features of the stabilizer. It presents some results from simulation studies, laboratory experiments and recently conducted field trials at actual plants, all of which help to establish the efficacy of the proposed stabilizer and corroborate the theoretical findings.
Abstract:
CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted at a single-user environment) such as encyclopedias, magazines and games. With CD-ROM capacities of up to 3 GB becoming available in the near future, they will form an integral part of Video on Demand (VoD) servers used to store full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and have made a detailed study of multimedia application behavior in terms of the I/O request patterns issued to the CD-ROM subsystem, by tracing these patterns. We discuss prefetch buffer design and seek time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory hosted cache that receives caching hints from the application to reduce the latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROM in a VoD server and discuss the problems of scheduling multiple request streams and of buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, in the third section, we discuss the problem of optimal placement of MPEG streams on CD-ROMs.
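The C-SCAN ordering mentioned above can be sketched independently of the drive model: pending requests at or beyond the head position are served in ascending order, and the head then wraps around to the lowest outstanding block. This is a generic sketch of plain C-SCAN, not the paper's buffer-optimal CD-ROM variant; the block numbers are invented:

```python
# C-SCAN (circular SCAN): serve requests in ascending block order from
# the current head position, then wrap to the lowest outstanding block
# and continue sweeping in the same direction.
def c_scan(head, requests):
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind

pending = [95, 180, 34, 119, 11, 123, 62, 64]
print(c_scan(62, pending))
# one-directional sweep: 62, 64, 95, 119, 123, 180, then wrap to 11, 34
```

Serving every stream on a single one-directional sweep is what bounds the per-stream buffer the server must reserve.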
Abstract:
Code Division Multiple Access (CDMA) techniques have, by now, been applied to LAN problems by many investigators. An analytical study of well-known algorithms for the generation of orthogonal codes used in FO-CDMA systems, such as those for prime, quasi-prime, optical orthogonal and matrix codes, is presented. Algorithms for OOCs such as the Greedy, Modified Greedy and Accelerated Greedy algorithms are implemented, and many speed-up enhancements for these algorithms are suggested. A novel Synthetic Algorithm based on Difference Sets (SADS) is also proposed. Investigations are made to vectorise/parallelise SADS so as to implement the source code on parallel machines. A new matrix for code families of OOCs with different seed code-words but having the same (n,w,lambda) set is formulated.
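The difference-set view of OOC construction can be sketched with a plain greedy pass: a weight-w codeword is kept only if its cyclic differences are all distinct (auto-correlation at most 1) and disjoint from the differences already consumed by earlier codewords (cross-correlation at most 1). This is a generic illustration of the greedy idea, not the paper's Modified or Accelerated variants:

```python
# Greedy construction of an (n, w, 1) optical orthogonal code. Each
# codeword is the set of its mark positions; its cyclic differences
# (b - a) mod n over ordered pairs must be distinct within the codeword
# and unused by previously accepted codewords.
from itertools import combinations, permutations

def cyclic_diffs(marks, n):
    return [(b - a) % n for a, b in permutations(marks, 2)]

def greedy_ooc(n, w):
    code, used = [], set()
    for marks in combinations(range(n), w):
        d = cyclic_diffs(marks, n)
        if len(set(d)) == len(d) and used.isdisjoint(d):
            code.append(marks)
            used.update(d)
    return code

code = greedy_ooc(13, 3)
print(code)   # each codeword consumes w*(w-1) = 6 of the 12 nonzero differences
```

Because a first-fit pass can block later, better choices, the greedy result is not always optimal, which is one motivation for the modified and accelerated variants studied in the paper.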