915 results for Computer Programs
Abstract:
The Ball-Larus path-profiling algorithm is an efficient technique for collecting acyclic path frequencies of a program. However, longer paths, those extending across loop iterations, describe the runtime behaviour of programs better. We generalize the Ball-Larus profiling algorithm to profile k-iteration paths: paths that can span up to k iterations of a loop. We show that it is possible to number such k-iteration paths perfectly, thus enabling an efficient profiling algorithm for these longer paths. We also describe a scheme for mixed-mode profiling: profiling different parts of a procedure with different path lengths. Experimental results show that k-iteration profiling is practical.
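As background for the abstract above, here is a minimal sketch of the classic Ball-Larus edge-increment assignment on a small acyclic control-flow graph; the graph representation, the diamond example and all identifiers are illustrative assumptions, and the k-iteration generalisation itself is not reproduced here.

```python
# Minimal sketch of classic Ball-Larus edge numbering on an acyclic CFG,
# assuming the CFG is an adjacency-list DAG with a single exit node.
# Summing the edge increments along any path to the exit yields a unique
# path id in [0, num_paths).

def ball_larus_numbering(cfg, exit_node):
    # Reverse-topological order via DFS post-order.
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w in cfg.get(v, []):
            if w not in seen:
                dfs(w)
        order.append(v)          # post-order: successors appear before v
    for v in cfg:
        if v not in seen:
            dfs(v)

    num_paths = {exit_node: 1}   # number of acyclic paths from v to exit
    edge_value = {}
    for v in order:
        if v == exit_node:
            continue
        total = 0
        for w in cfg.get(v, []):
            edge_value[(v, w)] = total
            total += num_paths[w]
        num_paths[v] = total
    return num_paths, edge_value

# Example: diamond-shaped CFG with 2 acyclic paths from A to D.
cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
paths, vals = ball_larus_numbering(cfg, "D")
print(paths["A"], vals)   # 2 paths from A; edge (A,C) gets increment 1, others 0
```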
Abstract:
This doctoral thesis aims to demonstrate the importance of incentives for technology-based firms as a strategy to promote knowledge-based economic development (KBED). To remain competitive, technology-based firms must innovate and seek new markets; therefore, this study aims to propose an incentive model for technology-based firms as a strategy to promote knowledge-based urban development, according to the framework described by Yigitcanlar (2011). This is an exploratory and descriptive study with a qualitative approach. Surveys were carried out with national trade associations that represent technology-based firms in both Brazil and Australia. After analysing the surveys, structured interviews were conducted with government representatives, trade associations and businessmen who had used financial support from the federal government. When comparing the two countries, the study found that direct incentives, in the form of tax incentives, are important because they offer a less bureaucratic, quicker and more direct process for firms. We suggest including incentives in the framework of knowledge-based urban development as one of the pillars that contribute to knowledge-based economic development.
Abstract:
This paper investigates how students’ learning experience can be enhanced by participating in an Industry-Based Learning (IBL) program. In this program, university students go into industry to experience how a business is run. Students’ learning no longer comes from textbooks or lecturers but from learning by doing. This new learning experience can be very interesting for students but can also be challenging. The research involved interviewing a number of students from IBL programs, academic staff from the participating university with experience in supervising students, and employees of the industry partners who supported and supervised the students in their work placements. The research findings offer useful insights and create new knowledge in the field of education and learning. The research contributes to existing knowledge by providing a new understanding of the topic as it applies to the Indonesian context.
Abstract:
The collaboration between universities and industry has become increasingly important for the development of science and technology. This is particularly prominent in the Science, Technology, Engineering and Mathematics (STEM) disciplines. The literature suggests that the key element of a University-Industry Partnership (UIP) is an exchange of knowledge that is mutually beneficial for both parties. One concrete example of such collaboration is Industry-Based Learning (IBL), in which university students go into industry to experience and learn how the skills and knowledge acquired in the classroom are applied in the workplace. This paper investigates how university-industry collaboration is implemented through Industry-Based Learning (IBL) at Indonesian universities. The research findings offer useful insights and create new knowledge in the field of STEM education and collaborative learning. The research contributes to existing knowledge by providing an empirical understanding of this topic. The outcomes can be used to improve the quality of University-Industry Partnership programs at Indonesian universities and to inform Indonesian higher education authorities and their industrial partners of an alternative approach to enhancing their IBL programs.
Abstract:
A diversity of programs oriented to young people seek to develop their capacities and their connection to the communities in which they live. Some focus on ameliorating a particular issue or ‘deficit’, whilst others, such as sporting, recreation and youth groups, are more grounded in the community. This article reports a qualitative study undertaken in three remote Indigenous communities in Central Australia. Sixty interviews were conducted with a range of stakeholders involved in a diversity of youth programs. A range of critical challenges for, and characteristics of, remote Indigenous youth programs are identified that must be addressed if such programs are to be ‘fit for context’. ‘Youth-centred, context-specific’ provides a positive frame for the delivery of youth programs in remote Central Australia, encouraging an explicit focus on program logic; program content and processes; and the relational, temporal and spatial aspects of the practice context. These provide lenses with which youth program planning and delivery may be enhanced in remote communities. Culturally safe service planning and delivery suggests locally determined processes for decision-making and community ownership. In some cases, this may mean a community preference for all ages to access the service and engage in culturally relevant activities. Where activities are targeted at young people, yet open to and inclusive of all ages, they provide a medium for cross-generational interaction that requires a high degree of flexibility on the part of staff and funding programs. Although the findings are focused on Central Australia, they may be relevant to similar contexts elsewhere.
Abstract:
A large fraction of an XML document typically consists of text data. The XPath query language allows text search via the equal, contains, and starts-with predicates. Such predicates can be implemented efficiently using a compressed self-index of the document's text nodes. Most queries, however, contain some parts that query the text of the document and some parts that query the tree structure. It is therefore a challenge to choose an appropriate evaluation order for a given query that optimally leverages the execution speeds of the text and tree indexes. Here the SXSI system is introduced. It stores the tree structure of an XML document using a bit array of opening and closing brackets plus a sequence of labels, and stores the text nodes of the document using a global compressed self-index. On top of these indexes sits an XPath query engine based on tree automata. The engine uses fast counting queries of the text index to dynamically determine whether to evaluate top-down or bottom-up with respect to the tree structure. The resulting system has several advantages over existing systems: (1) on pure tree queries (without text search), such as the XPathMark queries, SXSI performs on par with or better than the fastest known systems, MonetDB and Qizx; (2) on queries that use text search, SXSI outperforms existing systems by one to three orders of magnitude (depending on the size of the result set); and (3) with respect to memory consumption, SXSI outperforms all other systems for counting-only queries.
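The following is a minimal sketch of the balanced-parentheses representation mentioned in the abstract: the tree shape as a bit array of opening/closing brackets plus a parallel label sequence. The toy document, the helper functions and the absence of compression and rank/select machinery are simplifications, not the actual SXSI implementation.

```python
# Tree shape as bits (1 = opening bracket, 0 = closing bracket) and a label
# per opening bracket; subtree navigation reduces to bracket matching.

def encode(node, bits, labels):
    """node = (label, [children]); append 1/label on open, 0 on close."""
    label, children = node
    bits.append(1); labels.append(label)
    for child in children:
        encode(child, bits, labels)
    bits.append(0)

def find_close(bits, i):
    """Index of the closing bracket matching the opening bracket at i."""
    depth = 0
    for j in range(i, len(bits)):
        depth += 1 if bits[j] else -1
        if depth == 0:
            return j
    raise ValueError("unbalanced sequence")

# <doc><a/><b><c/></b></doc> as a nested tuple.
doc = ("doc", [("a", []), ("b", [("c", [])])])
bits, labels = [], []
encode(doc, bits, labels)
print(bits)                 # [1, 1, 0, 1, 1, 0, 0, 0]
print(labels)               # ['doc', 'a', 'b', 'c']
print(find_close(bits, 0))  # 7 -> the subtree of <doc> spans positions 0..7
```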
Abstract:
A repetitive sequence collection is one where portions of a base sequence of length n are repeated many times with small variations, forming a collection of total length N. Examples of such collections are version control data and the genome sequences of individuals, where the differences can be expressed by lists of basic edit operations. Flexible and efficient data analysis on such a typically huge collection is possible using suffix trees. However, a suffix tree occupies O(N log N) bits, which very soon prohibits in-memory analyses. Recent advances in full-text self-indexing reduce the space of the suffix tree to O(N log σ) bits, where σ is the alphabet size. In practice, the space reduction is more than 10-fold, for example on the suffix tree of the Human Genome. However, this reduction factor remains constant when more sequences are added to the collection. We develop a new family of self-indexes suited to the repetitive sequence collection setting. Their expected space requirement depends only on the length n of the base sequence and the number s of variations in its repeated copies. That is, the space reduction factor is no longer constant but depends on N / n. We believe the structures developed in this work will provide a fundamental basis for the storage and retrieval of individual genomes as they become available due to rapid progress in sequencing technologies.
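For illustration only, the sketch below shows the storage model the abstract exploits: one base sequence of length n kept explicitly, and each repeated copy represented by a short list of variations. The class name and the substitution-only edit format are assumptions; the actual self-indexes additionally support pattern search and suffix-tree functionality.

```python
# Store the base once plus ~s small edits, instead of all N characters.

class RepetitiveCollection:
    def __init__(self, base):
        self.base = base          # base sequence of length n, stored once
        self.edits = []           # one edit list per copy

    def add_copy(self, substitutions):
        """substitutions: list of (position, new_character) pairs."""
        self.edits.append(list(substitutions))

    def materialize(self, k):
        """Reconstruct copy k on demand rather than storing it explicitly."""
        seq = list(self.base)
        for pos, ch in self.edits[k]:
            seq[pos] = ch
        return "".join(seq)

coll = RepetitiveCollection("ACGTACGTAC")
coll.add_copy([(3, "G")])            # one variation
coll.add_copy([(0, "T"), (9, "G")])  # two variations
print(coll.materialize(0))           # ACGGACGTAC
print(coll.materialize(1))           # TCGTACGTAG
```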
Abstract:
A hybrid computer for structure-factor calculations in X-ray crystallography is described. The computer can calculate three-dimensional structure factors of up to 24 atoms in a single run and can generate the scatter functions of well over 100 atoms using the Vand et al. or the Forsyth and Wells approximations. The computer is essentially a digital computer with analog function generators, thus combining to advantage the economical data storage of digital systems with the simple computing circuitry of analog systems. The digital part serially selects the data, computes the arguments, and feeds them into specially developed high-precision digital-analog function generators; their outputs, which are d.c. voltages, are further processed by analog circuits, and finally a sequential adder, which employs a novel digital voltmeter circuit, converts them back into digital form and accumulates them in a dekatron counter that displays the final result. The computer is also capable of carrying out 1-, 2-, or 3-dimensional Fourier summation, although in this case the lack of sufficient storage space for the large number of coefficients involved is a serious limitation at present.
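For reference, the quantity such a machine evaluates for each reflection (hkl) is the standard crystallographic structure-factor sum; the specific Vand et al. and Forsyth and Wells scatter-function approximations are not reproduced here.

```latex
% F(hkl): structure factor of reflection (hkl); f_j is the scattering factor
% of atom j and (x_j, y_j, z_j) are its fractional coordinates.
F(hkl) = \sum_{j=1}^{N} f_j \exp\!\left[ 2\pi i \,(h x_j + k y_j + l z_j) \right]
       = A(hkl) + i\,B(hkl),
\qquad
A(hkl) = \sum_{j} f_j \cos 2\pi (h x_j + k y_j + l z_j), \quad
B(hkl) = \sum_{j} f_j \sin 2\pi (h x_j + k y_j + l z_j).
```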
Abstract:
Layering is a widely used method for structuring data in CAD models. During the last few years, national standardisation organisations, professional associations, user groups for particular CAD systems, individual companies, etc. have issued numerous standards and guidelines for the naming and structuring of layers in building design. In order to increase the integration of CAD data in the industry as a whole, ISO recently decided to define an international standard for layer usage. The resulting standard proposal, ISO 13567, is a rather complex framework standard which strives to be more of a union than the least common denominator of the capabilities of existing guidelines. A number of principles have been followed in the design of the proposal. The first is the separation of the conceptual organisation of information (semantics) from the way this information is coded (syntax). The second is orthogonality: many ways of classifying information are independent of each other and can be applied in combination. The third, overriding principle is the reuse of existing national or international standards whenever appropriate. The fourth principle allows users to apply well-defined subsets of the overall superset of possible layer names. This article describes the semantic organisation of the standard proposal as well as its default syntax. Important information categories deal with the party responsible for the information, the type of building element shown, whether a layer contains the direct graphical description of a building part or additional information needed in an output drawing, etc. Non-mandatory information categories facilitate the structuring of information in rebuilding projects, the use of layers for spatial grouping in large multi-storey projects, and the storing of multiple representations intended for different drawing scales in the same model. Pilot testing of ISO 13567 is currently being carried out in a number of countries which have been involved in the definition of the standard. In the article, two implementations, carried out independently in Sweden and Finland, are described. The article concludes with a discussion of the benefits and possible drawbacks of the standard. Incremental development within the industry (where "best practice" can become "common practice" via a standard such as ISO 13567) is contrasted with the more idealistic scenario of building product models. The relationship between CAD layering, document management, product modelling and building element classification is also discussed.
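Purely as an illustration of the semantics-versus-syntax separation described above, the sketch below composes a layer name from mandatory and optional semantic fields; the field names, widths and example codes are hypothetical and are not the actual ISO 13567 codes.

```python
# Hypothetical framework-standard sketch: semantic fields (responsible party,
# building element, graphics vs. annotation, plus optional categories) are
# defined separately from the syntax that concatenates them into a layer name.

MANDATORY = ("agent", "element", "presentation")
OPTIONAL = ("status", "sector", "scale")

def layer_name(fields):
    """Concatenate field codes; '-' marks an unused optional field."""
    parts = [fields[f] for f in MANDATORY]
    parts += [fields.get(f, "-") for f in OPTIONAL]
    return "".join(parts)

print(layer_name({"agent": "A", "element": "WALL22",
                  "presentation": "E", "sector": "S1"}))
# -> 'AWALL22E-S1-'
```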
Abstract:
In smaller countries, where the key players in construction IT development tend to know each other personally and where public R&D funding is concentrated in a few channels, IT roadmaps and strategies would seem to have a better chance of influencing development than in the bigger industrial countries. In this paper, Finland and the RATAS project are presented as a historical case illustrating such impact. RATAS was initiated as a construction IT roadmap project in 1985, involving many of the key organisations and companies active in construction-sector development. Several of the individuals who took an active part in the project have played an important role in later developments, both in Finland and on the international scene. The central result of RATAS was the identification of what is nowadays called Building Information Modelling (BIM) technology as the key issue in getting IT into efficient use in the construction sector. BIM, which was earlier referred to as building product modelling, has since been a key ingredient in many roadmaps and the subject of international standardisation efforts such as STEP and IAI/IFCs. The RATAS project can, in hindsight, be seen as a forerunner whose impact also transcended national borders.
Abstract:
The study addressed a phenomenon that has become common marketing practice: customer loyalty programs. Although they are a common type of consumer relationship, there is limited knowledge of their nature. The purpose of the study was to create a structured understanding of the nature of customer relationships from both the provider’s and the consumer’s viewpoints by studying relationship drivers and proposing the concept of relational motivation as a common framework for the analysis of these views. The theoretical exploration focused on the reasons for engaging in customer relationships, for both the consumer and the provider. The themes of buying behaviour, industrial and network marketing and relationship marketing, as well as the concepts of a customer relationship, customer loyalty, relationship conditions, relational benefits, bonds and commitment, were explored and combined in a new way. Concepts from the study of business-to-business relationships were brought over and their power in explaining the nature of consumer relationships examined. The study provided a comprehensive picture of loyalty programs, which is an important contribution to academic as well as managerial discussions. The consumer study provided deep insights into the nature of customer relationships. The study provides a new frame of reference that supports the existing concepts of loyalty and commitment through the introduction of the relationship driver and relational motivation concepts. The result is a novel view of the nature of customer relationships that creates new understanding of the forces leading to loyal behaviour and commitment. The study concludes with managerial implications.
Abstract:
Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and a choice of selecting streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first, the constraints come from bounding the utilization of an infinite-capacity server; in the second, the constraints arise from bounding the mean queue length of the same queue; and in the third, the finite capacity of the buffer is reflected in a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean queue length constraints are linear programs if the service time is exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
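As an illustrative and heavily simplified rendering of a PUT-type problem with a utilization bound, not the exact formulation of the paper, one can write a linear program in the stationary probabilities of the modulating chain; the symbols r_i, c_i, λ_i, E[S] and ρ_max below are assumptions introduced for the sketch.

```latex
% pi_i: stationary probability of modulating state i (decision variables)
% lambda_i: Poisson arrival rate in state i; E[S]: mean service time
% r_i: reward rate and c_i: cost rate in state i; rho_max: utilization bound
\max_{\pi}\; \sum_i \pi_i \left( r_i \lambda_i - c_i \right)
\quad \text{subject to} \quad
\Bigl( \sum_i \pi_i \lambda_i \Bigr) \mathbb{E}[S] \;\le\; \rho_{\max},
\qquad \sum_i \pi_i = 1, \qquad \pi_i \ge 0 .
```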
Abstract:
The modes of binding of the alpha- and beta-anomers of D-galactose, D-fucose and D-glucose to the L-arabinose-binding protein (ABP) have been studied by energy minimization using the low-resolution (2.4 Å) X-ray data of the protein. These studies suggest that these sugars preferentially bind to ABP in the alpha-form, unlike L-arabinose, where both the alpha- and beta-anomers bind almost equally. The best modes of binding of the alpha- and beta-anomers of D-galactose and D-fucose differ slightly in the nature of the possible hydrogen bonds with the protein. The residues Arg 151 and Asn 232 of ABP form bidentate hydrogen bonds with both L-arabinose and D-galactose, but not with D-fucose or D-glucose. However, in the case of L-arabinose, Arg 151 forms hydrogen bonds with the hydroxyl group at the C-4 atom and the ring oxygen, whereas in the case of D-galactose it forms bonds with the hydroxyl groups at the C-4 and C-6 atoms of the pyranose ring. The calculated conformational energies also predict that D-galactose is a better inhibitor than D-fucose and D-glucose, in agreement with kinetic studies. The weak inhibitor D-glucose binds preferentially to one domain of ABP, leading to the formation of a weaker complex. Thus these studies provide information about the most probable binding modes of these sugars and also provide a theoretical explanation for the observed differences in their binding affinities.