924 results for "Reasonable Length of Process"


Relevance: 100.00%

Abstract:

The Antarctic Pack Ice Seal (APIS) Program was initiated in 1994 to estimate the abundance of four species of Antarctic phocids: the crabeater seal Lobodon carcinophaga, Weddell seal Leptonychotes weddellii, Ross seal Ommatophoca rossii and leopard seal Hydrurga leptonyx, and to identify ecological relationships and habitat use patterns. The Atlantic sector of the Southern Ocean (the eastern sector of the Weddell Sea) was surveyed by research teams from Germany, Norway and South Africa using a range of aerial methods over five austral summers between 1996-1997 and 2000-2001. We used these observations to model densities of seals in the area, taking into account haul-out probabilities, survey-specific sighting probabilities and covariates derived from satellite-based ice concentrations and bathymetry. These models predicted the total abundance over the area bounded by the surveys (30°W to 10°E). In this sector of the coast, we estimated seal abundances of 514 (95% CI 337-886) × 10³ crabeater seals, 60.0 (43.2-94.4) × 10³ Weddell seals and 13.2 (5.50-39.7) × 10³ leopard seals. The crabeater seal densities, approximately 14,000 seals per degree longitude, are similar to estimates obtained by surveys in the Pacific and Indian sectors by other APIS researchers. Very few Ross seals were observed (24 in total), leading to a conservative estimate of 830 (119-2894) individuals over the study area. These results provide an important baseline against which to compare future changes in seal distribution and abundance.
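
For readers unfamiliar with this kind of correction, the estimate takes a form along the following lines (an illustrative sketch with hypothetical symbols, not the authors' exact model): counts on each surveyed strip are scaled up by the estimated probability that a seal is hauled out on the ice and the survey-specific probability of sighting it, and the corrected densities are then extrapolated over the ice-covered area.

```latex
% Illustrative only; symbols are hypothetical and not taken from the paper.
\[
  \hat{D}_i = \frac{n_i}{a_i \,\hat{p}_{\mathrm{haulout}}\,\hat{p}_{\mathrm{sight}}},
  \qquad
  \hat{N} = \sum_i \hat{D}_i \, A_i,
\]
```

where $n_i$ is the number of seals counted on surveyed strip $i$ of area $a_i$, and $A_i$ is the area of the ice-covered cell that the strip represents.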

Relevance: 100.00%

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density of I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.

To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
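
To make the pattern-generation and response-compaction roles concrete, a minimal software sketch of an LFSR and a MISR is given below (generic textbook structures with assumed feedback polynomials, not the dissertation's BIST design):

```python
# Minimal sketch of the two BIST building blocks (generic textbook structures
# with assumed feedback polynomials, not the dissertation's design).
def lfsr16(seed=0xACE1):
    """16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11):
    generates pseudo-random patterns to drive over the interconnects."""
    state = seed
    while True:
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield state

def misr_update(signature, response, poly=0x1021):
    """One MISR cycle: a Galois LFSR step that folds a 16-bit response word
    into the running signature."""
    msb = (signature >> 15) & 1
    signature = ((signature << 1) & 0xFFFF) ^ (response & 0xFFFF)
    if msb:
        signature ^= poly
    return signature

# Fault-free run: the signature obtained here is the golden reference that a
# defective interconnect (open, short, stuck-at) would be expected to corrupt.
gen, sig = lfsr16(), 0
for _ in range(8):
    pattern = next(gen)
    sig = misr_update(sig, pattern)   # received == driven when fault-free
print(hex(sig))
```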

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
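
The constraint that neighbouring blocks must not share a stagger value makes the problem resemble graph colouring; a minimal greedy sketch of that view is shown below (illustrative only, with a hypothetical data layout, and not the dissertation's mathematical model or heuristic):

```python
# Greedy sketch of the colouring view (hypothetical data layout; not the
# dissertation's mathematical model or heuristic).
def assign_staggers(shared_boundary, num_staggers):
    """shared_boundary maps (block_a, block_b) -> shared power-rail length."""
    neighbours = {}
    for (a, b), _ in shared_boundary.items():
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    # Assign blocks with the largest total shared boundary first.
    def total(blk):
        return sum(l for pair, l in shared_boundary.items() if blk in pair)
    stagger = {}
    for blk in sorted(neighbours, key=total, reverse=True):
        used = {stagger[n] for n in neighbours[blk] if n in stagger}
        free = [s for s in range(num_staggers) if s not in used]
        # If neighbours exhaust every stagger value, reuse the least-used one.
        stagger[blk] = free[0] if free else min(
            range(num_staggers), key=lambda s: list(stagger.values()).count(s))
    return stagger

# Three blocks in a row: B borders both A and C, so it gets its own stagger.
print(assign_staggers({("A", "B"): 40.0, ("B", "C"): 25.0}, num_staggers=2))
```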

In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.

Relevance: 100.00%

Abstract:

We are interested in the emergence of new markets. While the literature contains various perspectives on how such new markets come to be, the dynamics of the marketization process are less clear. This paper focuses on the development of stent technology and examines the activities characteristic of its emerging market. We identify four market ‘moments’: a mutable marketing moment prior to the point of disruption; two parallel moments at the point of disruption – internecine marketing between emergent competitors, and subversive marketing between those competitors and established actors; and finally, a civilized marketing moment. We conclude that emergent competitors operate two distinct strategies at the point of disruption. Also, legal activities are central to marketization dynamics during this period. In terms of process, while creative destruction may broadly describe the move from disruption to acceptance, there is a period of creative construction prior to disruption, when the new market is coming into being.

Relevance: 100.00%

Abstract:

Natural ice is formed by freezing of water or by sintering of dry or wet snow. Each of these processes causes atmospheric air to be enclosed in the ice as bubbles. The air amount and composition, as well as the bubble sizes and density, depend not only on the kind of process but also on several environmental conditions. The ice in the deepest layers of the Greenland and the Antarctic ice sheets was formed more than 100,000 years ago. In the bubbles of this ice, samples of atmospheric air from that time are preserved. The enclosure of air is discussed for each of the three processes. Of special interest are the parameters which control the amount and composition of the enclosed air. If the ice is formed by sintering of very cold dry snow, the air composition in the bubbles corresponds with good accuracy to the composition of atmospheric air.

Relevance: 100.00%

Abstract:

Pipelines are one of the safest means to transport crude oil, but they are not spill-free. This is of concern in North America, due to the large volumes of crude oil shipped by Canadian producers and the lengthy network of pipelines. Each pipeline crosses many rivers, supporting a wide variety of human activities and rich aquatic life. However, there is a knowledge gap on the risks of contamination of river beds due to oil spills. This thesis addresses this knowledge gap by focussing on the mechanisms that transport water (and contaminants) from the free surface flow to the bed sediments, and vice versa. The work focuses on gravel rivers, in which bed sediments are sufficiently permeable that pressure gradients caused by the interactions of the flow with topographic elements (gravel bars), or by changes in direction, induce exchanges of water between the free surface flow and the bed, known as hyporheic flows. The objectives of the thesis are to present a new method to visualize and quantify hyporheic flows in laboratory experiments, and to conduct a novel series of experiments on hyporheic flow induced by a gravel bar under different free surface flows. The new method to quantify hyporheic flows rests on injections of a solution of dye and water. The method yielded accurate flow lines and reasonable estimates of the hyporheic flow velocities. The present series of experiments was carried out in an 11 m long, 0.39 m wide, and 0.41 m deep tilting flume. The gravel had a mean particle size of 7.7 mm. Different free surface flows were imposed by changing the flume slope and flow depth. Measured hyporheic flows were turbulent. Smaller free surface flow depths resulted in stronger hyporheic flows (higher velocities and deeper dye penetration into the sediment). A significant finding is that different free surface flows (different velocities, Reynolds numbers, etc.) produce similar hyporheic flows as long as the downstream hydraulic gradients are similar. This suggests that, for a specified bar geometry, the characteristics of the hyporheic flows depend on the downstream hydraulic gradients, and not, or only minimally, on the internal dynamics of the free surface flow.

Relevance: 100.00%

Abstract:

Bridges are a critical part of North America's transportation network that need to be assessed frequently to inform bridge management decision making. Visual inspections are usually implemented for this purpose, during which inspectors must observe and report any excess displacements or vibrations. Unfortunately, these visual inspections are subjective and often highly variable, so a monitoring technology that can provide quantitative measurements to supplement inspections is needed. Digital Image Correlation (DIC) is a novel monitoring technology that uses digital images to measure displacement fields without any contact with the bridge. In this research, DIC and accelerometers were used to investigate the dynamic response of a railway bridge reported to experience large lateral displacements. Displacements were estimated using accelerometer measurements and were compared to DIC measurements. It was shown that accelerometers can provide reasonable estimates of displacement for zero-mean lateral displacements. By comparing measurements in the girder and in the piers, it was shown that, for the bridge monitored, the large lateral displacements originated in the steel casting bearings positioned above the piers, and not in the piers themselves. The use of DIC for evaluating the effectiveness of the rehabilitation of the LaSalle Causeway lift bridge in Kingston, Ontario was also investigated. Vertical displacements were measured at midspan and at the lifting end of the bridge during a static test and under dynamic live loading. The bridge displacements were well within the operating limits; however, a gap at the lifting end of the bridge was identified. Rehabilitation of the bridge was conducted, and by comparing measurements before and after rehabilitation, it was shown that the gap was successfully closed. Finally, DIC was used to monitor midspan vertical and lateral displacements in a monitoring campaign of five steel rail bridges. DIC was also used to evaluate the effectiveness of structural rehabilitation of the lateral bracing of a bridge. Simple finite element models were developed using DIC measurements of displacement. Several lessons learned throughout this monitoring campaign are discussed in the hope of aiding future researchers.
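
As an illustration of what estimating displacement from acceleration involves, the sketch below double-integrates a high-pass-filtered accelerometer record (assumed sampling rate and cut-off frequency; a generic processing chain, not necessarily the one used in this research):

```python
# Generic sketch (assumed sampling rate and cut-off; not necessarily the
# processing used in this research): high-pass filter and double-integrate an
# acceleration record to estimate a zero-mean displacement.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def accel_to_displacement(acc, fs=100.0, f_cut=0.5):
    b, a = butter(4, f_cut / (fs / 2), btype="highpass")
    acc = filtfilt(b, a, acc)                          # remove offset and drift
    vel = cumulative_trapezoid(acc, dx=1.0 / fs, initial=0.0)
    vel = filtfilt(b, a, vel)                          # suppress integration drift
    disp = cumulative_trapezoid(vel, dx=1.0 / fs, initial=0.0)
    return filtfilt(b, a, disp)

# Synthetic check: recover a 2 Hz, 5 mm sinusoidal sway from its acceleration.
t = np.arange(0.0, 10.0, 1.0 / 100.0)
true_disp = 0.005 * np.sin(2 * np.pi * 2 * t)
acc = -0.005 * (2 * np.pi * 2) ** 2 * np.sin(2 * np.pi * 2 * t)
est = accel_to_displacement(acc)
print(np.max(np.abs(est - true_disp)))  # small away from the record edges
```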

Relevance: 100.00%

Abstract:

Quantile regression (QR) was first introduced by Roger Koenker and Gilbert Bassett in 1978. It is robust to outliers, which strongly affect the least squares estimator in linear regression. Instead of modeling the mean of the response, QR provides an alternative way to model the relationship between quantiles of the response and covariates. Therefore, QR can be widely used to solve problems in econometrics, environmental sciences and health sciences. Sample size is an important factor in the planning stage of experimental designs and observational studies. In ordinary linear regression, sample size may be determined based on either precision analysis or power analysis with closed-form formulas. There are also methods that calculate sample size for QR based on precision analysis, such as Jennen-Steinmetz and Wellek (2005). A method to estimate sample size for QR based on power analysis was proposed by Shao and Wang (2009). In this paper, a new method is proposed to calculate sample size based on power analysis under hypothesis tests of covariate effects. Even though an error distribution assumption is not necessary for QR analysis itself, researchers have to make assumptions about the error distribution and covariate structure in the planning stage of a study to obtain a reasonable estimate of sample size. In this project, both parametric and nonparametric methods are provided to estimate the error distribution. Since the method proposed can be implemented in R, the user is able to choose either a parametric distribution or nonparametric kernel density estimation for the error distribution. The user also needs to specify the covariate structure and effect size to carry out sample size and power calculations. The performance of the proposed method is further evaluated using numerical simulation. The results suggest that the sample sizes obtained from our method provide empirical powers that are close to the nominal power level, for example, 80%.
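
For orientation, power analysis of this kind can be approximated by Monte Carlo simulation. The sketch below (in Python rather than the paper's R implementation, with an assumed normal error distribution and a single covariate, so not the paper's proposed method) estimates empirical power at a given sample size for a Wald test of the slope at the median:

```python
# Simulation sketch (assumed normal errors and a single covariate; an
# illustration of power analysis for QR, not the paper's proposed method).
import numpy as np
import statsmodels.api as sm

def qr_power(n, beta1=0.5, tau=0.5, alpha=0.05, n_sim=500, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)                     # assumed covariate structure
        y = 1.0 + beta1 * x + rng.normal(size=n)   # assumed error distribution
        fit = sm.QuantReg(y, sm.add_constant(x)).fit(q=tau)
        if fit.pvalues[1] < alpha:                 # Wald test of the slope
            rejections += 1
    return rejections / n_sim

# Increase n until the empirical power reaches the nominal level (e.g. 80%).
for n in (50, 100, 150, 200):
    print(n, qr_power(n))
```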

Relevance: 100.00%

Abstract:

The ordinary principles of the law of negligence are applicable in the context of sport, including claims brought against volunteer and professional coaches. Adopting the perspective of the coach, this article intends to raise awareness of the emerging intersection between the law of negligence and sports coaching, by utilising an interdisciplinary analysis designed to better safeguard and reassure coaches mindful of legal liability. Detailed scrutiny of two cases concerning alleged negligent coaching, with complementary discussion of some of the ethical dilemmas facing modern coaches, reinforces the legal duty and obligation of all coaches to adopt objectively reasonable and justifiable coaching practices when interacting with athletes. Problematically, since research suggests that some coaching practice may be underpinned by “entrenched legitimacy” and “uncritical inertia”, it is argued that coach education and training should place a greater emphasis on developing a coach’s awareness and understanding of the evolving legal context in which they discharge the duty of care incumbent upon them.

Relevance: 100.00%

Abstract:

Innovation is a strategic necessity for the survival of today's organizations. The wide recognition of innovation as a competitive necessity, particularly in dynamic market environments, makes it an evergreen domain for research. This dissertation deals with innovation in small Information Technology (IT) firms in India. The IT industry in India has been a phenomenal success story of the last three decades, and is today facing a crucial phase in its history characterized by the need for fundamental changes in strategies, driven by innovation. This study, while motivated by the dynamics of changing times, importantly addresses the research gap on small firm innovation in Indian IT. The study addresses three main objectives: (a) drivers of innovation in small IT firms in India, (b) impact of innovation on firm performance, and (c) variation in the extent of innovation adoption in small firms. Product and process innovation were identified as the two most contextually relevant types of innovation for small IT firms. The antecedents of innovation were identified as Intellectual Capital, Creative Capability, Top Management Support, Organization Learning Capability, Customer Involvement, External Networking and Employee Involvement. The survey method was adopted for data collection, and the study unit was the firm. Surveys were conducted in 2014 across five South Indian cities. A small firm was defined as one with 10-499 employees. Responses from 205 firms were chosen for analysis. Rigorous statistical analysis was done to generate meaningful insights. The set of drivers of product innovation (Intellectual Capital, Creative Capability, Top Management Support, Customer Involvement, External Networking, and Employee Involvement) was different from that of process innovation (Creative Capability, Organization Learning Capability, External Networking, and Employee Involvement). Both product and process innovation had a strong impact on firm performance. It was found that firms that adopted a combination of product innovation and process innovation had the highest levels of firm performance. Product innovation and process innovation fully mediated the relationship between all seven antecedents and firm performance. The results of this study have several important theoretical and practical implications. To the best of the researcher's knowledge, this is the first time that an empirical study of firm-level innovation of this kind has been undertaken in India. A measurement model for product and process innovation was developed, and the drivers of innovation were established statistically. Customer Involvement, External Networking and Employee Involvement are elements of Open Innovation; all three had a strong association with product innovation, and the latter two had a strong association with process innovation. The results showed that the proclivity for Open Innovation is healthy in the Indian context. Practical implications are outlined regarding how firms can organize themselves for innovation, the human talent for innovation, and the right culture for innovation and for open innovation. While some specific examples of possible future studies have been recommended, the researcher believes that the study provides numerous opportunities to further this line of enquiry.

Relevance: 100.00%

Abstract:

The ability to understand written texts, i.e. to construct a coherent mental representation of text content, is a necessary prerequisite for successful development in school and beyond. It is therefore a central concern of the education system to diagnose reading difficulties early and to address them with targeted intervention programmes. This requires comprehensive knowledge of the cognitive subprocesses underlying reading comprehension, their interrelations and their development. The present dissertation aims to contribute to a comprehensive understanding of reading comprehension by experimentally investigating a selection of open questions. Study 1 examines the extent to which phonological recoding and orthographic decoding skills contribute to sentence and text comprehension, and how both skills develop in German primary school children from Grade 2 to Grade 4. The results suggest that both skills make significant and independent contributions to reading comprehension and that their relative contribution does not change across grade levels. Moreover, German second-graders already recognize the majority of written words in age-appropriate texts via orthographic comparison processes. Nevertheless, German primary school children apparently make continuous use of phonological information to optimize visual word recognition. Study 2 extends previous empirical research on one of the best-known models of reading comprehension, the Simple View of Reading (SVR; Gough & Tunmer, 1986). The study tests the SVR (Reading comprehension = Decoding × Comprehension) using optimized and methodologically stringent measures of the model constituents and examines its generalizability to German third- and fourth-graders. Study 2 shows that the SVR does not withstand a methodologically stringent test and cannot readily be generalized to German third- and fourth-graders. Only weak evidence was found for a multiplicative combination of decoding (D) and listening comprehension (C) skills. The fact that a considerable portion of the variance in reading comprehension (R) could not be explained by D and C indicates that the model is incomplete and may need to be supplemented by additional components. Study 3 investigates the processing of positive-causal and negative-causal coherence relations in German first- to fourth-graders and adults in reading and listening comprehension. In line with the Cumulative Cognitive Complexity approach (Evers-Vermeul & Sanders, 2009; Spooren & Sanders, 2008), Study 3 shows that processing negative-causal coherence relations and connectives is cognitively more demanding than processing positive-causal relations. Moreover, the comprehension of both types of coherence relations continues to develop over the primary school years and, for negative-causal relations, is not yet complete by the end of Grade 4. Study 4 demonstrates and discusses the usefulness of process-oriented reading tests such as ProDi-L (Richter et al., in press), which selectively assess individual differences in the cognitive component skills of reading comprehension. To this end, the construct validity of the ProDi-L subtest 'syntactic integration' is demonstrated as an example. Using explanatory item response models, it is shown that the test measures syntactic integration skills separately and can identify children with deficient syntactic skills. The reported findings contribute to a comprehensive understanding of the cognitive component skills of reading comprehension, which is essential for the optimal design of reading instruction and for the development of learning materials, reading instructions and textbooks. Moreover, it provides the basis for a meaningful diagnosis of individual reading difficulties and for the design of adaptive, targeted intervention programmes to foster reading comprehension in poor readers.

Relevance: 100.00%

Abstract:

Master's in Occupational Safety and Hygiene (Mestrado em Segurança e Higiene no Trabalho)

Relevance: 100.00%

Abstract:

Purpose: The purpose of this paper is to analyse differences in the drivers of firm innovation performance across sectors. The literature often makes the assumption that firms in different sectors differ in their propensity to innovate but not in the drivers of innovation. The authors empirically assess whether this assumption is accurate through a series of econometric estimations and tests. Design/methodology/approach: The data used are derived from the Irish Community Innovation Survey 2004-2006. A series of multivariate probit models are estimated and the resulting coefficients are tested for parameter stability across sectors using likelihood ratio tests. Findings: The results indicate that there is a strong degree of heterogeneity in the drivers of innovation across sectors. The determinants of process, organisational, new-to-firm and new-to-market innovation vary across sectors, suggesting that the pooling of sectors in an innovation production function may lead to biased inferences. Research limitations/implications: The implications of the results are that innovation policies targeted at stimulating innovation need to be tailored to particular industries. One-size-fits-all policies would seem inappropriate given the large degree of heterogeneity observed across the drivers of innovation in different sectors. Originality/value: The value of this paper is that it provides an empirical test of whether it is suitable to group sectoral data when estimating innovation production functions. Most papers simply include sectoral dummies, implying that only the propensity to innovate differs across sectors and that the slope coefficient estimates are in fact consistent across sectors.
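
The parameter-stability test described can be set up by comparing the log-likelihood of a pooled probit with the sum of the log-likelihoods of sector-specific probits. The sketch below assumes a simple data layout and illustrates that comparison; it is not the authors' exact specification:

```python
# Illustration of a likelihood ratio test for parameter stability across
# sectors (assumed data layout; not the authors' exact specification).
import statsmodels.api as sm
from scipy import stats

def lr_parameter_stability(df, y_col, x_cols, sector_col):
    """df: DataFrame with a binary innovation indicator, driver columns and a
    sector label. Returns the LR statistic, its degrees of freedom and p-value."""
    X = sm.add_constant(df[x_cols])
    pooled = sm.Probit(df[y_col], X).fit(disp=0)
    llf_sectors, n_params_sectors = 0.0, 0
    for _, g in df.groupby(sector_col):
        Xg = sm.add_constant(g[x_cols], has_constant="add")
        fit = sm.Probit(g[y_col], Xg).fit(disp=0)
        llf_sectors += fit.llf
        n_params_sectors += len(fit.params)
    lr = 2.0 * (llf_sectors - pooled.llf)
    dof = n_params_sectors - len(pooled.params)
    return lr, dof, stats.chi2.sf(lr, dof)   # rejection -> drivers differ by sector
```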

Relevance: 100.00%

Abstract:

This paper studies the use of play as a method to unlock creativity and innovation within a community of practice (a group of individuals who share a common interest and who see value in interaction to enhance their understanding). An analysis of communities of practice and the value of play informs evaluation of two case studies exploring the development of communities of practice, one within the discipline of videogames and one which bridges performing arts and videogames. The case studies provide qualitative data from which the potential of play as a method to inspire creativity and support the development of a potential community of practice is recognised. Establishing trust, disruption of process through play and reflection are key steps proposed in a ‘context provider’s framework’ for individuals or organisations to utilise in the design of activities to support creative process and innovation within a potential community of practice.

Relevance: 100.00%

Abstract:

The erosion processes resulting from the flow of fluids (gas-solid or liquid-solid) are encountered in nature and in many industrial processes. The common feature of these erosion processes is the interaction of the fluid (particle) with its boundary, resulting in the loss of material from the surface. This type of erosion is detrimental to the equipment used in pneumatic conveying systems. The puncture of pneumatic conveyor bends in industry causes several problems, among them: (1) escape of the conveyed product, causing health and dust hazards; (2) repairing and cleaning up after punctures necessitates shutting down conveyors, which affects the operation of the plant, thus reducing profitability. The most common occurrence of process failure in pneumatic conveying systems is when pipe sections at the bends wear away and puncture. The reason for this is that particles of varying speed, shape, size and material properties strike the bend wall with greater intensity than in straight sections of the pipe. Currently available models for predicting the lifetime of bends are inaccurate (they over-predict by 80%). The provision of an accurate predictive method would lead to improvements in the structure of the planned maintenance programmes of processes, thus reducing unplanned shutdowns and ultimately the downtime costs associated with these unplanned shutdowns. This is the main motivation behind the current research. The paper reports on two aspects of the first phase of the study undertaken for the current project: (1) development and implementation, and (2) testing of the modelling environment. The model framework encompasses Computational Fluid Dynamics (CFD) related engineering tools, based on Eulerian (gas) and Lagrangian (particle) approaches to represent the two distinct conveyed phases, to predict the lifetime of conveyor bends. The method attempts to account for the effect of erosion on the pipe wall via particle impacts, taking into account the angle of attack, impact velocity, shape/size and material properties of the wall and conveyed material, within a CFD framework. Only a handful of researchers use CFD as the basis for predicting the particle motion; see, for example, [1-4]. It is hoped that this will lead to more realistic predictions of the wear profile. Results for two three-dimensional test cases using the commercially available CFD code PHOENICS are presented. These are reported in relation to the impact intensity and sensitivity to the inlet particle distributions.
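
As background on how per-impact wear is commonly modelled within such a framework, the sketch below evaluates a Finnie-type ductile erosion law, in which material loss scales with impact velocity raised to a power and with a function of the impact angle (assumed constants; a generic correlation, not the wear model used in the paper):

```python
# Generic Finnie-type ductile erosion law (assumed constants k and n; an
# illustration of angle- and velocity-dependent wear, not the paper's model).
import math

def finnie_angle_factor(a):
    """Finnie's impact-angle function for ductile materials (a in radians)."""
    if a <= math.atan(1.0 / 3.0):
        return math.sin(2.0 * a) - 3.0 * math.sin(a) ** 2
    return math.cos(a) ** 2 / 3.0

def erosion_per_impact(v, a, k=2.0e-9, n=2.0):
    """Relative wall material removed by one particle impact."""
    return k * v ** n * finnie_angle_factor(a)

# Shallow impacts near 20 degrees erode a ductile wall far more than steeper
# ones at the same velocity, which is why bend geometry matters so much.
print(erosion_per_impact(20.0, math.radians(20)),
      erosion_per_impact(20.0, math.radians(60)))
```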

Relevance: 100.00%

Abstract:

Discovery Driven Analysis (DDA) is a common feature of OLAP technology for analyzing structured data. In essence, DDA helps analysts to discover anomalous data by highlighting 'unexpected' values in the OLAP cube. By giving the analyst indications of which dimensions to explore, DDA speeds up the process of discovering anomalies and their causes. However, Discovery Driven Analysis (and OLAP in general) is only applicable to structured data, such as records in databases. We propose a system that extends DDA technology to semi-structured text documents, that is, text documents accompanied by a small amount of structured data. Our system pipeline consists of two stages: first, the text part of each document is structured around user-specified dimensions, using the semi-PLSA algorithm; then, we adapt DDA to these fully structured documents, thus enabling DDA on text documents. We present some applications of this system in OLAP analysis and show how scalability issues are solved. Results show that our system can handle reasonably sized document datasets in real time, without any need for pre-computation.
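
To illustrate the 'unexpected value' idea that DDA relies on, the sketch below scores the cells of a small aggregate table against the values predicted from its marginals and flags large standardized residuals (a simplified stand-in for DDA's exception measures, not the authors' system):

```python
# Simplified stand-in for DDA's exception measure (not the authors' system):
# compare each cell of an aggregate with the value expected from its marginals
# and flag cells whose standardized residual is large.
import numpy as np

def surprise_scores(cube):
    """Standardized residuals of a 2-D aggregate against an independence model."""
    total = cube.sum()
    expected = np.outer(cube.sum(axis=1), cube.sum(axis=0)) / total
    return (cube - expected) / np.sqrt(expected)

sales = np.array([[120.0, 130.0, 125.0],
                  [118.0, 260.0, 122.0],   # one anomalous cell
                  [121.0, 128.0, 127.0]])
print(np.argwhere(np.abs(surprise_scores(sales)) > 3.0))  # cell worth drilling into
```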