Abstract:
In adults with non-promyelocytic acute myeloid leukemia (AML), high-dose cytarabine consolidation therapy has been shown to influence survival in selected patients, although the appropriate doses and schemes have not been defined. We evaluated survival after calculating the actual dose of cytarabine that patients received for consolidation therapy and divided them into 3 groups according to dose. We conducted a single-center, retrospective study involving 311 non-promyelocytic AML patients with a median age of 36 years (16-79 years) who received curative treatment between 1978 and 2007. The 131 patients who received cytarabine consolidation were assigned to study groups by their cytarabine dose protocol. Group 1 (n=69) received <1.5 g/m² every 12 h on 3 alternate days for up to 4 cycles. The remaining patients received high-dose cytarabine (≥1.5 g/m² every 12 h on 3 alternate days for up to 4 cycles). The actual dose received during the entire consolidation period in these patients was calculated, allowing us to divide them into 2 additional groups: group 2 (n=27) received an intermediate-high dose (<27 g/m²), and group 3 (n=35) received a very high dose (≥27 g/m²). Among the 311 patients receiving curative treatment, the 5-year survival rate was 20.2% (63 patients). The cytarabine consolidation dose was an independent determinant of survival in multivariate analysis; age, karyotype, induction protocol, French-American-British classification, and de novo leukemia were not. Comparisons showed that the risk of death was higher in the intermediate-high-dose group 2 (hazard ratio [HR]=4.51; 95% confidence interval [CI]: 1.81-11.21) and the low-dose group 1 (HR=4.43; 95% CI: 1.97-9.96) than in the very-high-dose group 3, with no significant difference between those two groups. Our findings indicate that very-high-dose cytarabine during consolidation in adults with non-promyelocytic AML may improve survival.
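For concreteness, the grouping arithmetic implied by the protocol above can be sketched as follows (a minimal sketch in Python; the cycle structure follows the abstract's protocol description, while the example doses and the per-cycle accrual are illustrative assumptions rather than the study's patient-level calculation):

```python
# Cumulative consolidation dose per the protocol in the abstract:
# cytarabine every 12 h (2 doses/day) on 3 alternate days per cycle,
# for up to 4 cycles. Thresholds: group 1 uses <1.5 g/m2 per dose;
# among high-dose patients, <27 g/m2 total = group 2, >=27 g/m2 = group 3.

DOSES_PER_DAY = 2
DAYS_PER_CYCLE = 3

def total_dose(dose_g_m2: float, cycles: int) -> float:
    """Cumulative cytarabine dose in g/m2 over the whole consolidation."""
    return dose_g_m2 * DOSES_PER_DAY * DAYS_PER_CYCLE * cycles

def assign_group(dose_g_m2: float, cycles: int) -> int:
    if dose_g_m2 < 1.5:
        return 1                       # low-dose protocol
    return 2 if total_dose(dose_g_m2, cycles) < 27 else 3

# A patient tolerating 3 g/m2 for only 1 cycle accrues 18 g/m2 (group 2);
# the same dose over 2 or more cycles reaches >=36 g/m2 (group 3).
print(assign_group(3.0, 1), assign_group(3.0, 2))   # -> 2 3
```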
Abstract:
The main objective of this work was to study the possibilities of implementing laser cutting in a paper making machine. Laser cutting technology was considered as a replacement for the conventional methods used for longitudinal cutting in paper making machines, such as edge trimming at different stages of the paper making process and tambour roll slitting. Laser cutting of paper was first tested in the 1970s; since then, laser cutting and processing have been applied to paper materials in industry with varying degrees of success. Laser cutting can be employed for longitudinal cutting of the paper web in the machine direction. The most common conventional methods used in paper making machines are water jet cutting and rotating slitter blades. Cutting with a CO2 laser fulfils the basic requirements for cutting quality, applicability to the material, and cutting speed in all locations where longitudinal cutting is needed. A literature review described the advantages, disadvantages, and challenges of laser technology applied to cutting paper material, with particular attention to cutting a moving paper web. Based on the laser cutting capabilities studied and the problems identified with conventional cutting technologies, a preliminary selection of the most promising application area was carried out. Laser cutting (trimming) of the paper web edges at the wet end was estimated to be the most promising area for implementation. This assessment was based on the rate of web breaks: up to 64% of all web breaks were found to occur at the wet end, particularly at the so-called open draws, where the paper web is transferred unsupported by wire or felt. The distribution of web breaks in the machine cross direction revealed that defects at the paper web edge were the main cause of tear initiation and consequent web breaks. It was assumed that laser cutting could improve the tensile strength of the cut edge, owing to the high cutting quality and the sealing effect on the edge after cutting; studies of laser ablation of cellulose support this claim. The linear energy needed for cutting was calculated from the properties of the paper web at the intended cutting location and verified with a series of laser cutting trials. The laser energy actually needed for cutting deviated from the calculated values, which can be explained by differences in radiative heat transfer during laser cutting and by the different absorption characteristics of dry and moist paper. Laser-cut samples, both dry and moist (dry matter content about 25-40%), were tested for strength properties. Tensile strength and strain at break of the laser-cut samples were similar to the corresponding values of non-laser-cut samples. The chosen method, however, did not address the tensile strength of the laser-cut edge in particular, so the assumption that laser cutting improves strength properties was not fully proved. The possible contamination of mill broke (recycling of the trimmed edge) by laser cutting was also investigated: laser-cut samples, both dry and moist, were tested for dirt particle content. The tests revealed that dust particles can accumulate on the surface of moist samples, which has to be taken into account to prevent contamination of the pulp suspension when trim waste is recycled. Material loss due to evaporation during laser cutting and the amount of solid residue after cutting were also evaluated.
Edge trimming with a laser would produce about 0.25 kg/h of solid residue and 2.5 kg/h of material lost to evaporation. Schemes for implementing laser cutting and the required laser equipment were discussed. In general, a laser cutting system would require two laser sources (one per cutting zone), a set of beam transfer and focusing optics, and cutting heads. To increase the reliability of the system, it was suggested that each laser source have double capacity, so that cutting could continue with one laser source working at full capacity for both cutting zones. Laser technology is already at the required level and needs no additional development; moreover, the headroom for speed increases is large, thanks to the availability of high-power laser sources, which can support the trend toward higher paper machine speeds. The laser cutting system would require a special roll to support cutting; a scheme for such a roll was proposed, along with its integration into the paper making machine. Laser cutting could be placed at the central roll of the press section, before the so-called open draw where many web breaks occur, where it has the potential to improve the runnability of the machine. The economic performance of laser cutting was assessed by comparing a laser cutting system with a water jet cutting system working under the same conditions. Laser cutting would still be about twice as expensive as water jet cutting, mainly because of the high investment cost of laser equipment and the poor energy efficiency of CO2 lasers. Another factor is that laser cutting causes material loss through evaporation, whereas water jet cutting causes almost none. Despite the difficulties of implementing laser cutting in a paper making machine, implementation can still be beneficial. Crucial here is the possibility of improving the strength properties of the cut edge and consequently reducing the number of web breaks. In addition, laser cutting can sustain cutting speeds that exceed the current speeds of paper making machines, which is another argument for considering laser cutting technology in the design of new high-speed paper making machines.
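As a rough illustration of the linear-energy calculation mentioned in the abstract above, the following sketch estimates the energy per metre of cut from web properties (all numerical values, such as basis weight, kerf width, dry matter content, thermal constants, and web speed, are assumptions for illustration, not figures from the thesis):

```python
# Rough energy balance for laser cutting of a moist paper web.
# All web parameters here are illustrative assumptions, not values
# from the study.

BASIS_WEIGHT = 0.060      # kg/m^2 dry grammage (60 g/m^2 paper)
KERF_WIDTH = 0.3e-3       # m, width of material removed by the beam
DRY_MATTER = 0.30         # 30% dry matter content at the wet end
C_P_WATER = 4186.0        # J/(kg K)
C_P_FIBRE = 1340.0        # J/(kg K), approximate for cellulose
H_VAP = 2.26e6            # J/kg, latent heat of vaporization of water
H_DECOMP = 1.0e6          # J/kg, assumed decomposition energy of fibre
DT = 80.0                 # K, heating from web temperature to 100 C

def linear_cutting_energy() -> float:
    """Energy per metre of cut (J/m) to remove the kerf material."""
    wet_mass_per_m = BASIS_WEIGHT / DRY_MATTER * KERF_WIDTH  # kg per m of cut
    m_water = wet_mass_per_m * (1.0 - DRY_MATTER)
    m_fibre = wet_mass_per_m * DRY_MATTER
    heating = (m_water * C_P_WATER + m_fibre * C_P_FIBRE) * DT
    phase_change = m_water * H_VAP + m_fibre * H_DECOMP
    return heating + phase_change

e_lin = linear_cutting_energy()   # ~130 J/m with the values above
web_speed = 20.0                  # m/s, assumed machine speed
print(f"linear energy: {e_lin:.1f} J/m")
print(f"required beam power: {e_lin * web_speed / 1e3:.2f} kW per cut")
```

The required beam power scales linearly with web speed, which is why the thesis's observation that measured cutting energy deviated from such calculations (radiative losses, different absorption of dry and moist paper) matters for sizing the laser sources.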
Abstract:
The application of VSC-HVDC technology throughout the world has turned out to be an efficient solution for integrating a large share of wind power into different power systems. The technology enhances the overall reliability of the grid through active and reactive power control schemes, which allow the frequency and the voltage on the busbars of end consumers to be maintained at the level required by the network operator. This master's thesis focuses on the existing and planned wind farms and on the electric power system of the Åland Islands. The goal is to analyze the wind conditions of the islands and predict the possible production of the existing and planned wind farms with the help of the WAsP software. Further, to investigate the influence of increased wind power, a simulation model of the electric grid and the VSC-HVDC system is developed in PSCAD, and the grid response to different wind power production cases is examined with respect to the grid code requirements to ensure the stability of the power system.
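The active and reactive power support mentioned above is commonly implemented with droop characteristics on the converter references; the sketch below shows the generic idea only (gains, setpoints, and limits are illustrative assumptions, and this is not necessarily the control scheme modelled in the thesis):

```python
# Illustrative P-f and Q-V droop control for a grid-side VSC
# (generic textbook scheme; gains and limits are assumed).

F_NOM = 50.0        # Hz, nominal frequency
V_NOM = 1.0         # p.u., nominal busbar voltage
KP_F = 20.0         # p.u. active power per Hz of frequency deviation
KQ_V = 5.0          # p.u. reactive power per p.u. voltage deviation
P_MAX, Q_MAX = 1.0, 0.5    # converter limits in p.u.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def droop_references(f_meas, v_meas, p_sched=0.8, q_sched=0.0):
    """Return (P*, Q*) converter references in p.u.

    Active power is raised when frequency falls below nominal and
    reduced when it rises; reactive power injection rises when the
    busbar voltage sags, supporting the end-consumer busbars.
    """
    p_ref = clamp(p_sched + KP_F * (F_NOM - f_meas), -P_MAX, P_MAX)
    q_ref = clamp(q_sched + KQ_V * (V_NOM - v_meas), -Q_MAX, Q_MAX)
    return p_ref, q_ref

# Example: frequency sag to 49.9 Hz and voltage sag to 0.97 p.u.
print(droop_references(49.9, 0.97))   # -> (1.0, 0.15), P clipped at its limit
```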
Abstract:
This article analyzes the changes in economic policy that Mexico would have to adopt if a national development project were implemented. Starting from an evaluation of the main economic and political outcomes of Vicente Fox's administration, the author proposes an alternative development strategy that would permit Mexico to overcome economic stagnation. That strategy would be based on recovering the internal market as the dynamic focus of the economy, with the purpose of satisfying the basic needs of the people. To succeed, this strategy must confront the "critical knots" of the neoliberal model: reversing the uneven distribution of income; abandoning restrictive monetary, fiscal, and exchange rate policies; and mobilizing the economic surplus by means of a profound revision of debt service schemes. It concludes that implementing a national development project requires a political and economic strategy to dismantle neoliberalism, which is an antinational structure of power.
Abstract:
This paper analyses the most important experiences with school accountability (SA) policy in Brazil. The analysis suggests that their impacts on the quality of education are not significant because either (i) the policy does not incorporate a system in which the school is held responsible for its students' performance, or (ii) the incentive schemes are not appropriately designed. Finally, it discusses the main barriers to the adoption of an efficient SA policy at the national level in Brazil.
Abstract:
An alternative for education policy in Brazil: school accountability. This paper examines the school accountability (SA) policies adopted in the US. Significant impacts on the quality of education occur when SA incorporates a set of sanctions and rewards to schools based on their students' performance. Compared with other policies, it is also more efficient. Potential problems with adopting SA (bias toward cognitive ability, gaming, and difficulty in measuring the school's contribution) can be overcome. The analysis suggests that SA should be considered as an alternative for improving the quality of education in Brazil.
Abstract:
A steadily growing share of distributed generation in the overall generating capacity of electrical power systems is currently a worldwide tendency. It is driven by several factors: the difficulty of reinforcing and maintaining the distribution networks of large cities; widening environmental concerns, which favour both energy efficiency measures and the installation of renewable generation, which is inherently distributed; increased power quality and reliability needs; and progress in information technology, which makes it possible to harmonize the needs and interests of generators and consumers of different energy types. The volume of system-interconnected distributed generation has now reached a level at which it broadly affects system operation under emergency and post-emergency conditions in several EU countries. The previously acceptable approach of tripping these facilities pre-emptively in case of a fault, to prevent damage to generating equipment and maloperation of relay protection and automation, is therefore no longer applicable. In addition, the withstand capability and transient electromechanical stability of generating technologies interconnected near load nodes have improved significantly since Low Voltage Ride-Through requirements, followed by the corresponding techniques, were introduced in grid codes. Both aspects mean that relay protection and auto-reclosing must now operate in the presence of distributed generation that was generally connected after the grid planning and construction phases. This paper proposes solutions to the emerging need to ensure correct operation of the equipment in question with the least possible grid reinforcement, treating separately each type of distributed generation technology that has reached technical maturity to date and the network's protection. The new generating technologies are represented by equivalents for the calculation of the initial steady-state short-circuit current used to dimension current-sensing relay protection, following widely adopted short-circuit calculation practices such as IEC 60909 and VDE 0102. The phenomenon of unintentional islanding, which affects auto-reclosing, is addressed, and the protection schemes used to eliminate a sustained island are listed and characterized by reliability and implementation-related factors; these schemes also form a crucial part of realizing the proposed measures for relieving protection operation.
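For orientation, the IEC 60909 initial symmetrical short-circuit current used for such dimensioning is I_k'' = c·U_n / (√3·|Z_k|); a minimal sketch follows (the voltage factor, network voltage, and impedance are illustrative values, not taken from the paper):

```python
# Initial symmetrical short-circuit current per IEC 60909:
#   I_k'' = c * U_n / (sqrt(3) * |Z_k|)
# where c is the voltage factor, U_n the nominal line-to-line voltage,
# and Z_k the equivalent short-circuit impedance at the fault location.
# All network values below are assumed for illustration.
import math

def ikss(u_n_kv: float, z_k: complex, c: float = 1.1) -> float:
    """Initial symmetrical short-circuit current in kA (c = 1.1 is the
    usual c_max for maximum fault currents in MV networks)."""
    return c * u_n_kv / (math.sqrt(3) * abs(z_k))

# 20 kV feeder, equivalent impedance seen from the fault: 0.5 + j2.0 ohm
print(f"I_k'' = {ikss(20.0, complex(0.5, 2.0)):.2f} kA")   # ~6.16 kA
```

The point of the equivalents discussed in the paper is to give each distributed generation technology a representation compatible with this kind of calculation, so that current-sensing protection can still be dimensioned with standard practice.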
Abstract:
If emerging markets are to achieve their objective of joining the ranks of industrialized, developed countries, they must use their economic and political influence to support radical change in the international financial system. This working paper recommends John Maynard Keynes's "clearing union" as a blueprint for reform of the international financial architecture that could address emerging market grievances more effectively than current approaches. Keynes's proposal for the postwar international system sought to remedy some of the same problems currently facing emerging market economies. It was based on the idea that financial stability was predicated on a balance between imports and exports over time, with any divergence from balance providing automatic financing of the debit countries by the creditor countries via a global clearinghouse or settlement system for trade and payments on current account. This eliminated national currency payments for imports and exports; countries received credits or debits in a notional unit of account fixed to national currency. Since the unit of account could not be traded, bought, or sold, it would not be an international reserve currency. The credits with the clearinghouse could only be used to offset debits by buying imports, and if not used for this purpose they would eventually be extinguished; hence the burden of adjustment would be shared equally - credit generated by surpluses would have to be used to buy imports from the countries with debit balances. Emerging market economies could improve upon current schemes for regionally governed financial institutions by using this proposal as a template for the creation of regional clearing unions using a notional unit of account.
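To make the clearing mechanics concrete, here is a minimal sketch of such a ledger (the class name, the expiry rule, and the example trades are illustrative simplifications; Keynes's plan included further provisions such as overdraft quotas and charges on persistent balances):

```python
# Minimal sketch of a clearing union: imports and exports are settled
# as credits and debits in a notional unit of account, never in
# national currency. Credits cannot be traded, bought, or sold; they
# can only be spent on imports from member countries, otherwise they
# eventually lapse. The expiry rule below is a deliberate simplification.

class ClearingUnion:
    def __init__(self, members):
        self.balances = {m: 0 for m in members}   # notional units of account

    def settle_trade(self, exporter, importer, value):
        """Record a shipment: the exporter is credited, the importer debited."""
        self.balances[exporter] += value
        self.balances[importer] -= value

    def extinguish_unused_credits(self):
        """Credits not used to buy imports eventually lapse, pushing
        surplus countries to spend them; adjustment is thus symmetric."""
        for m, b in self.balances.items():
            if b > 0:
                self.balances[m] = 0

cu = ClearingUnion(["A", "B", "C"])
cu.settle_trade(exporter="A", importer="B", value=100)
cu.settle_trade(exporter="B", importer="A", value=60)
print(cu.balances)   # {'A': 40, 'B': -40, 'C': 0}
```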
Abstract:
The Zubarev equation of motion method has been applied to an anharmonic crystal of O(λ⁴). All possible decoupling schemes have been interpreted in order to determine finite-temperature expressions for the one-phonon Green's function (and self-energy) to O(λ⁴) for a crystal in which every atom is on a site of inversion symmetry. In order to provide a check of these results, the Helmholtz free energy expressions derived from the self-energy expressions have been shown to agree in the high-temperature limit with the results obtained from the diagrammatic method. Expressions for the correlation functions that are related to the mean square displacement have been derived to O(λ⁴) in the high-temperature limit.
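For reference, the double-time retarded Green's function and the equation of motion at the heart of the Zubarev method take the standard textbook form (not quoted from the thesis itself):

```latex
% Zubarev double-time retarded Green's function (commutator form,
% appropriate for phonon operators):
G_{AB}(t-t') = \langle\langle A(t); B(t') \rangle\rangle
             = -\, i\,\theta(t-t')\,\big\langle [A(t), B(t')] \big\rangle ,
% and its equation of motion in frequency space:
\omega \,\langle\langle A; B \rangle\rangle_{\omega}
  = \big\langle [A, B] \big\rangle
  + \langle\langle [A, H]; B \rangle\rangle_{\omega} .
```

Successive application of the second relation generates ever higher-order Green's functions; the decoupling schemes mentioned above truncate this hierarchy to obtain closed expressions for the one-phonon Green's function and self-energy.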
Abstract:
Manpower is a basic resource. It is the indispensable means of converting other resources to mankind's use and benefit. As a process of increasing the knowledge, skills, and dexterity of the people of a society, manpower development is the most fundamental means of enabling a nation to acquire the capacities to bring about its desired future state of affairs: a mightier and wealthier nation. Singapore's brief nation-building history justifies the emphasis accorded to the importance of good quality human resources and manpower development in economic and socio-political development. As a tiny island-state with a poor natural resource base, Singapore's long-term survival and development depend ultimately upon the quality and the creative energy of her people. In line with the nation-building goals and strategies of the Republic, as conditioned by her objective setting, Singapore's basic manpower development premise has been one of "quality and not quantity". While implementing the "stop-at-two" family planning and population control programs and the relevant immigration measures to guard against the prospect of a "population explosion", the Government has energetically fostered various educational programs, including vocational training schemes, adult education programs, the youth movement, and the national service scheme, to improve the quality of Singaporeans. There is no denying that some of the manpower development measures taken by the Government have imposed sacrifice and hardship on Singapore's citizens. Nevertheless, they are the basic conditions for the island-Republic's long-term survival and development. It is essential to note that Singapore's continuing existence and phenomenal success are largely attributable to the will, capacities, and efforts of her leaders and people. In the final analysis, the wealth and the strength of a nation are based upon its ability to conserve, develop, and utilize effectively the innate capacities of its people. This is true not only of Singapore but necessarily of other developing nations as well. It can be safely presumed that since most developing states' concerns about the quality of their human resources and the progress of their nation-building work are inextricably bound to those about the quantity of their population, the "quality and not quantity" motto of Singapore's manpower development programs can also serve as their guiding principle.
Abstract:
We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high-temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest-order perturbation theory results; the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K, and Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. An excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining contributions of O(λ⁴) which are not included in ITER, which we call E(FULL). We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk modulus, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴, and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g., RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definite improvement over the λ² PT for both potentials.
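For context, the harmonic phonon-sum expression on which these anharmonic corrections build, and its high-temperature limit, are (a standard textbook relation, not a formula quoted from the thesis):

```latex
% Harmonic mean square displacement as a sum over phonon modes (q, j)
% of a monatomic crystal with N atoms of mass M:
\langle u^2 \rangle
  = \frac{\hbar}{2NM} \sum_{\mathbf{q},j}
    \frac{1}{\omega_j(\mathbf{q})}
    \coth\!\left(\frac{\hbar \omega_j(\mathbf{q})}{2 k_B T}\right)
  \;\longrightarrow\;
    \frac{k_B T}{NM} \sum_{\mathbf{q},j} \frac{1}{\omega_j^2(\mathbf{q})}
  \qquad (k_B T \gg \hbar\omega).
```

The Green's function method summarized in the abstract resums a whole class of anharmonic corrections beyond this harmonic term in that same high-temperature limit.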
Abstract:
This thesis deals with the nature of ignorance as it was interpreted in the Upaniṣadic tradition, specifically in Advaita Vedanta, and in early and Mahayana Buddhism, especially in the Madhyamika school. The approach is historical and comparative. It examines the early thoughts of both the Upaniṣads and Buddhism about avidya (ignorance), shows how the notion was treated by the more speculative and philosophically oriented schools which based themselves on the early works, and sees how their views differ. The thesis will show that the Vedanta tended to treat avidya as a topic for metaphysical speculation as the school developed, drifting from its initial existential concerns, while the Madhyamika remained in contact with the existential concerns evident in the first discourses of the Buddha. The word "notion" has been chosen for referring to avidya, even though it may have non-intellectual and emotional connotations, to avoid more popular alternatives such as "concept" or "idea". In neither the Upaniṣads, Advaita Vedanta, nor Buddhism is ignorance merely a concept or an idea; only in a secondary sense, in texts and speech, does it become one. Avidya has more to do with the lived situation in which man finds himself, with the subject-object separation in which he feels he exists, than with intellectual constructs. Western thought has begun to realize the same with concerns such as being in modern ontology, and has chosen to speak about it in terms of the question of being. Avidya, however, is not a 'question'. If questions were to be put regarding the nature of avidya, they would be more of the sort "What is not avidya?", though even here language bestows a status on it which avidya does not have. In considering a work of the Eastern tradition, we face the danger of imposing Western concepts on it. Granted that avidya is customarily rendered in English as ignorance, the ways in which the East and the West view ignorance differ. Pedagogically, the European cultures, grounded in the ancient Greek culture, view ignorance as a lack or an emptiness. A child is ignorant of certain things, and the purpose of formal education, in fact if not in theory, is to fill him with enough knowledge so that he can cope with the complexities and the expectations of society. On another level, we feel that study and research will lead to the discovery of solutions, which we now lack, for problems now defying solution. The East, on the other hand, sees avidya in a different light. Ignorance is not a lack, but a presence. Religious and philosophical literature directs its efforts not towards acquiring something new, but at removing the ideas and opinions that individuals have formed about themselves and the world. When that is fully accomplished, say the sages, then Wisdom, which has been obscured by those opinions, will present itself. Nothing new has to be learned, though we do have to 'learn' that much. The growing interest in the West in Eastern religions and philosophies may, in time, influence our theoretical and practical approaches to education and learning, not only in the established educational institutions, but in religious, psychological, and spiritual activities as well. However, the requirements of this thesis do not permit a formulation of revolutionary method or a call to action.
It focuses instead on the textual arguments which attempt to convince readers that the world in which they take themselves to exist is not, in essence, real; on the ways in which the limitations of language are disclosed; and on the provisional and limited schemes that are built up to help students see through their ignorance. The metaphysics are provisional because they act only as spurs and guides. Both the Upaniṣadic and Buddhist traditions dealt with here stress that language constantly fails to encompass the Real, so even terms such as 'the Real' or 'the Absolute' serve only to lead to a transcendent experience. The sections dealing with the Upaniṣads and Advaita Vedanta show some of the historical evolution of the notion of avidya, how it was dealt with as maya, and the questions that arose as to its locus. With Gauḍapāda we see the beginnings of a more abstract treatment of the topic and the influence of Buddhism. Though Śaṅkara's interest was primarily directed towards constructing a philosophy to help others attain mokṣa (liberation), he too introduced technical terminology not found in the works of his predecessors. His work is impressive, but areas of it are incomplete. A number of his followers tried to complete the systematic presentation of his insights. Their work focuses on explanations of adhyasa (superimposition), the locus and object of ignorance, and the means by which Brahman takes itself to be the jiva and the world. The section on early Buddhism examines avidya in the context of the four truths, together with duḥkha (suffering), the role it plays in the chain of dependent causation, and the problems that arise with the doctrine of anatman. With the doctrines of early Buddhism as a base, the Madhyamika elaborated questions that the Buddha had said tended not to edification. One of these had to do with own-being, or svabhava. This serves as a centre around which a discussion of ignorance unfolds, both individual and collective. There follows a treatment of the cessation of ignorance as it is discussed within this school. The final section presents the similarities and differences in the natures of ignorance in the two traditions and discusses the factors responsible for them.
Abstract:
Two groups of rainbow trout were acclimated to 2°, 10°, and 18°C. Plasma sodium, potassium, and chloride levels were determined for both. One group was employed in the estimation of branchial and renal (Na⁺-K⁺)-stimulated, (HCO₃⁻)-stimulated, and (Mg²⁺)-dependent ATPase activities, while the other was used in the measurement of carbonic anhydrase activity in the blood, gill, and kidney. Assays were conducted using two incubation temperature schemes. One provided for incubation of all preparations at a common temperature of 25°C, a value equivalent to the upper incipient lethal level for this species. In the other procedure the preparations were incubated at the appropriate acclimation temperature of the sampled fish. Trout were able to maintain plasma sodium and chloride levels essentially constant over the temperature range employed. The different incubation temperature protocols produced different levels of activity and, in some cases, contrary trends with respect to acclimation temperature. This information was discussed in relation to previous work on gill and kidney. The standing-gradient flow hypothesis was discussed with reference to the structure of the chloride cell, known thermally induced changes in ion uptake, and the enzyme activities obtained in this study. Modifications of the model of gill ion uptake suggested by Maetz (1971) were proposed, resulting in high- and low-temperature models. In short, ion transport at the gill at low temperatures appears to involve sodium and chloride uptake by heteroionic exchange mechanisms working in association with carbonic anhydrase. Gill (Na⁺-K⁺)-ATPase and erythrocyte carbonic anhydrase seem to provide the supplemental uptake required at higher temperatures. It appears that the kidney is prominent in ion transport at low temperatures, while the gill is more important at high temperatures. Linear regression analyses involving weight, plasma ion levels, and enzyme activities indicated several trends, the most significant being the interrelationship observed between plasma sodium and chloride. This, and other data obtained in the study, was considered in light of the theory that a link exists between plasma sodium and chloride regulatory mechanisms.
Abstract:
Understanding the machinery of gene regulation to control gene expression has been one of the main focuses of bioinformaticians for years. We use a multi-objective genetic algorithm to evolve a specialized version of side effect machines for degenerate motif discovery. We compare some suggested objectives for the motifs they find, test different multi-objective scoring schemes and probabilistic background sequence models, and report our results on a synthetic dataset and some biological benchmarking suites. We conclude with a comparison of our algorithm with some widely used motif discovery algorithms in the literature and suggest future directions for research in this area.
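As an illustration of the multi-objective scoring such an algorithm relies on, here is a minimal Pareto-dominance check and non-dominated filter (the objective names are illustrative assumptions; the thesis's actual objectives and the side effect machine encoding are not reproduced here):

```python
# Minimal multi-objective (Pareto) machinery of the kind used when
# evolving motif finders: candidate motifs are scored on several
# objectives at once, and selection keeps the non-dominated front.
# Objective names and scores below are illustrative assumptions.

def dominates(a, b):
    """True if score vector `a` Pareto-dominates `b` (all objectives
    maximized): a is no worse on every objective and better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of (motif, scores) pairs."""
    return [
        (m, s) for m, s in population
        if not any(dominates(s2, s) for _, s2 in population if s2 != s)
    ]

# Scores: (sequences_matched, information_content, -background_hits)
population = [
    ("TATAAT", (42, 9.1, -3)),
    ("TATAWT", (45, 8.7, -5)),   # W = A/T, a degenerate position
    ("TATAAA", (40, 8.0, -6)),   # dominated by "TATAAT" on all three
]
print(pareto_front(population))   # keeps the first two motifs
```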
Abstract:
Employees of an organization often use a personal classification scheme to organize the electronic documents under their direct control, which suggests that other employees may have difficulty retrieving those documents and that the organization risks losing documentation. No empirical study had been conducted to date to verify the extent to which personal classification schemes allow, or even facilitate, the retrieval of electronic documents by third parties, for example in collaborative work or when a file must be reconstituted. The first objective of our research was to describe the characteristics of personal classification schemes used to organize and classify electronic administrative documents. The second objective was to verify, in a controlled environment, the differences in the effectiveness of electronic document retrieval as a function of the classification scheme used. We wanted to verify whether a document could be retrieved with the same effectiveness regardless of the classification scheme used. Data collection was carried out in two stages to meet these objectives. We first identified the structural, logical, and semantic characteristics of 21 classification schemes used by employees of the Université de Montréal to organize and classify the electronic documents under their direct control. We then compared, in a controlled experiment, the ability of a group of 70 participants to retrieve electronic documents using five classification schemes with varied structural, logical, and semantic characteristics. Three variables were used to measure retrieval effectiveness: the proportion of documents retrieved, the mean time (in seconds) required to retrieve the documents, and the proportion of documents retrieved on the first attempt. The results reveal several structural, logical, and semantic characteristics common to a majority of personal classification schemes: a broad macro-structure; a shallow, complex, and unbalanced structure; grouping by theme; alphabetical ordering of classes; etc. The results of the analyses of variance reveal significant differences in the effectiveness of electronic document retrieval as a function of the structural, logical, and semantic characteristics of the classification scheme used. A classification scheme characterized by a narrow macro-structure and a logic based partially on division into activity classes increases the probability of retrieving documents more quickly. Semantically, explicit naming of classes (for example, by using definitions or by avoiding acronyms and abbreviations) increases the probability of successful retrieval. Finally, a classification scheme characterized by a narrow macro-structure, a logic based partially on division into activity classes, and a semantics that uses few abbreviations increases the probability of retrieving documents on the first attempt.
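The three effectiveness measures above are straightforward to compute from experiment logs; here is a minimal sketch (the log format, field names, and data are assumptions for illustration, not the study's records):

```python
# Retrieval-effectiveness measures used in the experiment, computed
# from hypothetical trial logs: each record is one retrieval task.
from statistics import mean

trials = [
    # (scheme, found, seconds, attempts)
    ("scheme_A", True, 48.2, 1),
    ("scheme_A", True, 95.0, 3),
    ("scheme_A", False, 180.0, 5),
    ("scheme_B", True, 30.5, 1),
]

def effectiveness(trials, scheme):
    rows = [t for t in trials if t[0] == scheme]
    found = [t for t in rows if t[1]]
    return {
        "proportion_retrieved": len(found) / len(rows),
        "mean_time_s": mean(t[2] for t in found),        # over successes
        "first_try_proportion": sum(t[3] == 1 for t in found) / len(rows),
    }

print(effectiveness(trials, "scheme_A"))
# proportions ~0.67 and ~0.33, mean time 71.6 s for the sample data above
```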