985 results for Usefulness


Relevance:

10.00%

Publisher:

Abstract:

The problem of the relevance and usefulness of extracted association rules is of primary importance because, in most cases, real-life databases lead to several thousand association rules with high confidence, among which are many redundancies. Using the closure of the Galois connection, we define two new bases for association rules whose union is a generating set for all valid association rules with their support and confidence. These bases are characterized using frequent closed itemsets and their generators; they consist of the non-redundant exact and approximate association rules having minimal antecedents and maximal consequents, i.e. the most relevant association rules. Algorithms for extracting these bases are presented, and results of experiments carried out on real-life databases show that the proposed bases are useful and that their generation is not time-consuming.
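To make the construction of the exact-rule basis concrete, the following minimal Python sketch assumes the frequent closed itemsets, their minimal generators and their supports have already been mined, and emits the rules generator => (closure minus generator), which hold with confidence 1. The data structures and names are hypothetical; this is an illustration of the idea, not the paper's implementation.

```python
def exact_rule_basis(closed_itemsets):
    """closed_itemsets: iterable of (closure, generators, support) tuples,
    where closure is a frozenset of items, generators is a list of frozensets
    (minimal generators of the closure) and support is a float.
    Returns exact rules as (antecedent, consequent, support, confidence)."""
    rules = []
    for closure, generators, support in closed_itemsets:
        for g in generators:
            consequent = closure - g
            if consequent:  # skip trivial rules with an empty consequent
                rules.append((g, consequent, support, 1.0))  # confidence is 1
    return rules

# Toy example: the closed itemset {A, B, C} with minimal generator {A}
example = [(frozenset("ABC"), [frozenset("A")], 0.4)]
for antecedent, consequent, support, confidence in exact_rule_basis(example):
    print(sorted(antecedent), "=>", sorted(consequent), support, confidence)
```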

Relevance:

10.00%

Publisher:

Abstract:

Summary: Productivity, botanical composition and forage quality of legume-grass swards are important factors for successful arable farming in both organic and conventional farming systems. As these attributes can vary considerably within a field, a non-destructive method of detection while doing other tasks would facilitate a more targeted management of crops, forage and nutrients in the soil-plant-animal system. This study was undertaken to explore the potential of field spectral measurements for a non-destructive prediction of dry matter (DM) yield, legume proportion in the sward, metabolizable energy (ME), ash content, crude protein (CP) and acid detergent fiber (ADF) of legume-grass mixtures. Two experiments were conducted in a greenhouse under controlled conditions, which allowed spectral measurements to be collected free from interferences such as wind, passing clouds and changing angles of solar irradiation. In a second step, this initial investigation was evaluated in the field in a two-year experiment with the same legume-grass swards. Several techniques for analysing the hyperspectral data set were examined in this study: four vegetation indices (VIs), namely the simple ratio (SR), the normalized difference vegetation index (NDVI), the enhanced vegetation index (EVI) and the red edge position (REP); two-waveband reflectance ratios; modified partial least squares (MPLS) regression; and stepwise multiple linear regression (SMLR). The results showed the potential of field spectroscopy and proved its usefulness for the prediction of DM yield, ash content and CP across a wide range of legume proportions and growth stages. In all investigations, prediction accuracy of DM yield, ash content and CP could be improved by legume-specific calibrations which included mixtures and pure swards of perennial ryegrass and of the respective legume species. The comparison between the greenhouse and the field experiments showed that the interaction between spectral reflectance and weather conditions, as well as the incidence angle of light, interfered with an accurate determination of DM yield. Further research is hence needed to improve the validity of spectral measurements in the field. Furthermore, the developed models should be tested on varying sites and vegetation periods to enhance their robustness and portability to other environmental conditions.
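As a point of reference for the indices named above, the sketch below computes SR, NDVI and EVI from canopy reflectance values using their standard formulations (REP, which requires interpolating the red-edge inflection point, is omitted). The band values and the EVI coefficients follow common usage and are not taken from the thesis.

```python
import numpy as np

def simple_ratio(nir, red):
    return nir / red

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Standard MODIS-style coefficients; not taken from the thesis
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Hypothetical canopy reflectances (as fractions) for a legume-grass sward
nir, red, blue = np.array([0.45]), np.array([0.06]), np.array([0.04])
print("SR:", simple_ratio(nir, red))
print("NDVI:", ndvi(nir, red))
print("EVI:", evi(nir, red, blue))
```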

Relevance:

10.00%

Publisher:

Abstract:

Many plant strengtheners (PS) are promoted for their supposed effects on nutrient uptake and/or induced resistance (IR). In addition, many organic fertilizers are supposed to enhance plant health, and several studies have shown that tomatoes grown organically are more resistant to late blight, caused by Phytophthora infestans, than tomatoes grown conventionally. Much is known about the mechanisms underlying IR. In contrast, there is no systematic knowledge about genetic variation for IR. Therefore, the following questions were addressed in the presented dissertation: (i) Is there genetic variation among tomato genotypes for inducibility of resistance to P. infestans? (ii) How do different PS compare with the chemical inducer BABA in their ability to induce resistance? (iii) Does IR interact with the inducer used and with different organic fertilizers? A varietal screening showed that, contrary to the commonly held belief, IR in tomatoes is genotype and isolate specific. These results indicate that it should be possible to select for inducibility of resistance in tomato breeding. However, isolate specificity also suggests that there could be pathogen adaptation. The three tested PS as well as two of the three tested organic fertilisers all induced resistance in the tomatoes. Depending on the PS or BABA, variety and isolate effects varied. In contrast, there were no variety- or isolate-specific effects of the fertilisers and no interactions between the PS and the fertilisers. This suggests that the different PS should work independently of the soil substrate used. The results were markedly different, however, when isolate mixtures were used for challenge inoculations. Plants were generally less susceptible to isolate mixtures than to single isolates. In addition, the effectiveness of the PS was greater and more similar to BABA when isolate mixtures were used. The fact that the different PS and BABA differed in their ability to induce resistance in different host genotype-pathogen isolate combinations puts the usefulness of IR as a breeding goal in question, as it would result in varieties depending on specific inducers. The results with the isolate mixtures are highly relevant. On the one hand, they increase the effectiveness of the resistance inducers. On the other hand, measures that increase pathogen diversity, such as the use of diversified host populations, will also increase the overall resistance of the hosts. For organic tomato production the results indicate that it is possible to enhance the tomato growing system with respect to plant health management by using optimal fertilisers, plant strengtheners and any measures that increase system diversity.

Relevance:

10.00%

Publisher:

Abstract:

Collective action has been used as a strategy to improve the benefits of smallholder producers of kola nuts in Cameroon. Despite demonstrated benefits, not all producers are involved in collective action. The presented study used a modified Technology Acceptance Model (TAM), namely the Collective Action Behaviour model (CAB model), to analyse kola producers' motivation for collective action activities. Five hypotheses are formulated and tested using data obtained from 185 farmers involved in kola production and marketing in the Western highlands of Cameroon. Results, generated using the Partial Least Squares (PLS) approach to Structural Equation Modelling (SEM), showed that farmers' intrinsic motivators and perceived ease of use influenced their behavioural intent to join group marketing activities. Perceived usefulness, which was mainly related to the economic benefits of group activities, did not influence farmers' behavioural intent. It is therefore concluded that extension messages and promotional activities targeting collective action need to emphasise the perceived ease of involvement and the social benefits associated with group activities in order to increase farmers' participation.
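As a rough illustration of how such hypothesised paths can be screened, the sketch below regresses behavioural intent on the three predictor constructs with ordinary least squares. This is a simplified stand-in for the PLS-SEM analysis actually used in the study; the variable names and simulated data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated responses for 185 farmers; columns mirror the constructs in the
# text, but the data are entirely hypothetical.
rng = np.random.default_rng(0)
n = 185
df = pd.DataFrame({
    "ease_of_use": rng.normal(size=n),
    "intrinsic_motivation": rng.normal(size=n),
    "perceived_usefulness": rng.normal(size=n),
})
df["behavioural_intent"] = (0.5 * df["ease_of_use"]
                            + 0.4 * df["intrinsic_motivation"]
                            + rng.normal(scale=0.5, size=n))

# Ordinary least squares as a simplified stand-in for the PLS-SEM paths
X = sm.add_constant(df[["ease_of_use", "intrinsic_motivation",
                        "perceived_usefulness"]])
print(sm.OLS(df["behavioural_intent"], X).fit().summary())
```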

Relevance:

10.00%

Publisher:

Abstract:

The ongoing growth of the World Wide Web, catalyzed by the increasing possibility of ubiquitous access via a variety of devices, continues to strengthen its role as our prevalent information and communication medium. However, although tools like search engines facilitate retrieval, the task of finally making sense of Web content is still often left to human interpretation. The vision of supporting both humans and machines in such knowledge-based activities led to the development of different systems which allow Web resources to be structured by metadata annotations. Interestingly, two major approaches which gained a considerable amount of attention address the problem from nearly opposite directions: on the one hand, the idea of the Semantic Web suggests formalizing the knowledge within a particular domain by means of the "top-down" approach of defining ontologies; on the other hand, Social Annotation Systems, as part of the so-called Web 2.0 movement, implement a "bottom-up" style of categorization using arbitrary keywords. Experience as well as research into the characteristics of both systems has shown that their strengths and weaknesses seem to be inverse: while Social Annotation suffers from problems such as ambiguity or lack of precision, ontologies were especially designed to eliminate those; ontologies, in turn, suffer from a knowledge acquisition bottleneck, which is successfully overcome by the large user populations of Social Annotation Systems. Instead of being regarded as competing paradigms, the obvious potential synergies from a combination of both motivated approaches to "bridge the gap" between them. These were fostered by the evidence of emergent semantics, i.e., the self-organized evolution of implicit conceptual structures, within Social Annotation data. While several techniques to exploit the emergent patterns have been proposed, a systematic analysis - especially regarding paradigms from the field of ontology learning - is still largely missing. This also includes a deeper understanding of the circumstances which affect the evolution processes. This work aims to address this gap by providing an in-depth study of methods and influencing factors for capturing emergent semantics from Social Annotation Systems. We focus hereby on the acquisition of lexical semantics from the underlying networks of keywords, users and resources. Structured along different ontology learning tasks, we use a methodology of semantic grounding to characterize and evaluate the semantic relations captured by different methods. In all cases, our studies are based on datasets from several Social Annotation Systems. Specifically, we first analyze semantic relatedness among keywords and identify measures which detect different notions of relatedness. These constitute the input of concept learning algorithms, which then focus on the discovery of synonymous and ambiguous keywords. Hereby, we assess the usefulness of various clustering techniques. As a prerequisite to inducing hierarchical relationships, our next step is to study measures which quantify the level of generality of a particular keyword. We find that comparatively simple measures can approximate the generality information encoded in reference taxonomies. These insights are used to inform the final task, namely the creation of concept hierarchies. For this purpose, generality-based algorithms exhibit advantages compared to clustering approaches.
To complement the identification of suitable methods to capture semantic structures, we then analyze several factors which influence their emergence. Empirical evidence is provided that the amount of available data plays a crucial role in determining keyword meanings. From a different perspective, we examine pragmatic aspects by considering different annotation patterns among users. Based on a broad distinction between "categorizers" and "describers", we find that the latter produce more accurate results. This suggests a causal link between pragmatic and semantic aspects of keyword annotation. As a special kind of usage pattern, we then look at system abuse and spam. While observing a mixed picture, we suggest that an individual decision should be taken instead of disregarding spammers as a matter of principle. Finally, we discuss a set of applications which operationalize the results of our studies for enhancing both Social Annotation and semantic systems. These comprise, on the one hand, tools which foster the emergence of semantics, and, on the other hand, applications which exploit the socially induced relations to improve, e.g., searching, browsing, or user profiling facilities. In summary, the contributions of this work highlight viable methods and crucial aspects for designing enhanced knowledge-based services for a Social Semantic Web.
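One concrete example of the kind of relatedness measure studied in the first task is cosine similarity over tag co-occurrence vectors. The sketch below builds such a measure from a tiny, hypothetical folksonomy; it illustrates the general idea only and is not a method taken from the thesis.

```python
import numpy as np

# A tiny, hypothetical folksonomy: each set is the tag assignment of one resource
posts = [
    {"web", "semantic", "ontology"},
    {"web", "html", "css"},
    {"semantic", "ontology", "rdf"},
    {"web", "semantic"},
]
tags = sorted(set().union(*posts))
index = {t: i for i, t in enumerate(tags)}

# Co-occurrence counts: how often two tags are assigned to the same resource
co = np.zeros((len(tags), len(tags)))
for post in posts:
    for a in post:
        for b in post:
            if a != b:
                co[index[a], index[b]] += 1

def relatedness(a, b):
    """Cosine similarity between the co-occurrence vectors of two tags."""
    va, vb = co[index[a]], co[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

print(relatedness("semantic", "ontology"), relatedness("semantic", "css"))
```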

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents a study of changes in the governance of higher education in Vietnam. The central aim of this research is to investigate the origin of, and changes in, the power relationship between the Vietnamese state and the higher education institutions (HEIs), which results mainly from the interaction of these two actors. The power of both actors is socially constructed and is determined mainly by their usefulness and their contributions to higher education. This work focuses in particular on the aspect of teaching quality. The study adopts a general governance perspective to examine the relationship between the state and the HEIs, and it uses Resource Dependence Theory (RDT) to analyse the behaviour of the HEIs in response to a changing environment characterized by policy shifts and declining funding. Through an empirical investigation of government policy as well as of the internal governance and practices of four leading universities, the study concludes that, under the pressure to generate income, Vietnamese universities have developed both strategies and tactics to control resource flows and legitimacy. The decision-making and goal-setting of the committees, which consist of a majority of academics, are more powerful than those of the managers; university initiatives therefore largely involve academics. Based on the evolving patterns of resource contributions by academics and students to higher education, the study predicts an emerging governance configuration in which the dimensions of academic self-governance and the competitive market grow stronger and state regulation increases in a rational way. The country's current institutional design and administrative system, the specific weighting and the coordination mechanisms, also referred to as an effective supervisory system between the three key actors - the state, the HEIs/academics and the students - will need a long time to be identified and established. In the current phase of searching for such a system, the government should strengthen management tools such as accreditation, reward-based and market-based instruments, and information-based decision-making. In addition, it is necessary to increase policy transparency and to disclose more information.

Relevance:

10.00%

Publisher:

Abstract:

Computational models are arising in which programs are constructed by specifying large networks of very simple computational devices. Although such models can potentially make use of a massive amount of concurrency, their usefulness as a programming model for the design of complex systems will ultimately be decided by the ease with which such networks can be programmed (constructed). This thesis outlines a language for specifying computational networks. The language (AFL-1) consists of a set of primitives and a mechanism for grouping these elements into higher-level structures. An implementation of this language runs on the Thinking Machines Corporation Connection Machine. Two significant examples were programmed in the language: an expert system (CIS) and a planning system (AFPLAN). These systems are explained and analyzed in terms of how they compare with similar systems written in conventional languages.

Relevance:

10.00%

Publisher:

Abstract:

The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternate linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.

Relevance:

10.00%

Publisher:

Abstract:

There are numerous text documents available in electronic form, and more become available every day. Such documents represent a massive amount of information that is easily accessible. Seeking value in this huge collection requires organization; much of the work of organizing documents can be automated through text classification. The accuracy and our understanding of such systems greatly influence their usefulness. In this paper, we seek 1) to advance the understanding of commonly used text classification techniques, and 2) through that understanding, to improve the tools that are available for text classification. We begin by clarifying the assumptions made in the derivation of Naive Bayes, noting basic properties and proposing ways for its extension and improvement. Next, we investigate the quality of Naive Bayes parameter estimates and their impact on classification. Our analysis leads to a theorem which gives an explanation for the improvements that can be found in multiclass classification with Naive Bayes using Error-Correcting Output Codes. We use experimental evidence on two commonly used data sets to exhibit an application of the theorem. Finally, we show fundamental flaws in a commonly used feature selection algorithm and develop a statistics-based framework for text feature selection. Greater understanding of Naive Bayes and the properties of text allows us to make better use of it in text classification.
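The combination discussed above, multinomial Naive Bayes with Error-Correcting Output Codes for the multiclass case, can be sketched in a few lines with scikit-learn. The toy corpus and parameters below are illustrative and are not the paper's data or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OutputCodeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy three-class corpus; a real evaluation would use standard benchmark data
docs = ["cheap pills buy now", "win free money now", "exclusive offer inside",
        "meeting agenda attached", "project deadline tomorrow", "minutes of the meeting",
        "league results last night", "transfer rumours and match report", "cup final preview"]
labels = ["spam", "spam", "spam", "work", "work", "work", "sport", "sport", "sport"]

# Bag-of-words counts -> Naive Bayes wrapped in error-correcting output codes
model = make_pipeline(
    CountVectorizer(),
    OutputCodeClassifier(MultinomialNB(), code_size=2, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["free pills offer now", "agenda for the project meeting"]))
```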

Relevance:

10.00%

Publisher:

Abstract:

If we are to understand how we can build machines capable of broad-purpose learning and reasoning, we must first aim to build systems that can represent, acquire, and reason about the kinds of commonsense knowledge that we humans have about the world. This endeavor suggests steps such as identifying the kinds of knowledge people commonly have about the world, constructing suitable knowledge representations, and exploring the mechanisms that people use to make judgments about the everyday world. In this work, I contribute to these goals by proposing an architecture for a system that can learn commonsense knowledge about the properties and behavior of objects in the world. The architecture described here augments previous machine learning systems in four ways: (1) it relies on a seven-dimensional notion of context, built from information recently given to the system, to learn and reason about objects' properties; (2) it has multiple methods that it can use to reason about objects, so that when one method fails, it can fall back on others; (3) it illustrates the usefulness of reasoning about objects by thinking about their similarity to other, better-known objects, and by inferring properties of objects from the categories that they belong to; and (4) it represents an attempt to build an autonomous learner and reasoner that sets its own goals for learning about the world and deduces new facts by reflecting on its acquired knowledge. This thesis describes this architecture, as well as a first implementation, which can learn from sentences such as "A blue bird flew to the tree" and "The small bird flew to the cage" that birds can fly. One of the main contributions of this work lies in suggesting a further set of salient ideas about how we can build broader-purpose commonsense artificial learners and reasoners.
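As a toy illustration of the kind of inference mentioned in the closing example, and not the architecture proposed in the thesis, the sketch below accumulates (category, action) observations from simple sentences and surfaces a generic property once it has been seen repeatedly. The grammar and the threshold are arbitrary choices made for the sketch.

```python
from collections import Counter
import re

observations = Counter()

def observe(sentence):
    # Matches simple sentences like "A blue bird flew to the tree"
    m = re.match(r"(?:a|the)\s+\w+\s+(\w+)\s+(flew|ran|swam)\b", sentence, re.I)
    if m:
        category, verb = m.group(1).lower(), m.group(2).lower()
        observations[(category, verb)] += 1

def generic_properties(min_count=2):
    # Turn repeated observations into generic statements about a category
    verb_to_ability = {"flew": "fly", "ran": "run", "swam": "swim"}
    return [f"{cat}s can {verb_to_ability[verb]}"
            for (cat, verb), n in observations.items() if n >= min_count]

observe("A blue bird flew to the tree")
observe("The small bird flew to the cage")
print(generic_properties())   # -> ['birds can fly']
```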

Relevance:

10.00%

Publisher:

Abstract:

While protein microarray technology has successfully demonstrated its usefulness for large-scale, high-throughput proteome profiling, the performance of antibody/antigen microarrays has been only moderate. Immobilization of either the capture antibodies or the protein samples on solid supports has severe drawbacks. Denaturation of the immobilized proteins as well as inconsistent orientation of antibodies/ligands on the arrays can lead to erroneous results. This has prompted a number of studies to address these challenges by immobilizing proteins on biocompatible surfaces, which has met with limited success. Our strategy relates to a multiplexed, sensitive and high-throughput method for the screening and quantification of intracellular signalling proteins from a complex mixture of proteins. Each signalling protein to be monitored has its capture moiety linked to a specific oligo 'tag'. The array involves the oligonucleotide hybridization-directed localization and identification of different signalling proteins simultaneously, in a rapid and easy manner. Antibodies have been used as the capture moieties for specific identification of each signalling protein. The method involves covalently partnering each antibody/protein molecule with a unique DNA or DNA-derivative oligonucleotide tag that directs the antibody to a unique site on the microarray through specific hybridization with a complementary tag-probe on the array. Particular surface modifications and optimal conditions allowed a high signal-to-noise ratio, which is essential to the success of this approach.

Relevance:

10.00%

Publisher:

Abstract:

Different ways in which social influence can affect young people's heterosexual HIV-prevention behaviours are reviewed, and results are presented from several studies by the authors, as well as by other researchers, that analyse these relationships. The review concludes by highlighting: 1) the clinical usefulness of assessing self-efficacy expectations in order to intervene specifically in those areas in which young people perceive themselves as less capable of acting preventively; 2) the observed relationship between self-reported condom use and the belief that it is accepted by one's closest social referents; and 3) the importance of young people having sufficient communication skills to negotiate condom use successfully and to help them counteract possible social influences against its use.

Relevance:

10.00%

Publisher:

Abstract:

INTRODUCTION: Chest pain is one of the main reasons for consultation in emergency and cardiology departments, and classifying patients with a diagnostic tool that is sufficiently sensitive and specific to establish risk and prognosis remains a challenge. The close relationship between atherosclerotic disease and inflammation has drawn attention to the role of plasma inflammatory markers as predictors of the risk of cardiovascular events. C-reactive protein (CRP) has been widely studied in patients with cardiovascular risk factors and acute coronary events, but its behaviour in patients with chest pain of intermediate probability is unknown. OBJECTIVES: To determine the usefulness and behaviour of C-reactive protein in patients with chest pain of intermediate probability for coronary syndrome. MATERIALS AND METHODS: This study was carried out between June 2008 and February 2009 at a cardiology referral institution (Fundación Cardio Infantil, Bogotá, Colombia). Patients with a normal or non-diagnostic ECG and negative markers of myocardial injury were studied. Patients continued their work-up according to international recommendations and guidelines for chest pain. We measured CRP twice: one CRP within 12 hours of chest-pain onset and another CRP more than 18 hours after chest-pain onset; the difference between these two values (18-hour CRP minus baseline CRP) was also calculated. With these three results, statistical analysis was performed to obtain sensitivity, specificity, positive predictive value and negative predictive value, compared against ischaemia provocation tests and catheterization. RESULTS: A total of 203 patients were analysed. Mean age was 60.8 ± 11 years, and the two genders were distributed without significant differences. The associated risk factors were: arterial hypertension 76% (n=155), dyslipidaemia 68.1% (n=139), diabetes mellitus 20.6% (n=42), obesity 7.4% (n=15) and smoking 9.3% (n=19). A total of 66 catheterizations were performed: normal in 27% (n=18), non-significant lesions in 25.8% (n=17) and obstructive lesions in 47% (n=31). CRP showed low diagnostic usefulness; the 18-hour CRP was the best diagnostic test, with the best area under the ROC curve, 0.74 (CI 0.64-0.83), a sensitivity of 16.13% (95% CI 1.57-30.69), a specificity of 98.26% (95% CI 96.01-100) and a negative predictive value of 86.67% (95% CI 81.64-91.69). At 30-day follow-up, no new hospitalizations for cardiovascular causes were found. CONCLUSIONS: Our study shows low diagnostic usefulness of CRP in chest pain of intermediate probability for coronary disease. The best diagnostic performance was found for the 18-hour CRP, with high specificity and a high negative predictive value for CRP values > 3 mg/dl; the usefulness of the baseline CRP and of the CRP difference was lower. These findings did not correlate with previous studies. A CRP cut-off point different from the existing ones could not be established because of the variability of CRP in the study population. The limitations found in our study make a multicentre study necessary.
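For reference, the reported metrics are computed from a 2x2 table of test result versus confirmed disease, as in the sketch below; the counts used here are hypothetical and are not the study's data.

```python
# Hypothetical 2x2 table: true positives, false positives, false negatives, true negatives
tp, fp, fn, tn = 20, 5, 10, 65

sensitivity = tp / (tp + fn)   # proportion of diseased patients with a positive test
specificity = tn / (tn + fp)   # proportion of non-diseased patients with a negative test
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f}")
```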

Relevance:

10.00%

Publisher:

Abstract:

Background: the incidence of early-onset neonatal sepsis in preterm infants is 40% in our setting, with mortality in 15-50% of cases; this increases exponentially with prematurity and is associated with neurological and pulmonary sequelae. To date, no definitive conclusions have been reached regarding the best diagnostic test in this field, so it is vitally important to find the best method for early detection. Objective: to evaluate the usefulness of placental histopathology in the diagnosis of early-onset neonatal sepsis in newborns of less than 36 weeks' gestational age at the Fundación Santa Fe de Bogotá. Methods: a diagnostic test study describing the results obtained from the analysis of the placental histopathology of preterm newborns with suspected early-onset sepsis, compared with the diagnosis of proven early-onset sepsis. Results: 114 patients with risk factors for early-onset sepsis were analysed, of whom 74 were diagnosed with probable sepsis, confirmed in five cases. Placental histopathology thus showed a sensitivity of 100% (95% CI 90%-100%), a specificity of 78.9% (95% CI 70.7-87.0), a validity index of 79.8% (95% CI 72%-87.6%), a PPV of 17.7% (95% CI 1.89%-33.8%) and an NPV of 100% (95% CI 98.7%-100%). Conclusion: histological chorioamnionitis is a relevant biomarker for the timely diagnosis of proven early-onset sepsis in preterm newborns, offering a non-invasive diagnostic aid with definitive results within 24 hours.

Relevance:

10.00%

Publisher:

Abstract:

Purpose of the review: Parkinson's disease (PD) is a degenerative disorder clinically characterized by resting tremor, rigidity and bradykinesia. The purpose is to determine the usefulness of the TRODAT-1 molecule in the diagnosis of PD. Methods: a search was carried out in the PUBMED, COCHRANE, MEDLINE, LILACS and SCIELO databases over a 10-year period from January 1998 to January 2008. Twenty-six articles were retrieved; these were analysed and 10 articles were selected, of which only 6 met the needs of the study according to the inclusion criteria. Of the 6 articles analysed, 4 were classified as grade (+) evidence and the remaining 2 as grade (-) evidence according to the NICE guidelines. All the articles reviewed report a marked decrease in striatal TRODAT-1 uptake, as well as its usefulness in the diagnosis of PD at early stages, its low cost and its safety. Only three report sensitivity and specificity values, but their level of quality does not allow a comparison between them. Conclusions: diagnostic test studies compared against the clinical diagnosis of the disease are proposed, with agreement on how to perform the semiquantitative measurements of the uptake units, using the same formulas so that they are comparable.