996 results for Frankfurt-type examples
Abstract:
OBJECTIVE: To determine the utility of B-type natriuretic peptide (BNP) in the diagnosis of congestive heart failure (CHF) in patients presenting with dyspnea to an emergency department (ED). METHODS: Seventy patients presenting with dyspnea to an ED from April to July 2001 were included in the study. Mean age was 72±16 years and 33 (47%) were male. BNP was measured in all patients at the moment of admission to the ED. Emergency-care physicians, blinded to BNP values, were required to assign a probable initial diagnosis. A cardiologist retrospectively reviewed the data (blinded to BNP measurements) and assigned a definite diagnosis, which was considered the gold standard for assessing the diagnostic performance of BNP. RESULTS: The mean BNP concentration was higher in patients with CHF (n=36) than in those with other diagnoses (990±550 vs 80±67 pg/mL, p<0.0001). Patients with systolic dysfunction had higher BNP levels than those with preserved systolic function (1,180±641 vs 753±437 pg/mL, p=0.03). At a blood concentration of 200 pg/mL, BNP showed a sensitivity of 100%, specificity of 97.1%, positive predictive value of 97.3%, and negative predictive value of 100%. The application of BNP could have potentially corrected all 16 cases in which the diagnosis was missed by the emergency department physician. CONCLUSION: BNP measurement is a useful tool in the diagnosis of CHF in patients presenting to the ED with dyspnea.
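As a hedged illustration of how the reported diagnostic indices follow from a 2×2 confusion matrix, the sketch below uses counts reconstructed to be consistent with the percentages in the abstract; they are not the study's raw data.

```python
# Hypothetical illustration: diagnostic indices of a biomarker cutoff.
# The counts below are reconstructed for demonstration, NOT taken from the paper.

def diagnostic_indices(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # true positives among all diseased
    specificity = tn / (tn + fp)      # true negatives among all non-diseased
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    return sensitivity, specificity, ppv, npv

# Example: 36 CHF cases all above the cutoff (tp=36, fn=0),
# 33 of 34 non-CHF cases below it (tn=33, fp=1).
sens, spec, ppv, npv = diagnostic_indices(tp=36, fp=1, fn=0, tn=33)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} ppv={ppv:.3f} npv={npv:.3f}")
```

With these counts the four indices come out at 100%, 97.1%, 97.3% and 100%, matching the values reported at the 200 pg/mL cutoff.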
Abstract:
We propose to detect the presence of the etiological agent porcine circovirus (PCV) in pigs with diarrhea. PCV is considered an important emerging pathogen associated with various porcine syndromes and diseases. It was first reported internationally in 1997; in Argentina it was described in 2003 in extensively reared animals, although it occurs across the different production systems. Its presence causes major economic losses, since it often produces no clinical signs and instead manifests subclinically. In the province of Córdoba no systematic studies have been carried out to determine the presence of circovirus. Moreover, in animals presenting with diarrhea, no survey has been conducted on the presence of PCV and its association with the clinical picture. Several researchers see the need to include PCV among the candidate etiological diagnoses, particularly in herds that not only show inefficient production indices but also fail to respond to antimicrobial therapy. PCV is the causal agent of Postweaning Multisystemic Wasting Syndrome and could favor the action of other pathogens acting on both the respiratory and the digestive system. The complexity of the work lies in demonstrating the above through the lesions observed in tissues and the determination of the presence of the agent. Carrying out this study would be an important contribution to pig production, in which this agent causes substantial economic losses. The methodology, as well as the results obtained, will be useful in training human resources for research and will be transferred directly to teaching. Hypothesis: PCV is present in pigs with clinical signs of enteropathy. Objectives: to identify, in histopathological sections of intestine and mesenteric lymph node, lesions compatible with PCV; to determine by immunohistochemistry the presence of PCV antigen in the samples; to identify the cells in which the antigen is found. Materials and methods: field sampling and specimen collection. On different pig farms, 3 animals presenting the clinical picture of diarrhea will be chosen at random for slaughter and necropsy. Samples of small intestine and mesenteric lymph node will be taken and fixed in formalin; histopathology and its reading will aim to find lesions compatible with porcine circovirus type II; immunohistochemistry with a monoclonal antibody will be performed to detect PCV in the tissues; interpretation of results; publication of the results.
Abstract:
Within the framework of the recovery of memory concerning the events of the last military dictatorship, it is important to determine the ideological-theological and practical motives that hindered significant opposition by the Church hierarchy to the violation of human rights, and to identify the arguments that drove a discourse and praxis of reconciliation that privileged the forgetting of the victims and uncritically supported the "projects of oblivion", such as the "full stop" law, among others. To analyze these discourses and practices, the project draws mainly on Johann Metz who, linked to the Frankfurt School, proposes an anamnestic reason attentive to the suffering of others. The originality of the project is twofold, in its content and in its approach: the confrontation of the Church's "service of reconciliation" with the "memory of the victims". Working hypothesis: the ecclesial discourse and praxis concerning the "service of reconciliation" carried out by the Argentine episcopate from 1981 onward reveal, first, that they remained tied to the idea of a "Catholic nation" (Zanatta 1996, Dri 1997, Esquivel 2004), which, together with other factors, hindered the visibility of the victims; and second, when analyzed in the light of the philosophical-theological contributions mentioned, they show a notable deficiency in valuing the memory of the victims, something to be expected in a genuine reconciliation. General objective: to carry out a critical analysis of the official institutional discourses and practices of the Catholic Church in Argentina in relation to the memory of the victims of the last military dictatorship.
Specific objectives: to confront recent Argentine ecclesial experiences, and their conceptualizations and types of argumentation, with a tradition of thought that, in relation to the Holocaust, places at the center of reflection themes such as memory, the suffering of the victims, and a particular way of treating historical events; and further, to identify and analyze the arguments that hindered the pursuit of justice and the memory of the victims. Methodology and stages. Stage 1: analyze and systematize some aspects of the theories of historical knowledge and of communicative reason in selected works of Benjamin, Bloch and Habermas; then specify the conceptual appropriation of the historical-philosophical categories of these currents carried out by Metz in elaborating his "memory of the victims". Stage 2: review ecclesial discourse and praxis from 1981 onward in the light of the theoretical framework already studied. It will be necessary, on the one hand, to examine the official ecclesial declarations concerning the return of democracy and the "full stop" and "due obedience" laws, as well as the recognition of, and request for forgiveness for, the faults of the past.
Abstract:
Concurrent programming remains a difficult task even for very experienced programmers. Concurrency research has provided a rich set of tools and mechanisms for dealing with data races and deadlocks, problems that arise from the incorrect use of synchronization. Verifying interesting properties of concurrent programs is considerably harder than for sequential programs because of the intrinsically non-deterministic nature of concurrent execution, which results in an explosion in the number of possible program states and makes manual treatment almost impossible, and even computer-assisted treatment a serious challenge. Some approaches create programming languages with higher-level abstractions for expressing concurrency and synchronization. Others develop reasoning methods to prove properties, using general theorem provers, model checking, or specific algorithms over a given type system. Lightweight static analysis approaches apply techniques such as abstract interpretation to detect certain kinds of errors in a conservative way; these techniques generally scale well enough to be applied to large software projects, but the kinds of errors they can detect are limited. Some interesting properties concern data races and deadlocks, while others concern security problems such as confidentiality and integrity of data. The main goals of this proposal are to identify properties of interest to verify in concurrent systems and to develop techniques and tools to carry out the verification fully automatically.
The main approach will be the application of type systems such as dependent types, type-and-effect systems, and flow-sensitive effect types. These type systems will be applied to models of concurrent programming such as Simple Concurrent Object-Oriented Programming (SCOOP) and Java. Further goals include the analysis of security properties, also using specific type systems.
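As an illustrative sketch (not part of the proposal itself), the kind of synchronization error these verification techniques target can be made concrete: without the lock below, the two threads race on the shared counter and the final value becomes non-deterministic.

```python
import threading

# Illustrative sketch: the classic data race that race-freedom analyses aim to
# rule out, here avoided with a lock so the result is deterministic.

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:              # without this lock, counter += 1 is a data race
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # deterministically 200000 only because of the lock
```

A type-and-effect system of the kind the proposal describes would reject the unlocked variant statically, instead of relying on testing to expose the race.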
Abstract:
This project concerns the use of formal methods (more precisely, type theory) to guarantee the absence of errors in programs. It follows three directions. 1) The design of new type-checking algorithms based on the idea of normalization by evaluation, extensible to other type systems. In the near future we will extend results we have recently obtained [16,17] in order to get: a simplification of the work for systems without the eta rule (two systems will be studied: à la Martin-Löf and à la PTS); the formulation of these checkers for systems with variables (in addition to the systems of [16,17], which are à la de Bruijn); a generalization of the notion of category with families used to give semantics to type theory; a categorical formulation of the notion of normalization by evaluation; and finally the application of these algorithms to systems with rewrite rules. For the first of these expected results, our method will be to adapt the proofs of [16,17] to the new systems. The importance of this direction is that it will make proof assistants based on type theory more automatable and therefore easier to use. 2) The use of type theory to certify compilers, pursuing the never-explored proposal of [22] to use an abstract approach based on functor categories. The method will be to certify the language "Peal" [29] in type theory and then successively add functionality until a correct compiler for the language Forsythe [23] is obtained; in this period we expect to add several extensions. The importance of this direction is that only a certified compiler guarantees that a correct source program is compiled to a correct object program, which is crucial for any verification process based on verifying source code. 3) The formalization of systems with session types. Several of their formulations have been shown to be faulty [30], which makes their formalization seem worthwhile. During this project we expect to produce a formalization that yields a type-checking algorithm and proofs of the usual properties of these systems. The contribution is to shed some light on formulations whose errors reveal that the topic has not yet reached sufficient maturity or understanding in the community.
Abstract:
This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, component orientation, component error, and component weld. The research was conducted in three directions: image acquisition, image filtering/recognition, and software development. Image acquisition studied the process of forming and digitizing images and some fundamental aspects of human visual perception. The importance of choosing the right camera and illumination system for a given type of problem is highlighted. Probably the most important step towards image recognition is image filtering. The filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalization, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters, and image compression are some examples of the filters that have been studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform filtering, build libraries, process images, and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
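A minimal sketch of one of the filters mentioned, 2-D convolution, assuming a grayscale image stored as a nested list of intensities; this is illustrative only, not the project's actual implementation.

```python
# Naive 2-D convolution over a grayscale image ("valid" output, no padding).
# Illustrative sketch; production code would use an optimized library routine.

def convolve2d(image, kernel):
    """Convolve a grayscale image (list of rows) with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i + u][j + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging (blur) kernel applied to a tiny test image.
blur = [[1 / 9] * 3 for _ in range(3)]
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(convolve2d(img, blur))  # 2x2 grid of local means (each ~4.0)
```

Swapping the kernel (e.g. for a Sobel mask) turns the same loop into an edge detector, which is why convolution underlies several of the filters listed above.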
Abstract:
Based on data from the complex monitoring in Tbilisi during 2009-2010 of light ions, radon, and the intensity of the neutron component of galactic cosmic rays, an inverse relationship was revealed between the intensity of ionizing radiation and the content of light ions in the atmosphere.
Resistance Exercise Restores Endothelial Function and Reduces Blood Pressure in Type 1 Diabetic Rats
Abstract:
Background: Resistance exercise effects on cardiovascular parameters are not consistent. Objectives: The effects of resistance exercise on changes in blood glucose, blood pressure and vascular reactivity were evaluated in diabetic rats. Methods: Wistar rats were divided into three groups: control group (n = 8); sedentary diabetic (n = 8); and trained diabetic (n = 8). Resistance exercise was carried out in a squat device for rats and consisted of three sets of ten repetitions with an intensity of 50%, three times per week, for eight weeks. Changes in vascular reactivity were evaluated in superior mesenteric artery rings. Results: A significant reduction in the maximum response of acetylcholine-induced relaxation was observed in the sedentary diabetic group (78.1 ± 2%) and an increase in the trained diabetic group (95 ± 3%) without changing potency. In the presence of NG-nitro-L-arginine methyl ester, the acetylcholine-induced relaxation was significantly reduced in the control and trained diabetic groups, but not in the sedentary diabetic group. Furthermore, a significant increase (p < 0.05) in mean arterial blood pressure was observed in the sedentary diabetic group (104.9 ± 5 to 126.7 ± 5 mmHg) as compared to that in the control group. However, the trained diabetic group showed a significant decrease (p < 0.05) in the mean arterial blood pressure levels (126.7 ± 5 to 105.1 ± 4 mmHg) as compared to the sedentary diabetic group. Conclusions: Resistance exercise could restore endothelial function and prevent an increase in arterial blood pressure in type 1 diabetic rats.
Functional Vascular Study in Hypertensive Subjects with Type 2 Diabetes Using Losartan or Amlodipine
Abstract:
Background: Antihypertensive drugs are used to control blood pressure (BP) and reduce macro- and microvascular complications in hypertensive patients with diabetes. Objectives: The present study aimed to compare the functional vascular changes in hypertensive patients with type 2 diabetes mellitus after 6 weeks of treatment with amlodipine or losartan. Methods: Patients with a previous diagnosis of hypertension and type 2 diabetes mellitus were randomly divided into 2 groups and evaluated after 6 weeks of treatment with amlodipine (5 mg/day) or losartan (100 mg/day). Patient evaluation included BP measurement, ambulatory BP monitoring, and assessment of vascular parameters using applanation tonometry, pulse wave velocity (PWV), and flow-mediated dilation (FMD) of the brachial artery. Results: A total of 42 patients were evaluated (21 in each group), with a predominance of women (71%) in both groups. The mean age of the patients in both groups was similar (amlodipine group: 54.9 ± 4.5 years; losartan group: 54.0 ± 6.9 years), with no significant difference in the mean BP [amlodipine group: 145 ± 14 mmHg (systolic) and 84 ± 8 mmHg (diastolic); losartan group: 153 ± 19 mmHg (systolic) and 90 ± 9 mmHg (diastolic)]. The augmentation index (30% ± 9% and 36% ± 8%, p = 0.025) and augmentation pressure (16 ± 6 mmHg and 20 ± 8 mmHg, p = 0.045) were lower in the amlodipine group when compared with the losartan group. PWV and FMD were similar in both groups. Conclusions: Hypertensive patients with type 2 diabetes mellitus treated with amlodipine exhibited an improved pattern of pulse wave reflection in comparison with those treated with losartan. However, the use of losartan may be associated with independent vascular reactivity to the pressor effect.
Abstract:
Visualistics, computer science, picture syntax, picture semantics, picture pragmatics, interactive pictures
Effects of PDE type 5 inhibitors on Left Ventricular Diastolic Dysfunction in Resistant Hypertension
Abstract:
Resistant hypertension (RHTN) is a multifactorial disease characterized by blood pressure (BP) levels above goal (140/90 mmHg) in spite of the concurrent use of three or more antihypertensive drugs of different classes. Moreover, it is well known that RHTN subjects have a high prevalence of left ventricular diastolic dysfunction (LVDD), which leads to an increased risk of heart failure progression. This review gathers data from studies evaluating the effects of phosphodiesterase-5 (PDE-5) inhibitors (acute sildenafil and short-term tadalafil administration) on diastolic function and on biochemical and hemodynamic parameters in patients with RHTN. An acute study with sildenafil found that PDE-5 inhibition improved hemodynamic parameters and diastolic relaxation. In addition, a short-term study with tadalafil demonstrated improvement of LVDD and of cGMP and BNP-32 levels, regardless of BP reduction. No changes in endothelial function were observed in the studies. The findings of the acute and short-term studies reveal potential therapeutic effects of PDE-5 inhibitors on LVDD in RHTN patients.
Abstract:
Magdeburg, Univ., Faculty of Mathematics, Diss., 2010
Abstract:
The general properties of Poisson distributions and their relations to the binomial distributions are discussed. Two methods of statistical analysis are dealt with in detail. The first is the χ² test. In order to carry out the χ² test, the mean frequency and the theoretical frequencies for all classes are calculated. Then the observed and the calculated frequencies are compared, using the well-known formula: χ² = Σ (f_obs − f_exp)² / f_exp. When the expected frequencies are small, one must not forget that χ² may only be calculated if the expected frequencies are bigger than 5; if smaller values occur, the frequencies of neighbouring classes must be pooled. The second test, reintroduced by Brieger, consists in comparing the observed and expected standard errors of the series. The observed error is calculated by the general formula σ = √( Σ f·v² / (n−1) ), where v is the deviation of each class from the mean and n represents the number of cases. The theoretical error of a Poisson series with mean frequency m is always ±√m. These two values may be compared either by dividing the observed by the theoretical error and using Brieger's tables, or by dividing the respective variances and using Snedecor's tables for F. The degrees of freedom for the observed error are one less than the number of cases studied; those of the theoretical error are always infinite. In carrying out these tests, one important point must never be overlooked: the value for the first class, even if no concrete cases of the type were observed, must always be zero, and the values of the subsequent classes must be 1, 2, 3, etc. This is easily seen in some of the classical examples. For instance, in Bortkiewicz's example of accidents in Prussian army corps, the classes are: no, one, two, etc., accidents. When counting the frequency of bacteria, these values are: no, one, two, etc., bacteria or bacterial cultures. In studies of plant diseases the frequencies are likewise: no, one, two, etc., diseased plants. However, more complicated cases may occur.
For instance, when analysing the degree of polyembryony, the case of "no polyembryony" frequently corresponds to the occurrence of one embryo per seed. Thus the classes are not: no, one, etc., embryos per seed, but rather: no additional embryo, one additional embryo, etc., per seed with at least one embryo. Another interesting case was found by Brieger in genetic studies on the number of rows in maize. Here the minimum class is of course not: no rows, but: no additional rows beyond eight. The next class is not: nine rows, but: ten rows, since the row number always varies in pairs of rows. Thus the values of the successive classes are: no additional pair of rows beyond 8, one additional pair (or 10 rows), two additional pairs (or 12 rows), etc. The application of the methods is finally shown with three examples: the number of seeds per fruit in the oranges "Natal" and "Coco" and in "Calamondin". As shown in the text and the tables, the agreement with a Poisson series is very satisfactory in the first two cases. In the third case Brieger's error test indicated a significant reduction of variability, and the χ² test showed that there were too many fruits with 4 or 5 seeds and too few with more or with fewer seeds. However, the fact that no fruit was found without seeds may be taken to indicate that Calamondin fruits are not fully parthenocarpic and may develop only with at least one seed. Thus a new analysis was carried out on another class basis. The value accepted for the first class was: no additional seed beyond the indispensable minimum of one seed, and for the later classes the values were: one, two, etc., additional seeds.
Using this new basis for all calculations, complete agreement of the observed and expected frequencies of the corresponding Poisson series was obtained, thus proving that our hypothesis of the impossibility of obtaining fruits without any seed was correct for Calamondin, while the other two oranges were completely parthenocarpic and fruits without seeds did occur.
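A minimal sketch of the two tests described, a χ² comparison against a fitted Poisson series and the error test of observed versus theoretical standard deviation; the frequency data below are invented, not the paper's orange-seed counts, and the pooling of classes with small expected frequencies is omitted for brevity.

```python
import math

def poisson_pmf(k, m):
    """P(X = k) for a Poisson distribution with mean m."""
    return math.exp(-m) * m ** k / math.factorial(k)

def poisson_tests(freq):
    """freq[k] = number of cases observed in class k (k = 0, 1, 2, ...).
    Returns (mean, chi2, observed_sd, theoretical_sd).
    NOTE: pooling of classes with expected frequency < 5, required in
    practice, is omitted in this sketch."""
    n = sum(freq)
    mean = sum(k * f for k, f in enumerate(freq)) / n
    chi2 = sum((f - n * poisson_pmf(k, mean)) ** 2 / (n * poisson_pmf(k, mean))
               for k, f in enumerate(freq))
    var_obs = sum(f * (k - mean) ** 2 for k, f in enumerate(freq)) / (n - 1)
    return mean, chi2, math.sqrt(var_obs), math.sqrt(mean)

# Invented example: counts of 0, 1, 2, 3 events observed in 100 cases.
mean, chi2, sd_obs, sd_theo = poisson_tests([37, 37, 18, 8])
print(mean, chi2, sd_obs, sd_theo)
```

For a true Poisson series the observed and theoretical errors should agree; as the abstract notes, their ratio (or the ratio of the variances, referred to an F table) provides the second test.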
Abstract:
The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency of the desired events p and of the undesired events q, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p+q)ⁿ. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:
n! pⁿ [ (1/n!) (q/p)ⁿ + (1/(1!(n−1)!)) (q/p)ⁿ⁻¹ + (1/(2!(n−2)!)) (q/p)ⁿ⁻² + (1/(3!(n−3)!)) (q/p)ⁿ⁻³ + … ] ≤ P_lim -------- (1b)
There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason we have proposed (1,56) two relative values, one equal to 1/5n as the lowest value of probability and the other equal to 1/10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%), and 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1, pg. 85) is, however, of rather limited applicability owing to the excessive calculation necessary, and we thus have to look for approximations as substitutes.
We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is greater than at least ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given.
A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may accidentally always be the best, or the best and second best, or the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:
n = log P_lim / log (m/a) = log P_lim / (log m − log a) -------- (5)
B) What is the minimum number of individuals necessary in order that a certain type, expected with frequency p, may appear in at least one, two, three, or a = m+1 individuals?
1) For p between 0.1 and 0.9, using the Gaussian approximation, we have:
b = δ √((1−p)/p), c = m/p -------- (7)
n = ((b + √(b² + 4c)) / 2)², n′ = 1/p, n_cor = n + n′ -------- (8)
We have to use the correction n′ when p has a value between 0.25 and 0.75. The Greek letter δ represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33, and 3.09 respectively. If we are only interested in having at least one individual, m becomes equal to zero and the formula reduces to:
n = b² = δ² (1−p)/p, n′ = 1/p, n_cor = n + n′ -------- (9)
2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.
C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?
1) When p1 and p2 are values between 0.1 and 0.9 we have:
n = { δ [√(p1(1−p1)) + √(p2(1−p2))] / (p1 − p2) }², n′ = 1/(p1 − p2), n_cor = n + n′ -------- (13)
We again have to use the unilateral limits of the Gaussian distribution. The correction n′ should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:
b = δ [√(p1(1−p1)) + √(p2(1−p2))] / (p1 − p2), c = m/(p1 − p2), n = ((b + √(b² + 4c)) / 2)², n′ = 1/(p1 − p2) -------- (14)
2) When both p1 and p2 are smaller than 0.1 we determine the quotient p1/p2 and find the corresponding number m2 of a Poisson distribution in Table 2. The value n is found by the equation n = m2/p2 -------- (15)
D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3? 1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9, we have to solve the individual pairwise equations and use the highest value of n thus determined:
n₁.₂ = { δ [√(p1(1−p1)) + √(p2(1−p2))] / (p1 − p2) }² -------- (16)
δ now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58, 3.29. 2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.
E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(exp); or b) if we study samples of n individuals and expect a certain type with a frequency p(exp), what may be the extreme limits of p(obs) in individual families? 1) If we are dealing with values between 0.1 and 0.9 we may use Table 3.
To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(exp) by interpolating between columns. To solve the second problem we start with the respective column for p(exp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation. 2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(exp) and p(obs) into numbers of a Poisson series by multiplication with n. To solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(exp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0… and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(exp) into a value of m and find in the horizontal line corresponding to m(exp) the extreme values of m, which must then be transformed, by dividing by n, into values of p(obs).
F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavourable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency of p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favourable variations a still smaller number might yield one individual of the desired type.
Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above, and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
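A hedged sketch of two of the sample-size computations described, the ranking formula for problem A and the Gaussian approximation for problem B; the function names are mine, and the formulas follow the reconstruction above, so treat this as an illustration rather than the paper's own procedure.

```python
import math

def min_repetitions_ranking(a, m, p_lim=0.05):
    """Problem A: smallest n with (m/a)**n <= p_lim, i.e. the chance that one
    of a treatments ranks among the m best in every one of n repetitions
    falls below the precision limit (formula 5)."""
    return math.ceil(math.log(p_lim) / (math.log(m) - math.log(a)))

def min_individuals(p, a, delta=1.64):
    """Problem B: minimum n so that a type of frequency p appears in at least
    a individuals, Gaussian approximation for 0.1 < p < 0.9 (formulas 7-9).
    delta = 1.64, 2.33, 3.09 for precision 5%, 1%, 0.1% (one-sided)."""
    m = a - 1
    b = delta * math.sqrt((1 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4 * c)) / 2) ** 2
    if 0.25 <= p <= 0.75:          # correction n' = 1/p in the central range
        n += 1 / p
    return math.ceil(n)

print(min_repetitions_ranking(a=10, m=1))  # → 2 repetitions
print(min_individuals(p=0.5, a=1))         # → 5 individuals
```

The second result agrees with the exact binomial: with p = 0.5, five trials give (1/2)⁵ ≈ 0.03 < 0.05 probability of seeing no individual of the desired type, while four trials would not suffice.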