982 results for Natural Frequency Optimization
Abstract:
Aim: We evaluated the effectiveness of high-frequency transcutaneous electrical nerve stimulation (TENS) as a pain relief resource for primiparous puerperae who had experienced natural childbirth with an episiotomy. Methods: A controlled, randomized clinical study was conducted in a Brazilian maternity ward. Forty puerperae were randomly divided into two groups: high-frequency TENS and a no-treatment control group. Post-episiotomy pain was assessed in the resting and sitting positions and during ambulation. An 11-point numeric rating scale was administered in three separate evaluations (at the beginning of the study, after 60 min and after 120 min). The McGill pain questionnaire was employed at the beginning and 60 min later. TENS at 100 Hz frequency with 75 µs pulses was applied for 60 min without causing any pain. Four electrodes were placed in parallel near the episiotomy site, in the area of the pudendal and genitofemoral nerves. Results: The 11-point numeric rating scale and the McGill pain questionnaire showed a statistically significant reduction of pain in the TENS group, while the control group showed no alteration in the level of discomfort. Hence, high-frequency TENS treatment significantly reduced pain intensity immediately after its use and 60 min later. Conclusion: TENS is a safe and viable non-pharmacological analgesic resource to be employed for pain relief post-episiotomy. The routine use of TENS post-episiotomy is recommended.
Abstract:
Rationale and aim: This paper presents the impact of injuries from nuts and seeds, drawing data from the Susy Safe registry, and highlights that, as for other foreign bodies, the factor most efficiently and substantially susceptible to change in order to decrease accident rates is the education of adults and children, which can be shared with parents by both pediatricians and general practitioners. Labeling and age-related warnings also have a fundamental role in prevention. Methods: The present study draws its data from the Susy Safe registry. Details on injuries are entered in the Susy Safe Web-registry through a standardized case report form that includes information on: the child's age and gender, features of the object, circumstances of the injury (presence of parents and activity) and details of hospitalization (duration, complications and removal details). Cases have been collected prospectively in the Susy Safe system since 06/2005; in addition, past consecutive cases available in each centre adhering to the project have also been entered in the registry. Results: Nuts and seeds are among the most common food items retrieved in foreign body injuries in children. In the Susy Safe registry they represent 38% of the food group and almost 10% of all cases. The trachea, bronchi and lungs were the main locations of foreign body retrieval, accounting for 68% of cases. Hospitalization occurred in 83% of cases and was most frequent for foreign bodies located in the trachea. This location was also the principal site of complications, with a frequency of 68%. There were no significant associations between these outcomes and the age class of the children. The most common complication seen (22.4%) was bronchitis, followed by pneumonia (19.7%). Adult presence was recorded in 71.2% of cases, showing an association (p = 0.009) between adult supervision and the hospitalization outcome.
On the contrary, the association between adult presence and the occurrence of complications was not significant. In 80.7% of cases the incident happened while the child was eating; among those cases, 88.6% involved the trachea, lungs and bronchi. Conclusions: Food-related aspiration injuries are common events in young children, particularly under 4 years of age, and may lead to severe complications. There is a need to study in more depth the specific characteristics of foreign bodies associated with increased hazard, such as size, shape, hardness or firmness, lubricity, pliability and elasticity, in order to better identify risky foods and describe the pathogenetic pathway more precisely. Parents are not adequately aware of this risk; therefore, the number and severity of injuries could be reduced by educating parents and children. Information about food safety should be included in all visits to pediatricians so that parents are able to understand, select and identify key characteristics of hazardous foods and better control the hazard level of various foods. Finally, preventive measures including warning labels on high-risk foods could be implemented. (C) 2012 Elsevier Ireland Ltd. All rights reserved.
Abstract:
In the optimization or parametric analysis of risers, several configurations must be analyzed. Performing time-domain solutions for the dynamic analysis is laborious, since they are time-consuming tasks, so frequency-domain solutions appear to be a possible alternative, mainly in the early stages of riser design. However, frequency-domain analysis is linear and requires that nonlinear effects be treated. The aim of this paper is to present a possible way to treat some of these nonlinearities, using an iterative process together with an analytical correction, and to compare the results of a frequency-domain analysis with those of a full nonlinear analysis. [DOI: 10.1115/1.4006149]
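A common way to handle a quadratic drag nonlinearity inside a linear frequency-domain solver is equivalent (harmonic) linearization within an iterative loop. The sketch below is illustrative only and is not the paper's specific correction; the single-DOF setting and all numerical values (mass, stiffness, drag coefficient, load) are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): iterative equivalent
# linearization of quadratic drag for a 1-DOF oscillator under harmonic load.
# All parameter values below are assumptions chosen for demonstration.
m, k = 1.0e3, 4.0e5          # mass [kg], stiffness [N/m]
cq = 2.0e2                   # quadratic drag coefficient: F_d = cq * |v| * v
F0, w = 5.0e3, 15.0          # load amplitude [N], excitation frequency [rad/s]

X = F0 / k                   # initial guess for the displacement amplitude
for _ in range(100):
    V = w * X                                      # velocity amplitude
    ceq = (8.0 / (3.0 * np.pi)) * cq * V           # harmonic linearization of cq|v|v
    X_new = F0 / np.hypot(k - m * w**2, ceq * w)   # linear FRF evaluated at w
    if abs(X_new - X) < 1e-12:
        break
    X = X_new
```

Each pass solves a purely linear frequency-domain problem with the damping coefficient updated from the previous response amplitude; convergence of `X` signals a self-consistent linearized solution.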
Abstract:
It is well established that female sex hormones have a pivotal role in inflammation. For instance, our group has previously reported that estradiol has proinflammatory actions during the allergic lung response in animal models. Based on these findings, we decided to further investigate whether T regulatory cells are affected by the absence of female sex hormones after ovariectomy. We evaluated by flow cytometry the frequencies of CD4+Foxp3+ T regulatory cells (Tregs) in central and peripheral lymphoid organs, such as the thymus, spleen and lymph nodes. Moreover, we also used the murine model of allergic lung inflammation to evaluate how female sex hormones would affect the immune response in vivo. To address that, ovariectomized or sham-operated female Balb/c mice were sensitized or not with ovalbumin 7 and 14 days later and subsequently challenged twice with aerosolized ovalbumin on day 21. Besides the frequency of CD4+Foxp3+ T regulatory cells, we also measured the cytokines IL-4, IL-5, IL-10, IL-13 and IL-17 in the bronchoalveolar lavage from the lungs of the ovalbumin-challenged groups. Our results demonstrate that the absence of female sex hormones after ovariectomy is able to increase the frequency of Tregs in the periphery. As we did not observe differences in the thymus-derived naturally occurring Tregs, our data may indicate expansion or conversion of peripheral adaptive Tregs. In accordance with Treg suppressive activity, ovariectomized, ovalbumin-sensitized and challenged animals had significantly reduced lung inflammation. This was observed after cytokine analysis of lung explants, which showed a significant reduction of pro-inflammatory cytokines, such as IL-4, IL-5, IL-13 and IL-17, associated with an increased amount of IL-10.
In summary, our data clearly demonstrate that OVA sensitization 7 days after ovariectomy culminates in reduced lung inflammation, which may be directly correlated with the expansion of Tregs in the periphery and the consequent higher IL-10 secretion in the lungs.
Abstract:
Investigation of impulsive signals originated by Partial Discharge (PD) phenomena represents an effective tool for preventing electric failures in High Voltage (HV) and Medium Voltage (MV) systems. The determination of both sensor and instrument bandwidths is the key to achieving meaningful measurements, that is, measurements with the maximum signal-to-noise ratio (SNR). The optimum bandwidth depends on the characteristics of the system under test, which can often be represented as a transmission line characterized by signal attenuation and dispersion phenomena. It is therefore necessary to develop both models and techniques which can accurately characterize the PD propagation mechanisms in each system and work out the frequency characteristics of the PD pulses at the detection point, in order to design sensors able to carry out on-line PD measurements with maximum SNR. Analytical models will be devised in order to predict PD propagation in MV apparatuses. Furthermore, simulation tools will be used where complex geometries render analytical models unfeasible. In particular, PD propagation in MV cables, transformers and switchgears will be investigated, taking into account both radiated and conducted signals associated with PD events, in order to design appropriate sensors.
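The bandwidth/SNR trade-off described above can be shown with a toy model: a low-pass front end applied to a simplified attenuated PD pulse, scoring each cut-off by peak signal amplitude over r.m.s. noise. The pulse shape, noise floor and the SNR criterion itself are assumptions for illustration, not the project's actual sensor-design procedure.

```python
import numpy as np

# Toy model: an attenuated PD pulse reaching the detection point is taken as a
# decaying exponential; the front end is an ideal low-pass with cut-off fc.
# All numbers below are illustrative assumptions.
fs = 1e9                          # sampling rate: 1 GS/s
t = np.arange(0, 2e-6, 1 / fs)
tau = 50e-9                       # pulse decay constant after propagation
pulse = np.exp(-t / tau)

amp = np.abs(np.fft.rfft(pulse))            # amplitude spectrum of the pulse
freqs = np.fft.rfftfreq(len(pulse), 1 / fs)
noise_psd = 1e-4                            # flat noise floor (assumed)

def peak_snr(fc):
    """Peak output amplitude over r.m.s. noise for an ideal low-pass at fc."""
    band = freqs <= fc
    return amp[band].sum() / np.sqrt(noise_psd * band.sum())

candidates = np.linspace(1e6, 500e6, 500)
best_fc = max(candidates, key=peak_snr)     # interior optimum: too narrow a
                                            # band loses signal, too wide a
                                            # band accumulates noise
```

For a pulse with a corner frequency of about 1/(2πτ) ≈ 3 MHz, the optimum cut-off lands a few times above that corner, illustrating why the pulse spectrum at the detection point drives the sensor bandwidth.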
Abstract:
Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters: The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited with a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation of a generic signal x(t) in the time and frequency domains. The CWT is first applied to free oscillations, with excellent results in terms of frequencies, damping and vibration modes.
Its application in the case of ambient vibrations yields accurate modal parameters of the system, although some important observations must be made concerning the damping. The fourth chapter remains on the problem of post-processing data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; indeed, in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. Starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 with the University of Sheffield, an FE model of the bridge is defined in order to establish what type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of a procedure based on wavelet transforms in identification tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
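As a minimal, self-contained illustration of the kind of modal identification discussed above, the sketch below recovers the natural frequency and damping ratio of a synthetic single-DOF free-decay record using an FFT peak and the logarithmic decrement. This is a textbook baseline, not the ellipse-based FRF or wavelet methods developed in the thesis; the signal parameters are assumed for the demonstration.

```python
import numpy as np

# Synthesize a free-decay record of a single-DOF system (assumed parameters).
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
fn, zeta = 2.5, 0.02                        # "true" values used to build the data
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)  # damped natural frequency [rad/s]
x = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)

# Natural frequency from the FFT peak of the decay record.
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
fn_est = freqs[np.argmax(X)]

# Damping ratio from the logarithmic decrement between successive peaks.
peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] > x[i + 1]]
delta = np.log(x[peaks[0]] / x[peaks[5]]) / 5    # decrement over 5 cycles
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
```

On noisy ambient-vibration records this simple peak-picking degrades quickly, which is precisely the motivation for the FRF- and wavelet-based procedures of chapters two to four.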
Abstract:
Stress redistributions in minerals and rocks induce micromechanical and seismic processes in geologically active regions, whereby weak natural electromagnetic radiation in the low-frequency range is emitted. The electromagnetic emissions of non-conducting minerals are attributable to dielectric polarization through several physical effects. A directed mechanical stress produces an equally directed electromagnetic emission. The sources of these electromagnetic emissions are known, but they cannot yet be unambiguously assigned to the various processes in nature, which is why the following refers to a seismo-electromagnetic phenomenon (SEM). With the newly developed NPEMFE method (Natural Pulsed Electromagnetic Field of Earth), the electromagnetic pulses can be registered without ground contact. Regions of the Earth's crust undergoing stress redistribution (e.g. tectonically active faults, potential landslides, sinkholes, mining subsidence, rockbursts) can be recognized and delimited as anomalies. Based on the current state of knowledge of these processes, landslides as well as unconsolidated and solid rocks in which stress redistributions take place were surveyed with a newly developed instrument, the "Cereskop", in the German uplands (Rhineland-Palatinate, Germany) and in the Alpine region (Vorarlberg, Austria, and the Principality of Liechtenstein), and the measurement results were related to classical methods from engineering geology, geotechnics and geophysics. Under field conditions, the anomalies surveyed with the "Cereskop" largely showed good agreement with the stress zones identified by the conventional methods. On the basis of present knowledge, and taking ambiguities into account, the measurement results are analysed and critically assessed.
Abstract:
This study is focused on radio-frequency inductively coupled thermal plasma (ICP) synthesis of nanoparticles, combining experimental and modelling approaches towards process optimization and industrial scale-up, in the framework of the FP7-NMP SIMBA European project (Scaling-up of ICP technology for continuous production of Metallic nanopowders for Battery Applications). First, the state of the art of nanoparticle production through conventional and plasma routes is summarized; then results on the characterization of the plasma source and on the investigation of the nanoparticle synthesis phenomenon are presented, aiming to highlight fundamental process parameters while adopting a design-oriented modelling approach. In particular, an energy balance of the torch and of the reaction chamber, employing a calorimetric method, is presented, while results of three- and two-dimensional modelling of an ICP system are compared with calorimetric and enthalpy-probe measurements to validate the temperature field predicted by the model and used to characterize the ICP system under powder-free conditions. Moreover, results from the modelling of critical phases of the ICP synthesis process, such as precursor evaporation, conversion of the vapour into nanoparticles and nanoparticle growth, are presented, with the aim of providing useful insights both into the design and optimization of the process and into the underlying physical phenomena. Indeed, precursor evaporation, one of the phases with the highest impact on the industrial feasibility of the process, is discussed; by employing models describing particle trajectories and thermal histories, adapted from those originally developed for other plasma technologies or applications, such as DC non-transferred arc torches and powder spheroidization, the evaporation of a micro-sized Si solid precursor in a laboratory-scale ICP system is investigated.
Finally, a discussion of the role of thermo-fluid dynamic fields in nanoparticle formation is presented, as well as a study of the effect of the reaction chamber geometry on the characteristics of the produced nanoparticles and on process yield.
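The calorimetric energy balance mentioned above reduces to bookkeeping of cooling-circuit heat flows: each circuit removes m_dot · cp · ΔT, and the torch efficiency is the fraction of plate power not lost to the cooling water. A minimal sketch, with all flow rates, temperature rises and the plate power assumed purely for illustration:

```python
# Minimal sketch of a calorimetric energy balance for an ICP torch.
# All circuit data and the plate power are illustrative assumptions.
cp_water = 4186.0            # specific heat of water [J/(kg K)]

def circuit_power(m_dot, dT):
    """Heat removed by one cooling-water circuit [W]: m_dot * cp * dT."""
    return m_dot * cp_water * dT

P_plate = 50e3               # generator plate power [W] (assumed)
# (flow rate [kg/s], temperature rise [K]) per circuit -- illustrative numbers
circuits = [(0.30, 8.0), (0.25, 6.0), (0.20, 4.0)]
P_losses = sum(circuit_power(m, dT) for m, dT in circuits)
eta = 1.0 - P_losses / P_plate   # fraction of power delivered to the plasma
```

With these assumed numbers about 60% of the plate power reaches the plasma; in practice each circuit (coil, torch body, reaction chamber) is metered separately to split the losses.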
Abstract:
Turfgrasses are ubiquitous in the urban landscape and their role in the carbon (C) cycle is increasingly important, also owing to the considerable footprint of their management practices. It is crucial to understand the mechanisms driving the C assimilation potential of these terrestrial ecosystems. Several approaches have been proposed to assess C dynamics: micro-meteorological methods, small-chamber enclosure systems (SC), the chrono-sequence approach and various models. Natural and human-induced variables influence turfgrass C fluxes. Species composition, environmental conditions, site characteristics, former land use and agronomic management are the most important factors considered in the literature as driving C sequestration potential. At the same time, the different approaches seem to influence C budget estimates. In order to study the effect of different management intensities on turfgrass, we estimated net ecosystem exchange (NEE) through an SC approach on a hole of a golf course in the province of Verona (Italy) for one year. The SC approach presented several advantages but also limits, related to the measurement frequency, timing and duration over time, and to the methodological errors of the measuring system. Daily CO2 fluxes changed according to the intensity of maintenance, likely due to the different inputs and disturbances affecting biogeochemical cycles, combined with the different leaf area index (LAI). The annual cumulative NEE decreased as the intensity of management increased. NEE was related to the seasonality of the turfgrass, following temperatures and physiological activity. Generally, during the growing season, CO2 fluxes towards the atmosphere exceeded the C sequestered. The cumulative NEE showed a system near a steady state for C dynamics. In the final part, greenhouse gas (GHG) emissions due to fossil fuel consumption for turfgrass upkeep were estimated, showing that turfgrass may turn out to be a considerable C source.
The C potential of trees and shrubs needs to be considered to obtain a complete budget.
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, passing through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are more and more in demand for finding good solutions to complex strategic problems quickly. Resource optimization is nowadays fundamental ground on which to build successful projects. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development cannot keep pace with that enjoyed by modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the new parallel architectures available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing instances of previously intractable dimensions to be addressed. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claim by presenting computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40x.
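The parallelism inherent in Dynamic Programming can be illustrated with 0/1 knapsack, a basic relative of the packing problems mentioned above: each DP stage depends only on the previous row, so all capacity cells within a stage are mutually independent and can be computed in parallel. The sketch below shows the stage structure serially, for clarity; the problem choice and formulation are generic textbook material, not the thesis's specific algorithms.

```python
# 0/1 knapsack via dynamic programming, written to expose stage parallelism:
# every cell of 'curr' reads only 'prev', so a GPU or multicore backend could
# evaluate all capacity cells of one stage concurrently.
def knapsack(values, weights, capacity):
    prev = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # cells of 'curr' are mutually independent: one parallel stage
        curr = [max(prev[c], prev[c - w] + v) if c >= w else prev[c]
                for c in range(capacity + 1)]
        prev = curr
    return prev[capacity]
```

For example, `knapsack([60, 100, 120], [10, 20, 30], 50)` returns 220. The same row-by-row dependence pattern is what makes such recurrences amenable to the order-of-magnitude speed-ups reported above.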
Abstract:
INTRODUCTION: The objective was to study the effects of a novel lung volume optimization procedure (LVOP) using high-frequency oscillatory ventilation (HFOV) upon gas exchange, transpulmonary pressure (TPP) and hemodynamics in a porcine model of surfactant depletion. METHODS: With institutional review board approval, hemodynamics, blood gas analysis, TPP and pulmonary shunt fraction were obtained in six anesthetized pigs before and after saline lung lavage. Measurements were acquired during pressure-controlled ventilation (PCV) prior to and after lung damage, and during a LVOP with HFOV. The LVOP comprised a recruitment maneuver with a continuous distending pressure (CDP) of 45 mbar for 2.5 minutes, and a stepwise decrease of the CDP (5 mbar every 5 minutes) from 45 to 20 mbar. The TPP level identified during the decrease in CDP was the one that assured a change in the PaO2/FiO2 ratio < 25% compared with maximum lung recruitment at a CDP of 45 mbar (CDP45). Data are presented as the median (25th-75th percentile); differences between measurements were determined by Friedman repeated-measures analysis on ranks and multiple comparisons (Tukey's test). The level of significance was set at P < 0.05. RESULTS: The PaO2/FiO2 ratio increased from 99.1 (56.2-128) Torr at PCV post-lavage to 621 (619.4-660.3) Torr at CDP45 (P < 0.031). The pulmonary shunt fraction decreased from 51.8% (49-55%) at PCV post-lavage to 1.03% (0.4-3%) at CDP45 (P < 0.05). Cardiac output and stroke volume decreased at CDP45 (P < 0.05) compared with PCV, whereas heart rate, mean arterial pressure and intrathoracic blood volume remained unchanged. A TPP of 25.5 (17-32) mbar was required to preserve a difference in the PaO2/FiO2 ratio < 25% relative to CDP45; this TPP was achieved at a CDP of 35 (25-40) mbar. CONCLUSION: This HFOV protocol is easy to perform and allows fast determination of an adequate TPP level that preserves oxygenation.
Systemic hemodynamics, as a measure of safety, showed no relevant deterioration throughout the procedure.
Abstract:
The Alps provide a high habitat diversity for plant species, structured by broad- and fine-scale abiotic site conditions. In man-made grasslands, vegetation composition is additionally affected by the type of landuse. We recorded vegetation composition in 216 parcels of grassland in 12 municipalities representing an area of 170 x 70 km in the south-eastern part of the Swiss Alps. Each parcel was characterized by a combination of altitudinal level (valley, intermediate, alp), traditional landuse (mown, grazed), current management (mown, grazed, abandoned), and fertilization (unfertilized, fertilized). For each parcel we also assessed the abiotic factors aspect, slope, pH value, and geographic coordinates, and for each municipality annual precipitation and its cultural tradition. We analysed vegetation composition using (i) variation partitioning in RDA, (ii) cover of graminoids, non-legume forbs, and legumes, and (iii) dominance and frequency of species. Species composition was determined by, in decreasing order of variation explained, landuse, broad-scale abiotic factors, fine-scale abiotic factors, and cultural tradition. Current socio-economically motivated landuse changes, such as grazing of unfertilized former meadows or their abandonment, strongly affect vegetation composition. In our study, the frequency of characteristic meadow species was significantly smaller in grazed and even smaller in abandoned parcels than in still mown ones, suggesting less severe consequences of grazing for vegetation composition than of abandonment. Therefore, low-intensity grazing and mowing every few years should be considered valuable conservation alternatives to abandonment. Furthermore, because each landuse type was characterized by different species, a high variety of landuse types should be promoted to preserve plant species diversity in Alpine grasslands. (C) 2007 Gesellschaft fur Okologie. Published by Elsevier GmbH. All rights reserved.
Abstract:
Central Switzerland lies tectonically in an intraplate area and recurrence rates of strong earthquakes exceed the time span covered by historic chronicles. However, many lakes are present in the area that act as natural seismographs: their continuous, datable and high-resolution sediment succession allows extension of the earthquake catalogue to pre-historic times. This study reviews and compiles available data sets and results from more than 10 years of lacustrine palaeoseismological research in lakes of northern and Central Switzerland. The concept of using lacustrine mass-movement event stratigraphy to identify palaeo-earthquakes is showcased by presenting new data and results from Lake Zurich. The Late Glacial to Holocene mass-movement units in this lake document a complex history of varying tectonic and environmental impacts. Results include sedimentary evidence of three major and three minor, simultaneously triggered basin-wide lateral slope failure events interpreted as the fingerprints of palaeoseismic activity. A refined earthquake catalogue, which includes results from previous lake studies, reveals a non-uniform temporal distribution of earthquakes in northern and Central Switzerland. A higher frequency of earthquakes in the Late Glacial and Late Holocene period documents two different phases of neotectonic activity; they are interpreted to be related to isostatic post-glacial rebound and relatively recent (re-)activation of seismogenic zones, respectively. Magnitudes and epicentre reconstructions for the largest identified earthquakes provide evidence for two possible earthquake sources: (i) a source area in the region of the Alpine or Sub-Alpine Front due to release of accumulated north-west/south-east compressional stress related to an active basal thrust beneath the Aar massif; and (ii) a source area beneath the Alpine foreland due to reactivation of deep-seated strike-slip faults. 
Such activity has been repeatedly observed instrumentally, for example, during the most recent magnitude 4.2 and 3.5 earthquakes of February 2012, near Zug. The combined lacustrine record from northern and Central Switzerland indicates that at least one of these potential sources has been capable of producing magnitude 6.2 to 6.7 events in the past.
Abstract:
The frequency of large-scale heavy precipitation events in the European Alps is expected to undergo substantial changes under current climate change. Hence, knowledge about the past natural variability of floods caused by heavy precipitation constitutes important input for climate projections. We present a comprehensive Holocene (10,000 years) reconstruction of the flood frequency in the Central European Alps combining 15 lacustrine sediment records. These records provide an extensive catalog of flood deposits, which were generated by flood-induced underflows delivering terrestrial material to the lake floors. The multi-archive approach allows local weather patterns, such as thunderstorms, to be suppressed from the obtained climate signal. We reconstructed mainly late-spring to fall events, since ice cover and precipitation in the form of snow in winter at the high-altitude study sites inhibit the generation of flood layers. We found that flood frequency was higher during cool periods, coinciding with lows in solar activity. In addition, flood occurrence shows periodicities that are also observed in reconstructions of solar activity from C-14 and Be-10 records (2500-3000 and 900-1200 years, as well as about 710, 500, 350, 208 (Suess cycle), 150, 104 and 87 (Gleissberg cycle) years). As an atmospheric mechanism, we propose an expansion/shrinking of the Hadley cell with increasing/decreasing air temperature, causing dry/wet conditions in Central Europe during phases of high/low solar activity. Furthermore, differences between the flood patterns of the Northern Alps and the Southern Alps indicate changes in North Atlantic circulation. Enhanced flood occurrence in the South compared with the North suggests a pronounced southward position of the Westerlies and/or blocking over the northern North Atlantic, hence resembling a negative NAO state (most distinct from 4.2 to 2.4 kyr BP and during the Little Ice Age).
South-Alpine flood activity therefore provides a qualitative record of variations in a paleo-NAO pattern during the Holocene. Additionally, increased South-Alpine flood activity contrasts with low precipitation in tropical Central America (Cariaco Basin) on Holocene and centennial time scales. This observation is consistent with a Holocene southward migration of the Atlantic circulation system, and hence of the ITCZ, driven by decreasing summer insolation in the Northern Hemisphere, as well as with shorter-term fluctuations probably driven by solar activity. (C) 2013 Elsevier Ltd. All rights reserved.
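Detecting centennial periodicities such as the ~208-yr Suess cycle in an unevenly sampled sediment record can be illustrated with a simple least-squares periodogram (conceptually akin to the Lomb-Scargle method for irregular sampling). The synthetic flood series and all numbers below are fabricated purely for illustration.

```python
import numpy as np

# Synthetic, unevenly sampled "flood activity" series carrying an assumed
# 208-yr periodicity plus noise -- illustrative data only.
rng = np.random.default_rng(0)
ages = np.sort(rng.uniform(0, 10000, 400))       # irregular ages [yr BP]
flood = np.sin(2 * np.pi * ages / 208.0) + 0.3 * rng.standard_normal(400)

def fitted_amplitude(period):
    """Least-squares amplitude of a sinusoid of given period (uneven sampling)."""
    w = 2 * np.pi / period
    A = np.column_stack([np.sin(w * ages), np.cos(w * ages), np.ones_like(ages)])
    coef, *_ = np.linalg.lstsq(A, flood, rcond=None)
    return np.hypot(coef[0], coef[1])

periods = np.linspace(80, 1200, 800)             # candidate periods [yr]
best_period = periods[np.argmax([fitted_amplitude(p) for p in periods])]
```

Because the design matrix is rebuilt per candidate period, the fit tolerates the irregular age spacing typical of dated sediment records, which an ordinary FFT does not.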
Abstract:
Recent studies indicate that polymorphic genetic markers are potentially helpful in resolving genealogical relationships among individuals in a natural population. Genetic data provide opportunities for paternity exclusion when genotypic incompatibilities are observed among individuals, and the present investigation examines the resolving power of genetic markers for unambiguous positive determination of paternity. Under the assumption that the mother of each offspring in a population is unambiguously known, an analytical expression for the fraction of males excluded from paternity is derived for the case where males and females may be drawn from two different gene pools. This theoretical formulation can also be used to predict the fraction of births for each of which all but one male can be excluded from paternity. We show that even when the average probability of exclusion approaches unity, a substantial fraction of births yield equivocal mother-father-offspring determinations. The number of loci needed to raise the frequency of unambiguous determinations to a high level is beyond the scope of current electrophoretic studies in most species. Application of this theory to electrophoretic data on Chamaelirium luteum (L.) shows that of 2255 offspring derived from 273 males and 70 females, only 57 triplets could be unequivocally determined with eight polymorphic protein loci, even though the average combined exclusionary power of these loci was 73%. The distribution of potentially compatible male parents, based on multilocus genotypes, was reasonably well predicted from the allele frequency data available for these loci. We demonstrate that genetic paternity analysis in natural populations cannot be reliably based on exclusionary principles alone. In order to measure the reproductive contributions of individuals in natural populations, more elaborate likelihood principles must be employed.
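A combined exclusionary power like the quoted 73% follows from combining per-locus exclusion probabilities multiplicatively: a random non-father escapes exclusion only if he escapes it at every locus. A minimal sketch, with per-locus values that are illustrative assumptions chosen only to reproduce a similar figure (they are not the study's actual loci):

```python
# Combining per-locus paternity-exclusion probabilities: if locus i alone
# excludes a random non-father with probability p_i, the chance that at least
# one locus excludes him is 1 - prod(1 - p_i).
def combined_exclusion(per_locus):
    escape = 1.0
    for p in per_locus:
        escape *= 1.0 - p      # probability of escaping exclusion at every locus
    return 1.0 - escape

# illustrative values for eight moderately informative loci (assumed)
loci = [0.18, 0.12, 0.20, 0.15, 0.10, 0.22, 0.08, 0.14]
power = combined_exclusion(loci)   # close to 0.73 with these assumed values
```

This also shows why exclusion alone fails in the study's setting: with 273 candidate males, each offspring leaves on average about 272 × (1 − 0.73) ≈ 73 non-fathers unexcluded, so a unique mother-father-offspring triplet is rare even at high exclusionary power.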