908 results for unknown-input estimation
Abstract:
We show how polarization measurements on the output fields generated by parametric down conversion will reveal a violation of multiparticle Bell inequalities, in the regime of both low and high output intensity. In this case, each spatially separated system upon which a measurement is performed comprises more than one particle. In view of the formal analogy with spin systems, the proposal provides an opportunity to test the predictions of quantum mechanics for spatially separated higher-spin states. It demonstrates that quantum behavior is possible even where measurements are performed on systems of large quantum (particle) number. Our proposal applies both to vacuum-state signal and idler inputs and to the quantum-injected parametric amplifier as studied by De Martini. The effect of detector inefficiencies is included, and weaker Bell-Clauser-Horne inequalities are derived to enable realistic tests of local hidden variables, with auxiliary assumptions, for the multiparticle situation.
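For orientation, the two-setting Bell-Clauser-Horne inequality to which such polarization-correlation tests reduce in the two-particle, ideal-detector limit is recalled below; this is the standard textbook form, not the multiparticle or inefficiency-weakened inequalities derived in the paper. Here a, a' and b, b' denote the analyzer settings at the two spatially separated sites, and p the joint and single detection probabilities.

```latex
% Clauser-Horne inequality for joint detection probabilities p(a,b) and
% singles probabilities p(a), p(b); a, a' and b, b' are the two analyzer
% settings at the spatially separated sites.
-1 \;\le\; p(a,b) + p(a,b') + p(a',b) - p(a',b') - p(a) - p(b) \;\le\; 0
```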
Abstract:
The technique of permanently attaching interdigital transducers (IDTs) to either flat or curved structural surfaces to excite a single Lamb wave mode has demonstrated great potential for quantitative non-destructive evaluation and smart materials design. In this paper, the acoustic wave field in a composite laminated plate excited by an IDT is investigated. On the basis of discrete layer theory and a multiple integral transform method, an analytical-numerical approach is developed to evaluate the surface velocity response of the plate due to the IDT's excitation. In this approach, the frequency spectrum and wave number spectrum of the output of the IDT are obtained directly. The corresponding time domain results are calculated by applying a standard inverse fast Fourier transform technique. Numerical examples are presented to validate the developed method and to show the ability of mode selection and isolation. A new, effective way of transfer function estimation and interpretation is presented by considering the input wave number spectrum in addition to the commonly used input frequency spectrum. The new approach enables a simple physical evaluation of the influence of IDT geometrical features, such as electrode finger widths and overall dimensions, and of excitation signal properties on the input-output characteristics of the IDT. Finally, considering the convenience of Mindlin plate wave theory in numerical computations as well as theoretical analysis, the validity of using this approximate theory to design IDTs for the excitation of the first and second anti-symmetric Lamb modes is examined.
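As a minimal sketch of the "standard inverse fast Fourier transform" step mentioned above: the output spectrum is formed as the excitation spectrum times a transfer function, and the time-domain surface velocity follows from an inverse FFT. The toneburst excitation, sampling parameters and the Gaussian transfer function standing in for the IDT/plate response are hypothetical placeholders, not the paper's model.

```python
import numpy as np

# Assumed sampling parameters (hypothetical)
fs = 10e6                      # sampling frequency, Hz
n = 4096                       # number of time samples
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Hypothetical excitation: Hanning-windowed 5-cycle toneburst at 1 MHz
f0, cycles = 1.0e6, 5
t = np.arange(n) / fs
window = np.where(t < cycles / f0, 0.5 - 0.5 * np.cos(2 * np.pi * f0 * t / cycles), 0.0)
burst = np.sin(2 * np.pi * f0 * t) * window
excitation_spectrum = np.fft.rfft(burst)

# Hypothetical transfer function standing in for the IDT/plate response
transfer = np.exp(-((freqs - f0) ** 2) / (2 * (0.2e6) ** 2))

# Output spectrum = input spectrum x transfer function, then inverse FFT
response_spectrum = excitation_spectrum * transfer
velocity_time_trace = np.fft.irfft(response_spectrum, n=n)

print(velocity_time_trace.shape)   # (4096,) time-domain surface velocity samples
```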
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease the accuracy of haplotype frequency estimation and reconstruction, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
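A toy illustration of the kind of experiment described above, not the paper's pipeline: two biallelic SNPs, unrelated individuals, EM estimation of the four haplotype frequencies, and an adjustable per-call error rate showing how genotyping error perturbs the estimates. The haplotype frequencies, sample size and error model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_genotypes(hap_freqs, n_ind, error_rate):
    """Draw two haplotypes per individual at two biallelic SNPs, return per-locus
    allele counts (0/1/2); with probability error_rate a call is replaced by a
    random value, mimicking genotyping error."""
    haps = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # index i -> alleles (i // 2, i % 2)
    idx = rng.choice(4, size=(n_ind, 2), p=hap_freqs)
    geno = haps[idx[:, 0]] + haps[idx[:, 1]]
    err = rng.random(geno.shape) < error_rate
    return np.where(err, rng.integers(0, 3, size=geno.shape), geno)

def _alleles(g):
    """Unordered pair of allele values consistent with a 0/1/2 genotype."""
    return (0, 1) if g == 1 else (g // 2, g // 2)

def em_hap_freqs(geno, n_iter=200):
    """EM estimate of the four two-SNP haplotype frequencies from unphased
    genotypes of unrelated individuals; only double heterozygotes are ambiguous."""
    p = np.full(4, 0.25)
    for _ in range(n_iter):
        counts = np.zeros(4)
        for g1, g2 in geno:
            if g1 == 1 and g2 == 1:
                w_cis = p[0] * p[3]            # haplotype pair (0,0) / (1,1)
                w_trans = p[1] * p[2]          # haplotype pair (0,1) / (1,0)
                tot = (w_cis + w_trans) or 1.0
                counts[[0, 3]] += w_cis / tot
                counts[[1, 2]] += w_trans / tot
            else:
                a1, a2 = _alleles(g1)
                b1, b2 = _alleles(g2)
                counts[2 * a1 + b1] += 1
                counts[2 * a2 + b2] += 1
        p = counts / counts.sum()
    return p

# Hypothetical strong-LD haplotype frequencies and sample size
true_freqs = np.array([0.40, 0.10, 0.10, 0.40])
clean = simulate_genotypes(true_freqs, n_ind=500, error_rate=0.00)
noisy = simulate_genotypes(true_freqs, n_ind=500, error_rate=0.02)
print("true      :", true_freqs)
print("no error  :", np.round(em_hap_freqs(clean), 3))
print("2% error  :", np.round(em_hap_freqs(noisy), 3))
```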
Abstract:
Introduction: Bioelectrical impedance analysis (BIA) is a useful field measure to estimate total body water (TBW). No prediction formulae have been developed or validated against a reference method in patients with pancreatic cancer. The aim of this study was to assess the agreement between three prediction equations for the estimation of TBW in cachectic patients with pancreatic cancer. Methods: Resistance was measured at frequencies of 50 and 200 kHz in 18 outpatients (10 males and 8 females, age 70.2 +/- 11.8 years) with pancreatic cancer from two tertiary Australian hospitals. Three published prediction formulae were used to calculate TBW: TBWs, developed in surgical patients, and TBWca-uw and TBWca-nw, developed in underweight and normal weight patients with end-stage cancer, respectively. Results: There was no significant difference in the TBW estimated by the three prediction equations: TBWs 32.9 +/- 8.3 L, TBWca-nw 36.3 +/- 7.4 L, TBWca-uw 34.6 +/- 7.6 L. At a population level, there is agreement between the TBW estimates obtained from the three equations in patients with pancreatic cancer. The best combination of low bias and narrow limits of agreement was observed when TBW was estimated from the equation developed in the underweight cancer patients relative to the normal weight cancer patients. When no established BIA prediction equation exists, practitioners should utilize an equation developed in a population with similar critical characteristics such as diagnosis, weight loss, body mass index and/or age. Conclusions: Further research is required to determine the accuracy of the BIA prediction technique against a reference method in patients with pancreatic cancer.
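A brief sketch of the two computations implied above: a BIA prediction of the generic impedance-index form (height squared over resistance) and a Bland-Altman bias / limits-of-agreement comparison between two such predictions. The coefficients and subject data are hypothetical placeholders; the published TBWs, TBWca-uw and TBWca-nw equations are not reproduced here.

```python
import numpy as np

def tbw_impedance_index(height_cm, resistance_ohm, a=1.0, b=0.55):
    """Generic BIA prediction of total body water (litres) from the impedance
    index height^2 / R; coefficients a, b are hypothetical placeholders."""
    return a + b * (height_cm ** 2) / resistance_ohm

def limits_of_agreement(x, y):
    """Bland-Altman bias and 95% limits of agreement between two TBW
    estimates computed on the same subjects."""
    diff = np.asarray(x) - np.asarray(y)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical subjects (height in cm, resistance at 50 kHz in ohms)
height = np.array([168.0, 175.0, 160.0, 181.0, 158.0])
r50 = np.array([520.0, 480.0, 560.0, 450.0, 590.0])

tbw_eq1 = tbw_impedance_index(height, r50, a=1.0, b=0.55)
tbw_eq2 = tbw_impedance_index(height, r50, a=3.5, b=0.50)   # a second placeholder equation

bias, lo, hi = limits_of_agreement(tbw_eq1, tbw_eq2)
print(f"bias = {bias:.2f} L, 95% limits of agreement = ({lo:.2f}, {hi:.2f}) L")
```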
Abstract:
Input-driven models provide an explicit and readily testable account of language learning. Although we share Ellis's view that the statistical structure of the linguistic environment is a crucial and, until recently, relatively neglected variable in language learning, we also recognize that the approach makes three assumptions about cognition and language learning that are not universally shared. The three assumptions concern (a) the language learner as an intuitive statistician, (b) the constraints on what constitute relevant surface cues, and (c) the redescription problem faced by any system that seeks to derive abstract grammatical relations from the frequency of co-occurring surface forms and functions. These are significant assumptions that must be established if input-driven models are to gain wider acceptance. We comment on these issues and briefly describe a distributed, instance-based approach that retains the key features of the input-driven account advocated by Ellis but that also addresses shortcomings of the current approaches.
Abstract:
Measurement of exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard models by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow.
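A schematic contrast between the two modelling choices described above, written for a single-tissue compartment with generic notation (not the paper's): the standard model drives the tissue compartment with the organ inlet (arterial) concentration C_a(t), while the microvascular revision uses the spatial average of the capillary concentration over the capillary length L, which is what the scanner registers in the vascular space.

```latex
% Standard one-tissue compartmental model (organ inlet concentration as input)
\frac{dC_T(t)}{dt} = K_1\, C_a(t) - k_2\, C_T(t)

% Microvascular revision: the vascular contribution is the spatially averaged
% capillary concentration; V_B is the vascular volume fraction in the voxel
\bar{C}_c(t) = \frac{1}{L}\int_0^{L} C_c(x,t)\, dx ,
\qquad
C_{\mathrm{PET}}(t) = (1 - V_B)\, C_T(t) + V_B\, \bar{C}_c(t)
```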
Abstract:
Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advancement towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry, and explosive parameters. These distributions can then be modelled in downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions: compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the variation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation. The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa). Further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM to Split results. Stochastic fragmentation modelling will then be conducted to take into account variation of input parameters. A window of possible fragmentation distributions can then be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
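The Kuz-Ram step referred to above, in the form commonly quoted in the blasting literature (a sketch, not the paper's own formulation): the Kuznetsov relation supplies a characteristic mean fragment size x_m, and the full size distribution is then filled in with a Rosin-Rammler curve of uniformity index n. Fitting a single Rosin-Rammler curve through this one point is what tends to underestimate the fines end discussed in the abstract.

```latex
% Rosin-Rammler form used by Kuz-Ram to extend the Kuznetsov mean size x_m
% to a full distribution: R(x) is the mass fraction retained above size x,
% P(x) the fraction passing, and n the uniformity index from blast geometry.
R(x) = \exp\!\left[-0.693\left(\frac{x}{x_m}\right)^{n}\right],
\qquad
P(x) = 1 - R(x)
```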
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders-of-magnitude improvement in performance over the existing techniques in certain classes of networks. It also provides reliability bounds with little overhead.
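For orientation, a crude (non-hybrid) Monte Carlo estimator of two-terminal network reliability: sample edge states independently, test s-t connectivity, and average. The graph, edge reliabilities and sample size below are hypothetical; the graph-evolution and edge-set-partition schemes discussed in the article are variance-reduction refinements of this baseline and are not reproduced here.

```python
import random
from collections import defaultdict, deque

def st_connected(up_edges, s, t):
    """Breadth-first search over the surviving edges."""
    adj = defaultdict(list)
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def crude_mc_reliability(edges, s, t, n_samples=20000, seed=1):
    """Estimate P(s and t remain connected) when edge (u, v) works
    independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        up = [(u, v) for (u, v, p) in edges if rng.random() < p]
        hits += st_connected(up, s, t)
    return hits / n_samples

# Hypothetical 4-node bridge network; each edge carries its own reliability
edges = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.8), (1, 3, 0.9), (2, 3, 0.9)]
print(crude_mc_reliability(edges, s=0, t=3))
```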
Abstract:
Agriculture in limited resource areas is characterized by small farms which are generally too small to adequately support the needs of an average farm family. The farming operation can be described as a low input cropping system, with the main energy sources being manual labor, draught animals and, in some areas, hand tractors. These farming systems are the most important contributor to the national economy of many developing countries. The role of tillage is similar in dryland agricultural systems in both high input cropping systems (HICS) and low input cropping systems (LICS); however, wet cultivation or puddling is unique to lowland rice-based systems in low input cropping systems. Evidence suggests that tillage may result in marginal increases in crop yield in the short term; in the longer term, however, it may be neutral or give rise to yield decreases associated with soil structural degradation. On marginal soils, tillage may be required to prepare suitable seedbeds or to release adequate nitrogen through mineralization, but in the longer term tillage reduces soil organic matter content, increases soil erodibility and increases the emission of greenhouse gases. Tillage in low input cropping systems involves a very large proportion of the population, and any changes in current practices, such as increased mechanization, will have a large social impact, such as increased unemployment and increasing feminization of poverty, as mechanization may actually reduce jobs for women. Rapid mechanization is likely to result in failures, but slower change, accompanied by measures to provide alternative rural employment, might be beneficial. Agriculture in limited resource areas must produce the food and fiber needs of the community, and its future depends on the development of sustainable tillage/cropping systems that are suitable for the soil and climatic conditions. These should be based on sound biophysical principles and should meet the needs of, and be acceptable to, the farming communities. Some of the principal requirements for a sustainable system include the maintenance of soil health, an increase in the rain water use efficiency of the system, increased use of fertilizer and the prevention of erosion. The maintenance of crop residues on the surface is paramount for meeting these requirements, and competing demands for crop residues must be met from other sources. These requirements can be met within a zonal tillage system combined with suitable agroforestry, which will reduce the need for crop residues. It is, however, essential that farmers participate in the development of any new technologies to ensure adoption of the new system.
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and by Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works, nor the original work by Pahl, provides a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example.
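The piece of "classical ML theory" appealed to above, stated in its generic form rather than for the specific Pahl estimator derived in the paper: for an MLE based on n observations, asymptotic normality with the inverse Fisher information as variance yields the usual Wald-type confidence interval.

```latex
% Asymptotic normality of the MLE and the resulting Wald-type interval;
% I(\theta) is the Fisher information per observation and z_{1-\alpha/2}
% the standard normal quantile.
\sqrt{n}\,\bigl(\hat{\theta}_n - \theta\bigr) \;\xrightarrow{\,d\,}\; \mathcal{N}\!\bigl(0,\, I(\theta)^{-1}\bigr),
\qquad
\hat{\theta}_n \;\pm\; \frac{z_{1-\alpha/2}}{\sqrt{n\, I(\hat{\theta}_n)}}
```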
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper seeks to demonstrate why Crofton's theorem need not be used to link moments of the trace length distribution, captured by scan line or areal mapping, to the moments of the diametral distribution of joints represented as disks, and why it is incorrect to do so. The valid relationships, for areal or scan line mapping, between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would be obtained were Crofton's theorem assumed to apply. For areal mapping the relationship is fortuitously correct, but for scan line mapping it is incorrect.