931 results for Capture-recapture Data


Relevance: 30.00%

Publisher:

Abstract:

A modified DLTS technique is proposed for the direct measurement of the capture cross-section of MOS surface states. The nature of the temperature and energy dependence of σn is inferred from the data analysis. The temperature dependence of σn is shown to be consistent with the observed DLTS line shapes.
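
For context only, the sketch below evaluates a commonly used thermally activated parameterisation of a capture cross-section, sigma_n(T) = sigma_inf * exp(-E_b / kT); the prefactor and barrier energy are assumed values, not results from this work.

```python
# Hypothetical illustration of a thermally activated capture cross-section.
# sigma_inf and the barrier energy are assumed values, not taken from the abstract.
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def capture_cross_section(T_kelvin, sigma_inf_cm2=1e-15, barrier_ev=0.05):
    """Thermally activated capture cross-section (cm^2) at temperature T (K)."""
    return sigma_inf_cm2 * np.exp(-barrier_ev / (K_B_EV * np.asarray(T_kelvin)))

print(capture_cross_section([150.0, 200.0, 300.0]))
```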

Relevance: 30.00%

Publisher:

Abstract:

On 30 March 2015 the Australian Federal Government launched its "Re:think" initiative with the objective of achieving a better tax system that delivers taxes that are lower, simpler and fairer. The discussion paper released as part of the "Re:think" initiative is designed to start a national conversation on tax reform. However, inquiries into Australia's future tax system, subsequent reforms and the introduction of new taxes are nothing new. Unfortunately, recent history also demonstrates that reform initiatives arising from reviews of the Australian tax system are often deemed a failure. The most prominent of these failures in recent times is the Minerals Resource Rent Tax (MRRT), which lasted a mere 16 months before its announced repeal. Using the established theoretical framework of regulatory capture to interpret publicly observable data, the purpose of this article is to explain the failure of this arguably sound tax. It concludes that the MRRT legislation itself, through capture by the mining companies, provided internal subsidization in the form of reduced tax and minimal or no rents. In doing so, it offers an opportunity to understand and learn from past experiences to ensure that recommendations coming out of the Re:think initiative do not suffer the same fate.

Relevance: 30.00%

Publisher:

Abstract:

Common coral trout, Plectropomus leopardus Lacepede, crimson snapper, Lutjanus erythropterus Bloch, saddletail snapper, Lutjanus malabaricus (Bloch & Schneider), red emperor, Lutjanus sebae (Cuvier), redthroat emperor, Lethrinus miniatus (Schneider), and grass emperor, Lethrinus laticaudis Alleyne & Macleay, were tagged to determine the effects of barotrauma relief procedures (weighted shot-line release and venting using a hollow needle) and other factors on survival. Release condition was the most significant factor affecting the subsequent recapture rate of all species. Capture depth was significant in all species apart from L. malabaricus and L. miniatus, the general trend being reduced recapture probability with increasing capture depth. Recapture rates of fish hooked in either the lip or mouth were generally significantly higher than those of fish hooked in the throat or gut. A statistically significant benefit from treating fish for barotrauma was found only in L. malabaricus, but the lack of any negative effects of treatment indicated that the practices of venting and shot-lining should not be discouraged by fisheries managers for these species.
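
As an illustration of the kind of tag-recapture analysis described above, a minimal logistic-regression sketch relating recapture probability to capture depth, release condition, hook location and barotrauma treatment; the file name and column names are hypothetical, not from the study.

```python
# Minimal sketch: recapture probability modelled with logistic regression.
# "tag_release_records.csv" and its column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tag_release_records.csv")  # one row per tagged-and-released fish

# recaptured: 0/1; capture_depth_m: numeric; release_condition, hook_location
# and treatment (vented / shot-line / untreated) are categorical factors.
model = smf.logit(
    "recaptured ~ capture_depth_m + C(release_condition)"
    " + C(hook_location) + C(treatment)",
    data=df,
).fit()
print(model.summary())
```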

Relevance: 30.00%

Publisher:

Abstract:

Biological measurements on fish sampled during the course of the FRDC-funded project Growth, Reproduction and Recruitment of Great Barrier Reef Food Fish Stocks (FRDC 90/18). The comma-delimited ASCII file comprises the following fields:

1. Cruise number
2. Date (d-m-y)
3. Region (descriptor of part of the Queensland coast or Great Barrier Reef system)
4. Reef (name or number)
5. Data source (Res = research, Rec = recreational fisher, Com = commercial fisher)
6. Capture method
7. Trap number (where appropriate)
8. Species name
9. LthStd (standard length, cm)
10. LthFrk (fork length, cm)
11. LthTot (total length, cm)
12. WtTot (approximate total weight, g; weighed at sea)
13. FrameWt (weight of frame [after filleting, with viscera], g; weighed in lab)
14. Sex (macroscopic examination only)
15. GonadWt (gonad weight, g)

Data were obtained by the Department of Employment, Economic Development and Innovation (formerly Primary Industries and Fisheries) between 1988 and 1993, primarily in the southern Great Barrier Reef (Capricorn-Bunker and Swain Groups), using fish traps and handlining.
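
A minimal sketch for loading a file with this layout using pandas; the file name and the assumption that there is no header row are hypothetical, and the column labels simply follow the field descriptions above.

```python
# Load the comma-delimited biological measurements file described above.
# "gbr_food_fish_biology.csv" and the absence of a header row are assumptions.
import pandas as pd

columns = [
    "Cruise", "Date", "Region", "Reef", "DataSource", "CaptureMethod",
    "TrapNumber", "Species", "LthStd", "LthFrk", "LthTot", "WtTot",
    "FrameWt", "Sex", "GonadWt",
]

df = pd.read_csv(
    "gbr_food_fish_biology.csv",
    names=columns,
    header=None,        # drop this line if the file carries its own header row
    parse_dates=["Date"],
    dayfirst=True,      # dates are recorded as d-m-y
)
print(df.dtypes)
```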

Relevance: 30.00%

Publisher:

Abstract:

Stay-green plants retain green leaves longer after anthesis and can have improved yield, particularly under water limitation. As senescence is a dynamic process, genotypes with different senescence patterns may exhibit a similar final normalised difference vegetation index (NDVI). By monitoring NDVI from as early as awn emergence to maturity, we demonstrate that analysing senescence dynamics improves insight into genotypic stay-green variation. A senescence evaluation tool was developed to fit a logistic function to NDVI data and used to analyse data from three environments for a wheat (Triticum aestivum L.) population whose lines contrast for stay-green. Key stay-green traits were estimated, including maximum NDVI, senescence rate and a trait integrating NDVI variation after anthesis, as well as the timing from anthesis to the onset, midpoint and conclusion of senescence. The integrative trait and the timing to onset and mid-senescence exhibited high positive correlations with yield and high heritability in the three studied environments. Senescence rate was correlated with yield in some environments, whereas maximum NDVI was associated with yield in a drought-stressed environment. Where resources preclude frequent measurements, we found that NDVI measurements may be restricted to the period of rapid senescence, but caution is required when dealing with lines of different phenology. In contrast, regular monitoring during the whole period after flowering allows the estimation of senescence dynamics traits that may be reliably compared across genotypes and environments. We anticipate that selection for stay-green traits will enhance genetic progress towards high-yielding, stay-green germplasm.
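
In the same spirit as the senescence-evaluation approach described above (though not the authors' tool), the sketch below fits a logistic decline to post-anthesis NDVI readings; the times and NDVI values are invented for illustration.

```python
# Fit a logistic decline to NDVI measured after anthesis and report simple
# stay-green descriptors. The data points below are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic_ndvi(t, ndvi_max, ndvi_min, t_mid, rate):
    """NDVI declining from ndvi_max to ndvi_min with mid-senescence at t_mid."""
    return ndvi_min + (ndvi_max - ndvi_min) / (1.0 + np.exp(rate * (t - t_mid)))

t = np.array([0, 7, 14, 21, 28, 35, 42], dtype=float)   # days after anthesis
ndvi = np.array([0.78, 0.77, 0.74, 0.62, 0.41, 0.24, 0.18])

popt, _ = curve_fit(logistic_ndvi, t, ndvi, p0=[0.8, 0.15, 25.0, 0.2])
ndvi_max, ndvi_min, t_mid, rate = popt
print(f"max NDVI {ndvi_max:.2f}, mid-senescence at {t_mid:.1f} d after anthesis, "
      f"rate {rate:.2f} per day")
```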

Relevance: 30.00%

Publisher:

Abstract:

Advances in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and the amino acid sequences of proteins. The scale and quality of the data hold promise for answering various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which is used to describe the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills both requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
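
As a generic illustration of the analytical marginalisation mentioned above (not the specific partition models developed in the thesis), the sketch below scores alternative partitions of categorical sequence data with a Dirichlet-multinomial marginal likelihood, in which the cluster parameters integrate out in closed form; the symbol counts are invented.

```python
# Score partitions of categorical count data: cluster parameters are integrated
# out analytically under a symmetric Dirichlet prior (Dirichlet-multinomial).
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of one cluster's symbol counts."""
    counts = np.asarray(counts, dtype=float)
    k = counts.size
    return (gammaln(k * alpha) - gammaln(counts.sum() + k * alpha)
            + np.sum(gammaln(counts + alpha)) - k * gammaln(alpha))

def score_partition(clusters):
    """Sum of per-cluster log marginals; higher = better-supported partition."""
    return sum(log_marginal(c) for c in clusters)

# Invented symbol counts (A, C, G, T) for three sequences, partitioned two ways.
seqs = [np.array([9, 1, 0, 0]), np.array([8, 2, 0, 0]), np.array([0, 0, 9, 1])]
print("one cluster:", score_partition([seqs[0] + seqs[1] + seqs[2]]))
print("two clusters:", score_partition([seqs[0] + seqs[1], seqs[2]]))
```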

Relevance: 30.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distribution. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients. The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
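
For orientation, a minimal simulation of a multiplicative error model of the kind nested by the models above; the MEM(1,1) recursion, the unit-mean exponential innovations and the parameter values are illustrative assumptions, not the specifications estimated in the thesis.

```python
# Simulate a simple MEM(1,1): x_t = mu_t * eps_t with a GARCH-style recursion
# for the conditional mean mu_t. Parameters and innovation law are assumed.
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.15, 0.80   # assumed MEM(1,1) parameters
n = 1000

x = np.empty(n)
mu = np.empty(n)
mu[0] = omega / (1.0 - alpha - beta)    # start at the unconditional mean
x[0] = mu[0] * rng.exponential(1.0)     # unit-mean positive innovation

for t in range(1, n):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    x[t] = mu[t] * rng.exponential(1.0)

print("sample mean:", x.mean(),
      "implied unconditional mean:", omega / (1 - alpha - beta))
```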

Relevance: 30.00%

Publisher:

Abstract:

Boron neutron capture therapy (BNCT) is a form of chemically targeted radiotherapy that utilises the high neutron capture cross-section of the boron-10 isotope to achieve a preferential dose increase in the tumour. BNCT dosimetry poses a special challenge, as the radiation dose absorbed by the irradiated tissues consists of several different dose components. Dosimetry is important because the effect of the radiation on the tissue is correlated with the radiation dose. Consistent and reliable radiation dose delivery and dosimetry are thus basic requirements for radiotherapy. The international recommendations for radiotherapy dosimetry are not directly applicable to BNCT. The existing dosimetry guidance for BNCT provides recommendations but also calls for the investigation of complementary methods for comparison and improved accuracy. In this thesis the quality assurance and stability measurements of the neutron beam monitors used in dose delivery are presented. The beam monitors were found not to be affected by the presence of a phantom in the beam, and the effect of the reactor core power distribution was less than 1%. The weekly stability test with activation detectors has been generally reproducible within the recommended tolerance value of 2%. An established toolkit for the determination of the dose components in epithermal neutron beams is presented and applied in an international dosimetric intercomparison. The quantities measured by the participating groups (neutron flux, fast neutron dose and photon dose) were generally in agreement within the stated uncertainties. However, the uncertainties were large, ranging from 3% to 30% (1 standard deviation), emphasising the importance of dosimetric intercomparisons if clinical data are to be compared between different centres. Measurements with the Exradin type 2M ionisation chamber have been repeated in the epithermal neutron beam in the same measurement configuration over the course of 10 years. The presented results exclude the severe sensitivity changes to thermal neutrons that have been reported for this type of chamber. Microdosimetry and polymer gel dosimetry are studied as complementary methods for epithermal neutron beam dosimetry. For microdosimetry, comparison with ionisation chamber results and computer simulation showed that the photon dose measured with microdosimetry was lower than with the two other methods, although the disagreement was within the uncertainties. For the neutron dose, the simulation and microdosimetry results agreed within 10%, while the ionisation chamber technique gave 10-30% lower neutron dose rates than the two other methods. The response of the BANG-3 gel was found to be linear for both photon and epithermal neutron beam irradiation. The dose distribution normalised to the dose maximum measured by the MAGIC polymer gel agreed well with the simulated result near the dose maximum, while the spatial difference between the measured and simulated 30% isodose lines was more than 1 cm. In both the BANG-3 and MAGIC gel studies, the interpretation of the results was complicated by the presence of high-LET radiation.

Relevance: 30.00%

Publisher:

Abstract:

1. The dispersal ability of a species is a key ecological characteristic, affecting a range of processes from adaptation, community dynamics and genetic structure to distribution and range size. It is determined by both intrinsic species traits and extrinsic landscape-related properties. 2. Using butterflies as a model system, the following questions were addressed: (i) given similar extrinsic factors, which intrinsic species trait(s) explain dispersal ability? (ii) can one of these traits be used as a proxy for dispersal ability? (iii) how do interactions between the traits, and phylogenetic relatedness, affect dispersal ability? 3. Four data sets, using different measures of dispersal, were compiled from the published literature. The first data set uses mean dispersal distances from capture-mark-recapture studies, and the other three use mobility indices. Data were collected for six traits that can potentially affect dispersal ability: wingspan, larval host plant specificity, adult habitat specificity, mate location strategy, voltinism and flight period duration. Each data set was subjected to both unifactorial and multifactorial phylogenetically controlled analyses. 4. Among the factors considered, wingspan was the most important determinant of dispersal ability, although the predictive power of the regression models was low. Voltinism and flight period duration also affect dispersal ability, especially in the case of temperate species. Interactions between the factors did not affect dispersal ability, and phylogenetic relatedness was significant in one data set. 5. While using wingspan as the sole proxy for dispersal ability may be problematic, it is usually the only easily accessible species-specific trait for a large number of species. It can thus be a satisfactory proxy when carefully interpreted, especially for analyses involving many species from across the world.
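
As a rough sketch of the multifactorial analysis described above, the snippet below regresses mean dispersal distance on wingspan and the other traits with ordinary least squares; the file and column names are hypothetical, and it omits the phylogenetic control used in the study.

```python
# Minimal multifactorial regression of dispersal ability on species traits.
# "butterfly_traits.csv" and its column names are hypothetical; no
# phylogenetic correction is applied here, unlike in the study.
import pandas as pd
import statsmodels.formula.api as smf

traits = pd.read_csv("butterfly_traits.csv")

model = smf.ols(
    "mean_dispersal_km ~ wingspan_mm + C(host_plant_specificity)"
    " + C(habitat_specificity) + C(mate_location_strategy)"
    " + C(voltinism) + flight_period_weeks",
    data=traits,
).fit()
print(model.summary())
```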

Relevance: 30.00%

Publisher:

Abstract:

Opportunistic selection selects the node that most improves overall system performance. Selecting the best node is challenging, as the nodes are geographically distributed and have only local knowledge. Yet selection must be fast, to allow more time to be spent on data transmission, which exploits the selected node's services. We analyze the impact of imperfect power control on a fast, distributed, splitting-based selection scheme that exploits the capture effect by allowing the transmitting nodes to have different target receive powers, and that uses information about the total received power to speed up selection. Imperfect power control makes the received power deviate from the target and hence affects performance. Our analysis quantifies how it changes the selection probability, reduces the selection speed, and leads to the selection of no node or of a wrong node. We show that the effect of imperfect power control is primarily driven by the ratio of target receive powers. Furthermore, we quantify its effect on the net system throughput.
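
A small Monte Carlo sketch of the underlying issue, assuming two simultaneously transmitting nodes, a log-normal (in dB) power-control error and a fixed capture threshold; the numbers are illustrative and the analysis in the paper is not reproduced here.

```python
# How imperfect power control degrades capture: two nodes transmit together
# with different target receive powers; capture requires the stronger signal
# to exceed the other by a threshold. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000
target_ratio_db = 6.0       # target receive-power ratio between the two nodes
capture_threshold_db = 3.0  # SIR needed to decode the stronger node
sigma_db = 2.0              # std. dev. of power-control error in dB (0 = perfect)

err = rng.normal(0.0, sigma_db, size=(trials, 2))
received_db = np.array([target_ratio_db, 0.0]) + err   # node 0 aims higher
sir_db = received_db[:, 0] - received_db[:, 1]

captured = np.mean(sir_db >= capture_threshold_db)
print(f"intended node captured in {captured:.3f} of trials (sigma = {sigma_db} dB)")
```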

Relevance: 30.00%

Publisher:

Abstract:

A novel algorithm for Virtual View Synthesis based on Non-Local Means Filtering is presented in this paper. Apart from using the video frames from nearby cameras and the corresponding per-pixel depth maps, the algorithm also makes use of the previously synthesized frame. Simple and efficient, the algorithm can synthesize video at any given virtual viewpoint at a faster rate without compromising the quality of the synthesized frames. Experimental results support this claim: the subjective and objective quality of the synthesized frames is comparable to that of existing algorithms.
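
Purely to illustrate the patch-weighting idea behind non-local means, the brute-force sketch below filters a single pixel of a grayscale frame; the actual view-synthesis algorithm additionally uses nearby camera views, per-pixel depth maps and the previously synthesized frame.

```python
# Brute-force non-local means weighting for one pixel of a grayscale image.
import numpy as np

def nlm_pixel(img, y, x, patch=3, search=7, h=10.0):
    """Filter pixel (y, x) with NLM over a (2*search+1)^2 search window."""
    pad = patch + search
    padded = np.pad(img.astype(float), pad, mode="reflect")
    yc, xc = y + pad, x + pad
    ref = padded[yc - patch:yc + patch + 1, xc - patch:xc + patch + 1]

    weights, values = [], []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = padded[yc + dy - patch:yc + dy + patch + 1,
                          xc + dx - patch:xc + dx + patch + 1]
            dist2 = np.mean((ref - cand) ** 2)      # patch similarity
            weights.append(np.exp(-dist2 / (h * h)))
            values.append(padded[yc + dy, xc + dx])
    weights = np.array(weights)
    return float(np.dot(weights, values) / weights.sum())

noisy = np.clip(128.0 + np.random.default_rng(2).normal(0, 20, (32, 32)), 0, 255)
print("filtered centre pixel:", nlm_pixel(noisy, 16, 16))
```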

Relevance: 30.00%

Publisher:

Abstract:

Opportunistic selection in multi-node wireless systems improves system performance by selecting the "best" node and using it for data transmission. In these systems, each node has a real-valued local metric, which is a measure of its ability to improve system performance. Our goal is to identify the best node, which has the largest metric. We propose, analyze, and optimize a new distributed, yet simple, node selection scheme that combines the timer scheme with power control. In it, each node sets a timer and a transmit power level as a function of its metric. The power control is designed such that the best node is captured even if other nodes transmit simultaneously with it. We develop several structural properties of the optimal metric-to-timer-and-power mapping, which maximizes the probability of selecting the best node. These significantly reduce the computational complexity of finding the optimal mapping and yield valuable insights about it. We show that the proposed scheme is scalable and significantly outperforms the conventional timer scheme. We also investigate the effect of key system parameters, including the number of receive power levels, and find that the practical peak power constraint has a negligible impact on the performance of the scheme.
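
A simplified simulation in the spirit of the scheme described above (not the optimised mapping derived in the paper): each node independently maps its metric to a timer slot and a power level, and capture is approximated as succeeding only when exactly one node in the earliest occupied slot uses the highest power level.

```python
# Toy timer-plus-power-control selection with a uniform metric mapping.
import numpy as np

rng = np.random.default_rng(3)

def select(metrics, n_slots=8, n_power_levels=2):
    # Larger metrics fire in earlier slots; within a slot, the better half of
    # the metric interval uses the higher power level (a plain uniform
    # mapping, not the optimised one from the paper).
    u = 1.0 - metrics
    slot = np.floor(u * n_slots).astype(int)
    frac = u * n_slots - slot
    level = (n_power_levels - 1) - np.floor(frac * n_power_levels).astype(int)
    contenders = np.flatnonzero(slot == slot.min())
    winners = contenders[level[contenders] == level[contenders].max()]
    # Capture approximation: success only if exactly one contender transmits
    # at the highest power level; otherwise no node is selected.
    return winners[0] if winners.size == 1 else None

trials, n_nodes = 20_000, 10
hits = 0
for _ in range(trials):
    metrics = rng.random(n_nodes)
    hits += select(metrics) == int(np.argmax(metrics))
print("best node selected in", hits / trials, "of trials")
```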

Relevance: 30.00%

Publisher:

Abstract:

Micro-fabrication technology has substantial potential for identifying molecular markers expressed on the surfaces of tissue cells and viruses. Several conceptual prototypes have shown that cells bearing such markers can be captured by their antibodies immobilized on microchannel substrates, while unbound cells are flushed out by a driven flow. The feasibility and reliability of such a microfluidic-based assay, however, remain to be further tested. In the current work, we developed a microfluidic-based system consisting of a microfluidic chip, an image grabbing unit, data acquisition and analysis software, and a supporting base. Specific binding of CD59-expressing or BSA-coupled human red blood cells (RBCs) to anti-CD59 or anti-BSA antibody-immobilized chip surfaces was quantified by capture efficiency and by the fraction of bound cells. The respective impacts of flow rate, cell concentration, antibody concentration and site density were tested systematically. The measured data indicated that the assay was robust. The robustness was further confirmed by capture efficiencies measured with an independent ELISA-based cell binding assay. These results demonstrate that the system provides a new platform for effectively quantifying cell surface markers, which promotes its potential applications in both biological studies and clinical diagnosis.
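
As a minimal illustration only, the snippet below computes a capture efficiency and a fraction of bound cells from hypothetical cell counts, with a Wilson confidence interval on the bound fraction; the counts and the exact definitions are assumptions, not values or methods from the paper.

```python
# Hypothetical readouts for a microfluidic capture assay from raw cell counts.
from statsmodels.stats.proportion import proportion_confint

cells_injected = 5000   # assumed total cells perfused through the channel
cells_in_field = 1500   # cells observed in the imaged field before washing
cells_bound = 1350      # cells still attached after the wash-out flow

capture_efficiency = cells_bound / cells_injected
fraction_bound = cells_bound / cells_in_field
low, high = proportion_confint(cells_bound, cells_in_field, method="wilson")

print(f"capture efficiency: {capture_efficiency:.2%}")
print(f"fraction of bound cells: {fraction_bound:.2%} (95% CI {low:.2%}-{high:.2%})")
```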

Relevance: 30.00%

Publisher:

Abstract:

Estimates of dolphin school sizes made by observers and crew members aboard tuna seiners, or by observers on ship or aerial surveys, are important components of population estimates of the dolphins involved in the yellowfin tuna fishery in the eastern Pacific. Differences in past estimates made from tuna seiners and from research ships and aircraft have been noted by Brazier (1978). To compare various methods of estimating dolphin school sizes, a research cruise was undertaken with the following major objectives: 1) compare estimates made by observers aboard a tuna seiner and in the ship's helicopter, from aerial photographs, and from counts made at the backdown channel; 2) compare estimates from observers who are told the count of the school size after making their estimates with those from an observer who is not aware of the count, to determine whether observers can learn to estimate more accurately; and 3) obtain movie and still photographs of dolphin schools of known size at various stages of chase, capture and release, to be used for observer training. The secondary objectives of the cruise were to: 1) obtain life history specimens and data from any dolphins killed incidental to purse seining, to be analyzed by the U.S. National Marine Fisheries Service (NMFS); 2) record the evasion tactics of dolphin schools by observing them from the helicopter while the seiner approached the school; 3) examine alternative methods for estimating the distance and bearing of schools where they were first sighted; and 4) collect the Commission's standard cetacean sighting, set log and daily activity data, and expendable bathythermograph data. (PDF contains 31 pages.)