817 results for SAMPLE SURVEYS
Abstract:
Demersal groundfish densities were estimated by conducting a visual strip-transect survey via manned submersible on the continental shelf off Cape Flattery, Washington. The purpose of this study was to evaluate the statistical sampling power of the submersible survey as a tool to discriminate density differences between trawlable and untrawlable habitats. A geophysical map of the study area was prepared with side-scan sonar imagery, multibeam bathymetry data, and known locations of historical NMFS trawl survey events. Submersible transects were completed at randomly selected dive sites located in each habitat type. Significant differences in density between habitats were observed for lingcod (Ophiodon elongatus), yelloweye rockfish (Sebastes ruberrimus), and tiger rockfish (S. nigrocinctus) individually, and for “all rockfish” and “all flatfish” in the aggregate. Flatfish were more than ten times as abundant in the trawlable habitat samples as in the untrawlable samples, whereas rockfish as a group were over three times as abundant in the untrawlable habitat samples. Guidelines for sample sizes and implications for the estimation of continental shelf trawl-survey habitat bias are considered. We demonstrate an approach that can be used to establish sample size guidelines for future work by illustrating the interplay between statistical sampling power and 1) habitat-specific density differences, 2) variance of density differences, and 3) the proportion of untrawlable area in a habitat.
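The sample-size reasoning described above can be illustrated with a standard two-sample, normal-approximation power calculation (a generic sketch, not the authors' actual procedure; the density difference and standard deviation below are invented for illustration):

```python
import math
from statistics import NormalDist

def samples_per_habitat(delta, sigma, alpha=0.05, power=0.80):
    """Transects needed per habitat to detect a mean density
    difference `delta` with common standard deviation `sigma`,
    using the two-sample normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Invented numbers: detect a 1.0 fish / 100 m^2 difference when the
# between-transect standard deviation is 1.5 fish / 100 m^2.
print(samples_per_habitat(1.0, 1.5))  # 36 transects per habitat
```

Since the required n scales as (sigma/delta)^2, halving the detectable difference roughly quadruples the number of transects, which is the interplay between power, effect size, and variance the abstract refers to.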
Abstract:
The US National Oceanic and Atmospheric Administration (NOAA) Fisheries Continuous Plankton Recorder (CPR) Survey has sampled four routes: Boston–Nova Scotia (1961–present), New York toward Bermuda (1976–present), Narragansett Bay–Mount Hope Bay–Rhode Island Sound (1998–present) and eastward of Chesapeake Bay (1974–1980). NOAA involvement began in 1974, when it assumed responsibility for the existing Boston–Nova Scotia route from what is now the UK's Sir Alister Hardy Foundation for Ocean Science (SAHFOS). Training, equipment and computer software were provided by SAHFOS to ensure continuity on this route and standard protocols for any new routes. Data for the first 14 years of this route were provided to NOAA by SAHFOS. Comparison of collection methods; sample processing; and sample identification, staging and counting techniques revealed near-consistency between NOAA and SAHFOS. One departure involved phytoplankton counting standards; this has since been addressed and the data corrected. Within- and between-survey taxonomic and life-stage names and their consistency through time were, and continue to be, an issue. To address this, a cross-reference table has been generated that contains the SAHFOS taxonomic code, NOAA taxonomic code, NOAA life-stage code, National Oceanographic Data Center (NODC) taxonomic code, Integrated Taxonomic Information System (ITIS) serial number and authority, and consistent use/route. This table is available for review/use by other CPR surveys. Details of the NOAA and SAHFOS comparison and analytical techniques unique to NOAA are presented.
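The cross-reference table described above is essentially a keyed join across coding systems. A minimal sketch of such a structure (all codes below are invented placeholders, not entries from the actual table):

```python
# Each record links one taxon/life stage across the coding systems the
# survey reconciles. Every code here is a made-up placeholder.
XREF = [
    {"taxon": "Calanus finmarchicus", "stage": "copepodite V",
     "sahfos_code": "S-0001", "noaa_code": "N-1001", "noaa_stage": "05",
     "nodc_code": "ND-0001", "itis_tsn": "000000"},
]

# Index once, then resolve any SAHFOS code to the full record.
BY_SAHFOS = {rec["sahfos_code"]: rec for rec in XREF}

def lookup(sahfos_code):
    """Return the full cross-reference record, or None if unmapped."""
    return BY_SAHFOS.get(sahfos_code)

print(lookup("S-0001")["noaa_code"])  # N-1001
```

A table of this shape makes name drift visible: any taxon whose SAHFOS and NOAA codes stop agreeing over time shows up as a changed or missing record rather than a silent mismatch.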
Abstract:
In recent years, wide-field sky surveys providing deep multi-band imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SNe): systematic light curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 years and classified using both spectroscopy and machine learning-based photometric techniques. We develop and apply a new Bayesian model for the full multi-band evolution of each light curve in the sample. We find no evidence of a sub-population of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau-phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied to supernova cosmology, offering a standardizable candle good to an intrinsic scatter of 0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that both correction of systematic discrepancies between modeled and observed SN IIP light curve properties and an expanded grid of progenitor properties are needed to enable robust progenitor inferences from multi-band light curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.
Abstract:
We probe the systematic uncertainties from the 113 Type Ia supernovae (SN Ia) in the Pan-STARRS1 (PS1) sample along with 197 SN Ia from a combination of low-redshift surveys. The companion paper by Rest et al. describes the photometric measurements and cosmological inferences from the PS1 sample. The largest systematic uncertainty stems from the photometric calibration of the PS1 and low-z samples. We increase the number of observed Calspec standards used to define the PS1 calibration system from 7 to 10. The PS1 and SDSS-II calibration systems are compared and discrepancies of up to ∼0.02 mag are recovered. We find that uncertainties in the proper way to treat intrinsic colors and reddening produce differences in the recovered value of w of up to 3%. We estimate masses of host galaxies of PS1 supernovae and detect an insignificant difference in distance residuals of the full sample of 0.037 ± 0.031 mag between host galaxies with high and low masses. Assuming flatness and including systematic uncertainties in our analysis of SNe measurements alone, we find w = -1.120 +0.360/-0.206 (stat) +0.269/-0.291 (sys). With additional constraints from baryon acoustic oscillation (BAO), cosmic microwave background (CMB; Planck) and H0 measurements, we find w = -1.166 +0.072/-0.069 and Ωm = 0.280 +0.013/-0.012 (statistical and systematic errors added in quadrature). The significance of the inconsistency with w = -1 depends on whether we use Planck or Wilkinson Microwave Anisotropy Probe (WMAP) measurements of the CMB: w(BAO+H0+SN+WMAP) = -1.124 +0.083/-0.065.
Abstract:
BACKGROUND: The escalating prevalence of obesity might prompt obese subjects to consider themselves as normal, as this condition is gradually becoming as frequent as normal weight. In this study, we aimed to assess trends in the associations between obesity and self-rated health in two countries. METHODS: Data from the Portuguese (years 1995-6, 1998-9 and 2005-6) and Swiss (1992-3, 1997, 2002 and 2007) National Health Surveys were used, corresponding to more than 130,000 adults (64,793 for Portugal and 65,829 for Switzerland). Body mass index and self-rated health were derived from self-reported data. RESULTS: Obesity levels were higher in Portugal (17.5% in 2005-6 vs. 8.9% in 2007 in Switzerland, p < 0.001) and increased in both countries. The prevalence of participants rating their health as "bad" or "very bad" was higher in Portugal than in Switzerland (21.8% in 2005-6 vs. 3.9% in 2007, p < 0.001). In both countries, obese participants more frequently rated their health as "bad" or "very bad" than participants with normal weight. In Switzerland, the prevalence of "bad" or "very bad" ratings among obese participants increased from 6.5% in 1992-3 to 9.8% in 2007, while in Portugal it decreased from 41.3% to 32.3%. After multivariate adjustment, the odds ratio (OR) of rating one's health as "bad" or "very bad" among obese relative to normal weight participants almost doubled in Switzerland: from 1.38 (95% confidence interval, CI: 1.01-1.87) in 1992-3 to 2.64 (95% CI: 2.14-3.26) in 2007, and similar findings were obtained after sample weighting. Conversely, no such trend was found in Portugal: 1.35 (95% CI: 1.23-1.48) in 1995-6 and 1.52 (95% CI: 1.37-1.70) in 2005-6. CONCLUSION: Obesity is increasing in Switzerland and Portugal. Obesity is increasingly associated with poorer self-rated health in Switzerland but not in Portugal.
Abstract:
We evaluate a number of real estate sentiment indices to ascertain current and forward-looking information content that may be useful for forecasting demand and supply activity. Our focus lies on sector-specific surveys targeting players on the supply side of both residential and non-residential real estate markets. Analyzing the dynamic relationships within a Vector Auto-Regression (VAR) framework, we test the efficacy of these indices by comparing them with other coincident indicators in predicting real estate returns. Overall, our analysis suggests that sentiment indicators convey important information which should be embedded in the modeling exercise to predict real estate market returns. Generally, sentiment indices show better information content than broad economic indicators. The goodness of fit of our models is higher for the residential market than for the non-residential real estate sector. The impulse responses, in general, conform to our theoretical expectations. Variance decompositions and out-of-sample predictions generally show the desired contribution and reasonable improvement, respectively, thus upholding our hypothesis. Quite remarkably, and consistent with theory, the predictability swings when we look through different phases of the cycle. This perhaps suggests that, for example during recessions, market players' expectations may be a more accurate predictor of future performance, conceivably indicating a 'negative' information processing bias and thus conforming to the precautionary motive of consumer behaviour.
Abstract:
Background Appropriately conducted adaptive designs (ADs) offer many potential advantages over conventional trials. They make better use of accruing data, potentially saving time, trial participants, and limited resources compared to conventional, fixed sample size designs. However, one can argue that ADs are not implemented as often as they should be, particularly in publicly funded confirmatory trials. This study explored barriers, concerns, and potential facilitators to the appropriate use of ADs in confirmatory trials among key stakeholders. Methods We conducted three cross-sectional, online parallel surveys between November 2014 and January 2015. The surveys were based upon findings drawn from in-depth interviews of key research stakeholders, predominantly in the UK, and targeted Clinical Trials Units (CTUs), public funders, and private sector organisations. Response rates were as follows: 30 (55%) of UK CTUs, 17 (68%) of private sector organisations, and 86 (41%) of public funders. A Rating Scale Model was used to rank barriers and concerns in order of perceived importance for prioritisation. Results Top-ranked barriers included the lack of bridge funding accessible to UK CTUs to support the design of ADs, limited practical implementation knowledge, preference for traditional mainstream designs, difficulties in marketing ADs to key stakeholders, time constraints to support ADs relative to competing priorities, lack of applied training, and insufficient access to case studies of undertaken ADs to facilitate practical learning and successful implementation. Associated practical complexities and inadequate data management infrastructure to support ADs were reported as more pronounced in the private sector. For funders of public research, inadequate description of the rationale, scope, and decision-making criteria to guide the planned AD in researchers' grant proposals was viewed as a major obstacle.
Conclusions There are still persistent and important perceptions of individual and organisational obstacles hampering the use of ADs in confirmatory trials research. Stakeholder perceptions about barriers are largely consistent across sectors, with a few exceptions that reflect differences in organisations' funding structures, experiences and characterisation of study interventions. Most barriers appear connected to a lack of practical implementation knowledge and applied training, and limited access to case studies to facilitate practical learning.
Keywords: Adaptive designs; flexible designs; barriers; surveys; confirmatory trials; Phase 3; clinical trials; early stopping; interim analyses
Abstract:
Interest in the systematic analysis of astronomical time series data, together with developments in astronomical instrumentation and automation over the past two decades, has raised several questions about how to analyze and synthesize the growing amount of data. These data have led to many discoveries in areas of modern astronomy such as asteroseismology, exoplanets, and stellar evolution. However, treatment and analysis methods have failed to keep pace with the development of the instruments themselves, although much effort has been made. In the present thesis, we propose new methods of data analysis and two catalogs of variable stars that allowed the study of rotational modulation and stellar variability. We analyzed the photometric databases from two distinct missions: CoRoT (Convection Rotation and planetary Transits) and WFCAM (Wide Field Camera). Furthermore, the present work describes several methods for the analysis of photometric data and proposes and refines data-selection techniques based on variability indices. Preliminary results show that these variability indices are more efficient than the indices most often used in the literature. An efficient selection of variable stars is essential to improve the efficiency of all subsequent steps. From these analyses two catalogs were obtained. First, from the WFCAM database we produced a catalog of 319 variable stars observed in the Y, Z, J, H, and K photometric bands. These stars show periods ranging between ∼0.2 and ∼560 days, with variability signatures of RR Lyrae, Cepheids, LPVs, and cataclysmic variables, among many others. Second, from the CoRoT database we selected 4,206 stars with typical signatures of rotational modulation, using a supervised process. These stars show periods ranging between ∼0.33 and ∼92 days, variability amplitudes between ∼0.001 and ∼0.5 mag, color indices (J - H) between ∼0.0 and ∼1.4 mag, and CoRoT spectral types FGKM.
The WFCAM variable star catalog is being used to compose a database of light curves to serve as templates in an automatic classifier for variable stars observed by the VVV project (VISTA Variables in the Vía Láctea); moreover, it is a fundamental starting point for different scientific cases: for example, a set of 12 young stars in a star-forming region, and the study of RR Lyrae stars, whose properties are not well established in the infrared. Based on the CoRoT results we were able to show, for the first time, the evolution of rotational modulation for a wide, homogeneous sample of field stars. The results are in agreement with those expected from stellar evolution theory. Furthermore, we identified 4 solar-type stars (with color indices, spectral type, luminosity class, and rotation period close to the Sun's), as well as 400 M-giant stars of special interest for forthcoming studies. From the solar-type stars we can describe the future and past of the Sun, while the properties of M stars are not well known. Our results allow us to conclude that the color-period diagram depends strongly on reddening, which increases the uncertainties of the age-period relations derived by previous works using CoRoT data. This thesis provides a large data set for different scientific studies, such as magnetic activity, cataclysmic variables, brown dwarfs, RR Lyrae, solar analogs, and giant stars, among others. For instance, these data will allow us to study the relationship of magnetic activity with stellar evolution. Beyond these aspects, this thesis presents an improved classification for a significant number of stars in the CoRoT database and introduces a new set of tools that can be used to improve the entire process of photometric database analysis.
Abstract:
Traditional submerged aquatic vegetation (SAV) survey methods are time-consuming and therefore costly. Optical remote sensing is an alternative, but it has limitations in the aquatic environment. Echosounder techniques, by contrast, are efficient at detecting submerged targets. Therefore, the aim of this study is to evaluate different interpolation approaches applied to SAV sample data collected by echosounder. This case study was performed in a region of the Uberaba River, Brazil. The interpolation methods evaluated in this work were: Nearest Neighbor, Weighted Average, Triangular Irregular Network (TIN), and ordinary kriging. The best results were obtained with kriging interpolation. Thus, we recommend the use of geostatistics for spatial inference of SAV from sample data surveyed with echosounder techniques. © 2012 IEEE.
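Among the interpolators compared above, the weighted-average scheme is the simplest to sketch; a minimal inverse-distance-weighted (IDW) version in 2-D is shown below (ordinary kriging, the method the study recommends, additionally models spatial covariance through a fitted variogram, which is beyond this sketch):

```python
import math

def idw(x, y, samples, power=2):
    """Inverse-distance-weighted estimate at (x, y) from
    samples = [(xi, yi, value), ...]; returns the exact sample
    value when (x, y) coincides with a sample point."""
    num = den = 0.0
    for xi, yi, v in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v  # estimate at a sample point is the sample itself
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Two invented echosounder samples; the midpoint gets their average.
pts = [(0.0, 0.0, 1.0), (2.0, 0.0, 3.0)]
print(idw(1.0, 0.0, pts))  # 2.0
```

Unlike kriging, IDW weights depend only on distance, not on the spatial structure of the data, which is one reason kriging performed better in the study above.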
Abstract:
In soil surveys, several sampling systems can be used to define the most representative sites for sample collection and description of soil profiles. In recent years, the conditioned Latin hypercube sampling system has gained prominence for soil surveys. In Brazil, most soil maps are at small scales and in paper format, which hinders their refinement. The objectives of this work were: (i) to compare two sampling systems based on the conditioned Latin hypercube for mapping soil classes and soil properties; (ii) to retrieve information from a detailed-scale soil map of a pilot watershed for its refinement, comparing two data mining tools, and to validate the new soil map; and (iii) to create and validate a soil map of a much larger, similar area by extrapolating information extracted from the existing soil map. Two sampling designs were created: one by the conditioned Latin hypercube and one by the cost-constrained conditioned Latin hypercube. At each prospection site, soil classification and measurement of the A horizon thickness were performed. Maps were generated and validated for each sampling system, comparing the efficiency of these methods. The conditioned Latin hypercube captured greater variability of soils and properties than the cost-constrained version, although the former entailed greater difficulty in fieldwork. The conditioned Latin hypercube can capture greater soil variability, and the cost-constrained conditioned Latin hypercube presents great potential for use in soil surveys, especially in areas of difficult access. From an existing detailed-scale soil map of a pilot watershed, topographical information for each soil class was extracted from a Digital Elevation Model and its derivatives by two data mining tools. Maps were generated using each tool. The more accurate of these tools was used to extrapolate soil information to a much larger, similar area, and the generated map was validated.
It was possible to retrieve the existing soil map information and apply it to a larger area containing similar soil-forming factors, at much lower financial cost. The KnowledgeMiner data mining tool and ArcSIE, used to create the soil map, presented the better results and enabled the use of the existing soil map to extract soil information and apply it to similar, larger areas at reduced cost, which is especially important in developing countries with limited financial resources for such activities, such as Brazil.
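The core Latin hypercube idea underlying both sampling designs can be sketched briefly: each covariate's range is split into n equal strata, and each stratum is sampled exactly once per dimension. The sketch below is the unconditioned version; the conditioned variant used in the study additionally restricts draws to covariate combinations that actually occur in the landscape:

```python
import random

def latin_hypercube(n, dims, seed=42):
    """n points in [0, 1)^dims with exactly one point in each of the
    n equal-width strata of every dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)  # random stratum order for this dimension
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))  # n points, each of length dims

pts = latin_hypercube(5, 2)
```

Because every marginal stratum is hit exactly once, even a small sample spreads across the full range of each covariate, which is why the design captures more soil variability than simple random sampling of the same size.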
Abstract:
The SBBrasil 2010 Project (SBB10) was designed as a nationwide oral health epidemiological survey within a health surveillance strategy. This article discusses methodological aspects of the SBB10 Project that can potentially help expand and develop knowledge in the health field. This was a nationwide survey with stratified multi-stage cluster sampling. The sample domains were 27 State capitals and 150 rural municipalities (counties) from the country's five major geographic regions. The sampling units were census tracts and households for the State capitals, and municipalities, census tracts, and households for the rural areas. Thirty census tracts were selected in each State capital and 30 municipalities in the countryside. The precision calculations considered the demographic domains grouped by overall population density and the internal variability of the oral health indices. The study evaluated dental caries, periodontal disease, malocclusion, fluorosis, tooth loss, and dental trauma in five age groups (5, 12, 15-19, 35-44, and 65-74 years).
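Designs like the one above begin by allocating the overall sample across strata; a minimal proportional-allocation sketch with largest-remainder rounding (the stratum population sizes below are invented, not SBB10 figures):

```python
def proportional_allocation(total_n, stratum_sizes):
    """Allocate total_n sample units across strata in proportion to
    stratum population sizes, rounding by largest remainder so the
    allocations sum exactly to total_n."""
    pop = sum(stratum_sizes)
    exact = [total_n * s / pop for s in stratum_sizes]
    alloc = [int(e) for e in exact]
    # hand out the leftover units to the largest fractional parts
    leftovers = sorted(range(len(exact)),
                       key=lambda i: exact[i] - alloc[i], reverse=True)
    for i in leftovers[: total_n - sum(alloc)]:
        alloc[i] += 1
    return alloc

# e.g. 30 census tracts split across three invented strata
print(proportional_allocation(30, [12000, 8000, 5000]))  # [14, 10, 6]
```

Largest-remainder rounding is one common convention; real multi-stage designs such as SBB10 would layer probability-proportional-to-size selection of tracts and households on top of an allocation like this.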
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric galaxy catalogues, and we test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with masses above 10^14.2 solar masses with a low rate of false detections.
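The discard fraction mentioned above follows from the precision criterion |Δz| ≤ 0.05(1 + z); a minimal sketch with invented redshifts (here the photometric error is measured against a reference spectroscopic redshift):

```python
def discard_fraction(z_spec, z_phot, tol=0.05):
    """Fraction of the sample whose photometric redshift misses the
    reference (spectroscopic) value by more than tol * (1 + z_spec)."""
    bad = sum(abs(zp - zs) > tol * (1 + zs)
              for zs, zp in zip(z_spec, z_phot))
    return bad / len(z_spec)

# Invented sample: only the second object violates 0.05(1+z).
z_spec = [0.30, 0.50, 1.00, 3.00]
z_phot = [0.31, 0.62, 1.04, 2.90]
print(discard_fraction(z_spec, z_phot))  # 0.25
```

Note that the tolerance widens with redshift, so an absolute error of 0.1 is acceptable at z = 3 but fatal at z = 0.5, which is why u-band coverage that breaks the low/high redshift degeneracy matters so much.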
Abstract:
Exposure to farming environments has been shown to protect substantially against asthma and atopic disease across Europe and in other parts of the world. The GABRIEL Advanced Surveys (GABRIELA) were conducted to determine which factors in farming environments are fundamental to protecting against asthma and atopic disease. The GABRIEL Advanced Surveys have a multi-phase stratified design. In a first screening phase, a comprehensive population-based survey was conducted to assess the prevalence of exposure to farming environments and of asthma and atopic diseases (n = 103,219). The second phase was designed to ascertain detailed exposure to farming environments and to collect biomaterial and environmental samples in a stratified random sample of phase 1 participants (n = 15,255). A third phase was carried out in a further stratified sample only in Bavaria, southern Germany, aiming at in-depth respiratory disease and exposure assessment including extensive environmental sampling (n = 895). Participation rates in phase 1 were around 60%, but only about half of the participating study population consented to further study modules in phase 2. We found that consenting behaviour was related to familial allergies, high parental education, wheeze, doctor-diagnosed asthma and rhinoconjunctivitis, and to a lesser extent to exposure to farming environments. The association of exposure to farm environments with asthma or rhinoconjunctivitis was not biased by participation or consenting behaviour. The GABRIEL Advanced Surveys are among the largest studies to shed light on the protective 'farm effect' on asthma and atopic disease. Bias with regard to the main study question could be ruled out by representativeness and high participation rates in phases 2 and 3. The GABRIEL Advanced Surveys have created extensive collections of questionnaire data, biomaterial and environmental samples, promising new insights into this area of research.
Abstract:
Project MYTRI (Mobilizing Youth for Tobacco-Related Initiatives in India) was a large 2-year randomized school-based trial with the goal of reducing and preventing tobacco use among students in 6th and 8th grades in Delhi and Chennai, India (n=32 schools). Baseline analyses in 2004 showed that 6th grade students reported more tobacco use than 8th grade students, the opposite of what is typically observed in developed countries like the US. The present study aims to examine differences in tobacco use and psychosocial risk factors between the 6th grade cohort and the 8th grade cohort in a compliant sub-sample of control students who were present at all 3 surveys from 2004-06. In both 2004 and 2005, the 6th grade cohort reported significantly greater prevalence of ever use of all tobacco products (cigarettes, bidis, chewing tobacco, any tobacco). These significant differences in ever use of any tobacco between cohorts were maintained by gender, city, and socioeconomic status. The 6th grade cohort also reported significantly greater prevalence of current use of tobacco products (cigarettes, chewing tobacco, any tobacco) in 2004. Similar findings were observed for psychosocial risk factors for tobacco use, where the 6th grade cohort scored higher risk than the 8th grade cohort on scales for intentions to smoke or chew tobacco and susceptibility to smoke or chew tobacco in 2004 and 2005, and for knowledge of the health effects of tobacco in all three years. The evidence of early initiation of tobacco use in our 6th grade cohort in India indicates the need to target prevention programs and other tobacco control measures at a younger age in this setting. With increasing proportions of total deaths and lost DALYs in India attributable to chronic diseases, addressing tobacco use among younger cohorts is even more critical. The increase in tobacco use among youth is a cause for concern with respect to the future burden of chronic disease and tobacco-related mortality in many developing countries.
Similarly, epidemiological studies that aim to predict future death and disease burden due to tobacco should address the early age at initiation and increasing prevalence rates among younger populations.
Abstract:
The measurements were obtained during two North Sea-wide star-shaped cruises in summer 1986 and winter 1987, which were performed to investigate circulation-induced transport and biologically induced pollutant transfer within the interdisciplinary research project "ZISCH - Zirkulation und Schadstoffumsatz in der Nordsee / Circulation and Contaminant Fluxes in the North Sea (1984-1989)". The inventory presents parameters measured on hydrodynamics, nutrient dynamics, ecosystem dynamics, and pollutant dynamics in the pelagic and benthic realms. The research program had the objective of quantifying the fluxes of major budgets, especially of contaminants, in the North Sea. In spring 1986, following the phytoplankton spring bloom, and in late winter 1987, at minimum primary production activity, the North Sea ecosystem was investigated on a station net covering the whole North Sea. The station net was shaped like a star. Sampling started in the centre, followed by the northwest section, and moved counter-clockwise around the North Sea following the residual currents. By this strategy, a time series was measured in the central North Sea and more synoptic data sets were obtained in the individual sections. Generally, advection processes have to be considered when comparing the data from different stations. The entire sampling period lasted more than six weeks on each cruise. Thus, a time lag should be considered, especially when comparing data from the eastern and western parts of the central and northern North Sea, where samples were taken at the beginning and at the end of the campaign. The ZISCH investigations represented a qualitatively and quantitatively new approach to North Sea research in several respects.
(1) The first simultaneous blanket coverage of all important biological, chemical and physical parameters in the entire North Sea ecosystem; (2) the first simultaneous measurements of major contaminants (metals and organohalogen compounds) in the different ecosystem compartments; (3) simultaneous determinations of atmospheric inputs of momentum, energy and matter as important ecosystem boundary conditions; (4) performance of the complex measurement program during two seasons, namely the spring plankton bloom and the subsequent winter period of minimal biological activity; and (5) support of data analysis and interpretation by oceanographic and meteorological numerical models on the same scales.