Abstract:
Anticancer drugs are typically administered in the clinic in the form of mixtures, sometimes called combinations. Only in rare cases, however, are mixtures approved as drugs. Rather, research on mixtures tends to occur after single drugs have been approved. The goal of this research project was to develop modeling approaches that would encourage rational preclinical mixture design. To this end, a series of models were developed. First, several QSAR classification models were constructed to predict the cytotoxicity, oral clearance, and acute systemic toxicity of drugs. The QSAR models were applied to a set of over 115,000 natural compounds in order to identify promising ones for testing in mixtures. Second, an improved method was developed to assess synergistic, antagonistic, and additive effects between drugs in a mixture. This method, dubbed the MixLow method, is similar to the Median-Effect method, the de facto standard for assessing drug interactions. The primary difference between the two is that the MixLow method uses a nonlinear mixed-effects model to estimate parameters of concentration-effect curves, rather than an ordinary least squares procedure. Parameter estimators produced by the MixLow method were more precise than those produced by the Median-Effect method, and coverage of Loewe index confidence intervals was superior. Third, a model was developed to predict drug interactions based on scores obtained from virtual docking experiments. This represents a novel approach for modeling drug mixtures and was more useful for the data modeled here than competing approaches. The model was applied to cytotoxicity data for 45 mixtures, each composed of up to 10 selected drugs. One drug, doxorubicin, was a standard chemotherapy agent and the others were well-known natural compounds including curcumin, EGCG, quercetin, and rhein. Predictions of synergism/antagonism were made for all possible fixed-ratio mixtures, cytotoxicities of the 10 best-scoring mixtures were tested, and drug interactions were assessed. Predicted and observed responses were highly correlated (r² = 0.83). Results suggested that some mixtures allowed up to an 11-fold reduction of doxorubicin concentrations without sacrificing efficacy. Taken together, the models developed in this project present a general approach to rational design of mixtures during preclinical drug development.
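As background for the interaction assessment, the sketch below fits median-effect (Hill-type) curves and computes a Loewe interaction index. It uses ordinary least squares rather than the MixLow nonlinear mixed-effects estimation, and all data and parameter values are hypothetical.

```python
# Minimal sketch (not the MixLow implementation): fit median-effect curves to
# single-drug data by ordinary least squares and compute a Loewe interaction
# index for a fixed-ratio mixture. Data and starting values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def fraction_affected(conc, dm, m):
    """Median-effect model: fa = 1 / (1 + (Dm / D)^m)."""
    return 1.0 / (1.0 + (dm / conc) ** m)

def dose_for_effect(fa, dm, m):
    """Invert the median-effect model: dose producing fraction affected fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-drug concentration-effect data
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0])
fa_drug1 = np.array([0.05, 0.15, 0.45, 0.75, 0.95])
fa_drug2 = np.array([0.10, 0.25, 0.55, 0.80, 0.97])

(dm1, m1), _ = curve_fit(fraction_affected, conc, fa_drug1, p0=[1.0, 1.0])
(dm2, m2), _ = curve_fit(fraction_affected, conc, fa_drug2, p0=[1.0, 1.0])

# Loewe index for a mixture with doses d1 and d2 that jointly produce effect
# fa_mix: I < 1 synergy, I = 1 additivity, I > 1 antagonism.
d1, d2, fa_mix = 0.4, 0.4, 0.5
loewe_index = d1 / dose_for_effect(fa_mix, dm1, m1) + d2 / dose_for_effect(fa_mix, dm2, m2)
print(f"Loewe interaction index: {loewe_index:.2f}")
```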
Abstract:
The purpose of this study is to examine the stages of program realization of the interventions that the Bronx Health REACH program initiated at various levels to improve nutrition as a means for reducing racial and ethnic disparities in diabetes. This study was based on secondary analyses of qualitative data collected through the Bronx Health REACH Nutrition Project, a project conducted under the auspices of the Institute on Urban Family Health, with support from the Centers for Disease Control and Prevention (CDC). Local human subjects' review and approval through the Institute on Urban Family Health was required and obtained in order to conduct the Bronx Health REACH Nutrition Project.

The study drew from two theoretical models: Glanz and colleagues' nutrition environments model and Shediac-Rizkallah and Bone's sustainability model. The specific study objectives were two-fold: (1) to categorize each nutrition activity to a specific dimension (i.e. consumer, organizational or community nutrition environment); and (2) to evaluate the stage at which the program has been realized (i.e. development, implementation or sustainability).

A case study approach was applied and a constant comparative method was used to analyze the data. Triangulation of the data was also conducted. Qualitative data from this study revealed the following principal findings: (1) communities of color are disproportionately experiencing numerous individual and environmental factors contributing to the disparities in diabetes; (2) multi-level strategies that target the individual, organizational and community nutrition environments can appropriately address these contributing factors; (3) the nutrition strategies greatly varied in their ability to appropriately meet criteria for the three program stages; and (4) those nutrition strategies most likely to succeed (a) conveyed consistent and culturally relevant messages, (b) had continued involvement from program staff and partners, (c) were able to adapt over time or setting, (d) had a program champion and a training component, (e) were integrated into partnering organizations, and (f) were perceived to be successful by program staff and partners in their efforts to create individual, organizational and community/policy change. As a result of the criteria-based assessment and qualitative findings, an ecological framework elaborating on Glanz and colleagues' model was developed. The qualitative findings and the resulting ecological framework developed from this study will help public health professionals and community leaders to develop and implement sustainable multi-level nutrition strategies for addressing racial and ethnic disparities in diabetes.
Abstract:
Purpose. Fluorophotometry is a well validated method for assessing corneal permeability in human subjects. However, with the growing importance of basic science animal research in ophthalmology, fluorophotometry's use in animals must be further evaluated. The purpose of this study was to evaluate corneal epithelial permeability following desiccating stress using the modified Fluorotron Master™.

Methods. Corneal permeability was evaluated prior to and after subjecting 6-8 week old C57BL/6 mice to experimental dry eye (EDE) for 2 and 5 days (n=9/time point). Untreated mice served as controls. Ten microliters of 0.001% sodium fluorescein (NaF) were instilled topically into each mouse's left eye to create an eye bath and left to permeate for 3 minutes. The eye bath was followed by a generous wash with Buffered Saline Solution (BSS) and alignment with the Fluorotron Master™. Seven corneal scans using the Fluorotron Master were performed over 15 minutes (1st post-wash scans), followed by a second wash using BSS and another set of five corneal scans (2nd post-wash scans) over the next 15 minutes. Corneal permeability was calculated from data obtained with the FM™ Mouse software.

Results. Using a repeated-measures design to compare the post-wash #1 and post-wash #2 scans within each group, there was a statistically significant difference in corneal fluorescein permeability in the post-wash #1 scans after 5 days of EDE compared to untreated mice (1160.21±108.26 vs. 1000.47±75.56 ng/mL, P<0.016 for the UT vs. 5-day comparison [0.008]), but not after only 2 days of EDE (1115.64±118.94 vs. 1000.47±75.56 ng/mL, P>0.016 for the UT vs. 2-day comparison [0.050]). There was no statistically significant difference between the 2-day and 5-day post-wash #1 scans (P=0.299). The post-wash #2 scans demonstrated that EDE caused significant NaF retention at both 2 and 5 days compared to baseline, untreated controls (1017.92±116.25 and 1015.40±120.68 vs. 528.22±127.85 ng/mL, P<0.05 [0.0001 for both]). There was no statistically significant difference between the 2-day and 5-day post-wash #2 scans (P=0.503). A paired t-test comparing the untreated post-wash #1 and post-wash #2 scans showed a significant difference between the two sets of scans (P<0.001). The corresponding 2-day and 5-day comparisons were also significant (P = 0.010 and 0.002, respectively).

Conclusion. Desiccating stress increases permeability of the corneal epithelium to NaF and increases NaF retention in the corneal stroma. The Fluorotron Master is a useful and sensitive tool for evaluating corneal permeability in murine dry eye, and will be useful for evaluating the effectiveness of dry eye treatments in animal-model drug trials.
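The 0.016 threshold quoted above corresponds to a Bonferroni adjustment of α = 0.05 over three planned comparisons; the sketch below illustrates that adjustment together with a paired t-test of post-wash #1 versus post-wash #2 scans from the same eyes. The readings are fabricated placeholders, not study data.

```python
# Illustrative only: Bonferroni-adjusted alpha for three planned comparisons
# (0.05 / 3 ≈ 0.016) and a paired t-test of post-wash #1 vs post-wash #2 scans
# from the same untreated eyes. The readings below are fabricated placeholders.
import numpy as np
from scipy import stats

postwash1 = np.array([1005., 990., 1012., 998., 1001., 985., 1020., 995., 999.])  # ng/mL
postwash2 = np.array([530., 515., 545., 520., 535., 510., 560., 525., 512.])      # ng/mL

alpha_adjusted = 0.05 / 3        # three planned group comparisons -> ~0.016

t_stat, p_value = stats.ttest_rel(postwash1, postwash2)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, "
      f"significant at alpha = {alpha_adjusted:.3f}: {p_value < alpha_adjusted}")
```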
Abstract:
In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences of genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be effectively dealt with through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects of several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset among big data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve the goal. First, we selected 1000 SNPs through an effective filter method, and then we performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to examine the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) computed through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of small subsets (one SNP, two SNPs, or three-SNP subsets based on the best 100 composite two-SNP combinations) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of the SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, due to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion for evaluating the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that sIB, a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

The best test HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be not less than 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study using the chi-square test shows that no significant SNPs are detected at the cut-off level of 9.09451E-08 in the Framingham Heart Study of CVD. The WTCCC study results detect only two significant SNPs associated with CAD. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease by chi-square test at the cut-off value of 1.11E-07.

Although our classification methods can achieve high accuracy in the study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which is currently available for our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
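A minimal sketch of the HMSS-based filtering step is given below; it is not the authors' pipeline, and the genotype matrix, scikit-learn usage and cut-off are illustrative assumptions.

```python
# Minimal sketch of the filtering idea described above (not the authors' code):
# rank each SNP by the harmonic mean of sensitivity and specificity (HMSS) of a
# single-feature LDA classifier, then keep the top-scoring SNPs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

def hmss(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

rng = np.random.default_rng(0)
n_subjects, n_snps = 200, 500                       # hypothetical genotype counts (0/1/2)
X = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)
y = rng.integers(0, 2, size=n_subjects)             # disease status

scores = []
for j in range(n_snps):
    lda = LinearDiscriminantAnalysis()
    y_pred = lda.fit(X[:, [j]], y).predict(X[:, [j]])
    scores.append(hmss(y, y_pred))

top_k = np.argsort(scores)[::-1][:100]              # indices of the 100 best-scoring SNPs
print("best single-SNP HMSS:", max(scores))
```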
Abstract:
Background: Research into methods for recovery from exercise-induced fatigue is a popular topic in sports medicine, kinesiology and physical therapy. However, both the quantity and quality of studies are limited, and a clear solution for recovery is lacking. An analysis of the statistical methods in the existing literature on performance recovery can enhance the quality of research and provide guidance for future studies. Methods: A literature review was performed using the SCOPUS, SPORTDiscus, MEDLINE, CINAHL, Cochrane Library and Science Citation Index Expanded databases to extract studies related to human performance recovery from exercise. Original studies and their statistical analyses for recovery methods, including Active Recovery, Cryotherapy/Contrast Therapy, Massage Therapy, Diet/Ergogenics, and Rehydration, were examined. Results: The review produced a Research Design and Statistical Method Analysis Summary. Conclusion: Research design and statistical methods can be improved by using the guidance from the Research Design and Statistical Method Analysis Summary. This summary table lists potential issues and suggested solutions, such as sample size calculation, consideration of sport-specific and research design issues, selection of populations and measurement markers, statistical methods for different analytical requirements, equality of variance and normality of data, post hoc analyses, and effect size calculation.
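Two items from that summary table, effect size and sample size calculation, can be illustrated briefly; the sprint-time values and the use of statsmodels' power module are assumptions, not data from the reviewed studies.

```python
# Hedged illustration of effect size (Cohen's d) and the sample size needed to
# detect it. All numbers below are hypothetical placeholders.
import numpy as np
from statsmodels.stats.power import TTestIndPower

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((na - 1) * np.var(group_a, ddof=1) +
                         (nb - 1) * np.var(group_b, ddof=1)) / (na + nb - 2))
    return (np.mean(group_a) - np.mean(group_b)) / pooled_sd

# Hypothetical sprint times (s) after two recovery protocols
active = np.array([12.1, 12.4, 11.9, 12.3, 12.0, 12.2])
passive = np.array([12.5, 12.8, 12.3, 12.9, 12.6, 12.2])

d = cohens_d(active, passive)
n_per_group = TTestIndPower().solve_power(effect_size=abs(d), alpha=0.05, power=0.8)
print(f"Cohen's d = {d:.2f}, required n per group ≈ {np.ceil(n_per_group):.0f}")
```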
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision.

Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships.

Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for the efficacious combinations to be investigated further.

Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether an agent is promising enough to be sent to phase III trials. Interim monitoring is employed to stop the trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate and different true response rates.
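To make the monitoring idea concrete, the following is a minimal beta-binomial futility-stopping sketch. It is not the proposed design, which additionally models late-onset responses with a piecewise exponential model and multiple imputation; the prior, target rate and cut-off are assumed for illustration.

```python
# Minimal sketch of Bayesian continuous futility monitoring for a single-arm
# phase II trial with a beta-binomial model. Thresholds and priors are assumed.
from scipy import stats

def stop_for_futility(n_responses, n_evaluable, p0=0.20,
                      prior_a=0.5, prior_b=0.5, futility_cut=0.05):
    """Stop if Pr(response rate > p0 | data) falls below futility_cut."""
    posterior = stats.beta(prior_a + n_responses,
                           prior_b + n_evaluable - n_responses)
    prob_promising = 1.0 - posterior.cdf(p0)
    return prob_promising < futility_cut, prob_promising

# Example: 2 responses among the first 20 evaluable patients
stop, prob = stop_for_futility(2, 20)
print(f"Pr(rate > 0.20 | data) = {prob:.3f}, stop early: {stop}")
```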
Abstract:
This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as the sources of utilization data. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescriptions.

The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost.

Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (−$79 million) and the exclusion of admissions attributed to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM. The latter suggests NHDS errors in the attribution of an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates, 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million. Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight.

Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance is $178 million, composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million. Both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation.

Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is a superior method for identifying resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited due to the small number of patients that are followed.
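The dollar decomposition above can be cross-checked directly; the short script below simply reproduces the arithmetic reported in the abstract (all figures in millions of 1995 dollars).

```python
# Arithmetic cross-check of the decomposition reported above (figures in
# millions of 1995 dollars, taken directly from the abstract).
pbsm_total, pbmcbm_total = 722, 1058

utilization_gap = 129 + 7     # inpatient (100 hospital + 29 physician) + ambulatory
unit_cost_gap = 22 + 178      # inpatient (19 hospital/day + 3 physician) + ambulatory (97 + 81)

assert pbmcbm_total - pbsm_total == utilization_gap + unit_cost_gap == 336
print(f"total gap: ${pbmcbm_total - pbsm_total}M = "
      f"${utilization_gap}M utilization + ${unit_cost_gap}M unit cost")
```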
Abstract:
The normal boiling point is a fundamental thermo-physical property, which is important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds with no experimental data available. In this work, an improved second-order group contribution method for determining the normal boiling point of organic compounds was developed using experimental data for 632 organic compounds; it is based on the Joback first-order functional groups, with some modifications and additional functional groups. The method can distinguish most structural isomers and stereoisomers of organic compounds, including structural, cis- and trans-isomers. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine and bromine atoms. The fminsearch mathematical approach from the MATLAB software was used to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model. This is a direct search method that uses the simplex search method of Lagarias et al. The results of the new method are compared to several currently used methods and are shown to be far more accurate and reliable. The average absolute deviation of the normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, which is of adequate accuracy for many practical applications.
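As an illustration of the group-contribution approach, the sketch below fits a tiny Joback-style first-order model to a handful of literature boiling points with a Nelder-Mead simplex search, the same family of algorithm as MATLAB's fminsearch. The group set, data and functional form are simplified assumptions, not the 65-group second-order model developed in this work.

```python
# Illustrative sketch: first-order group-contribution fit for normal boiling
# point, Tb = Tb0 + sum(n_i * dT_i), optimised with a Nelder-Mead simplex.
import numpy as np
from scipy.optimize import minimize

# Rows: molecules; columns: counts of functional groups (-CH3, -CH2-, -OH)
group_counts = np.array([
    [2, 2, 0],   # n-butane
    [2, 3, 0],   # n-pentane
    [1, 1, 1],   # ethanol
    [1, 2, 1],   # 1-propanol
], dtype=float)
tb_exp = np.array([272.7, 309.2, 351.4, 370.3])   # experimental Tb in K (approximate)

def tb_model(contribs, counts, tb0=198.0):
    # Joback-style first-order form with the usual 198 K base term
    return tb0 + counts @ contribs

def objective(contribs):
    return np.sum((tb_model(contribs, group_counts) - tb_exp) ** 2)

result = minimize(objective, x0=np.full(3, 20.0), method="Nelder-Mead")
aad = np.mean(np.abs(tb_model(result.x, group_counts) - tb_exp))
print("fitted group contributions (K):", np.round(result.x, 2))
print(f"average absolute deviation: {aad:.2f} K")
```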
Abstract:
This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and to subsequently calculate long-term throw-rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system, including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods. The analysis was done in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. Based on this, the mean value of height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into sections above, below and within the range of the mean (within one standard deviation), and the modifications of vertical displacement can be quantified. Therefore, the fault segment can be subdivided into areas which are influenced by external modification, such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method. This method is used to determine how much influence the external processes have had on the vertical displacement. By combining morphological and statistical results, the fault can be subdivided into areas with high external influences and areas with authentic fault scarps, which have little or no external influence. This subdivision is necessary for long-term throw-rate calculations, because without this differentiation the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment, since fault slip rate data govern the earthquake recurrence. Furthermore, using this workflow, areas with minimal external influences can be determined, not only for throw-rate calculations but also for determining sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study include: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influence, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in the range of ±6% (this varies from 7 to 12% depending on the fault); iv) the long-term throw-rates (since 13 ka) calculated for four scarps in Crete using the authentic vertical displacement are 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
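The classification of scarp height against the mean (within, above or below one standard deviation) and the empirical semi-variance can be sketched as follows; the along-strike profile is synthetic and only stands in for the metre-spaced t-LiDAR extractions.

```python
# Sketch of the classification and variogram steps described above, assuming
# vertical displacement sampled every metre along strike. Synthetic profile.
import numpy as np

rng = np.random.default_rng(1)
distance = np.arange(0, 500)                       # metres along strike
displacement = 8.0 + 0.6 * np.sin(distance / 40.0) + rng.normal(0, 0.3, distance.size)

mean, sd = displacement.mean(), displacement.std(ddof=1)
category = np.where(displacement > mean + sd, "above",
            np.where(displacement < mean - sd, "below", "within"))
print({c: int((category == c).sum()) for c in ("above", "within", "below")})

def empirical_semivariogram(values, max_lag=100):
    """gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] for 1 m spaced samples."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])
    return lags, gamma

lags, gamma = empirical_semivariogram(displacement)
print("semivariance at 10 m and 50 m lag:", round(gamma[9], 3), round(gamma[49], 3))
```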
Abstract:
A morphometric analysis was performed for the late Middle Miocene bivalve species lineage of Polititapes tricuspis (Eichwald, 1829) (Veneridae: Tapetini). Specimens from various localities, grouped into two stratigraphically successive biozones, i.e. the upper Ervilia Zone and the Sarmatimactra Zone, were investigated using a multi-method approach. A Generalized Procrustes Analysis was computed for fifteen landmarks covering characteristics of the hinge, muscle scars and pallial line. The shell outline was separately quantified by applying the Fast Fourier Transform, which redraws the outline by fitting a combination of trigonometric curves. Shell size was calculated as centroid size from the landmark configuration. Shell thickness, which is not captured by either analysis, was additionally measured at the centroid. The analyses showed significant phenotypic differentiation between specimens from the two biozones. The bivalves become distinctly larger and thicker over geological time and develop circular shells with stronger cardinal teeth and a deeper pallial sinus. Data on the paleoenvironmental changes in the late Middle Miocene Central Paratethys Sea suggest that the phenotypic shifts are functional adaptations. The typical habitats for Polititapes changed to extensive, very shallow shores exposed to high wave action and tidal activity. Driven by the growing need for higher mechanical stability, the bivalves produced larger and thicker shells with stronger cardinal teeth. The latter are additionally shifted towards the hinge center to compensate for the lack of lateral teeth and improve stability. The deepening pallial sinus is related to a deeper burrowing habit, which is considered to impede being washed out in the new high-energy settings.
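Two of the quantities used above, centroid size and a Procrustes superimposition, can be illustrated with a short sketch. Note that scipy's procrustes aligns only two configurations; a full Generalized Procrustes Analysis iterates the superimposition over all specimens. The landmark coordinates are fabricated.

```python
# Simplified sketch: centroid size of a landmark configuration and a pairwise
# Procrustes superimposition (stand-in for the full GPA over all specimens).
import numpy as np
from scipy.spatial import procrustes

def centroid_size(landmarks):
    """Square root of the summed squared distances of landmarks to their centroid."""
    centered = landmarks - landmarks.mean(axis=0)
    return np.sqrt(np.sum(centered ** 2))

# Two hypothetical 15-landmark (x, y) configurations
rng = np.random.default_rng(2)
shape_a = rng.normal(size=(15, 2))
shape_b = 1.3 * shape_a + rng.normal(scale=0.05, size=(15, 2))   # larger, noisy copy

print("centroid sizes:", round(centroid_size(shape_a), 3), round(centroid_size(shape_b), 3))

# Procrustes removes translation, scale and rotation before comparing shapes
mtx1, mtx2, disparity = procrustes(shape_a, shape_b)
print("Procrustes disparity (residual shape difference):", round(disparity, 5))
```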
Abstract:
Ice shelves strongly impact coastal Antarctic sea ice and the associated ecosystem through the formation of a sub-sea-ice platelet layer. Although progress has been made in determining and understanding its spatio-temporal variability based on point measurements, an investigation of this phenomenon on a larger scale remains a challenge due to logistical constraints and a lack of suitable methodology. In this study, we applied a laterally constrained Marquardt-Levenberg inversion to a unique multi-frequency electromagnetic (EM) induction sounding dataset obtained on the landfast sea ice of Atka Bay, eastern Weddell Sea, in 2012. In addition to consistent fast-ice thicknesses and conductivities along >100 km of transects, we present the first comprehensive, high-resolution platelet-layer thickness and conductivity dataset recorded on Antarctic sea ice. The reliability of the algorithm was confirmed using synthetic data, and the inverted platelet-layer thicknesses agreed with drill-hole measurements within the data uncertainty. Ice-volume fractions were calculated from platelet-layer conductivities, revealing that an older and thicker platelet layer is denser and more compacted than a loosely attached, young platelet layer. The overall platelet-layer volume below Atka Bay fast ice suggests that the contribution of ocean/ice-shelf interaction to sea-ice volume in this region is even higher than previously thought. This study also implies that multi-frequency EM induction sounding is an effective approach for determining platelet-layer volume on a larger scale than previously feasible. When applied to airborne multi-frequency EM, this method could provide a step towards an Antarctic-wide quantification of ocean/ice-shelf interaction.
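One common way to convert a bulk platelet-layer conductivity into an ice-volume fraction is Archie's law; whether this is the exact relation used in the study is an assumption, and the conductivities below are purely illustrative.

```python
# Hedged sketch: Archie's law, sigma_bulk = sigma_seawater * phi**m, where phi
# is the seawater-filled porosity. Parameters and inputs are illustrative only.
def ice_volume_fraction(sigma_bulk, sigma_seawater=2.7, m=2.0):
    """Return 1 - porosity from Archie's law (conductivities in S/m)."""
    porosity = (sigma_bulk / sigma_seawater) ** (1.0 / m)
    return 1.0 - porosity

for sigma in (1.2, 1.8, 2.4):   # denser (older) to looser (younger) platelet layers
    print(f"sigma = {sigma} S/m -> ice volume fraction ≈ {ice_volume_fraction(sigma):.2f}")
```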
Abstract:
George V Land (Antarctica) includes the boundary between Late Archean-Paleoproterozoic metamorphic terrains of the East Antarctic craton and the intrusive and metasedimentary rocks of the Early Paleozoic Ross-Delamerian Orogen. This therefore represents a key region for understanding the tectono-metamorphic evolution of the East Antarctic Craton and the Ross Orogen and for defining their structural relationship in East Antarctica, with potential implications for Gondwana reconstructions. In the East Antarctic Craton the outcrops closest to the Ross orogenic belt form the Mertz Shear Zone, a prominent ductile shear zone up to 5 km wide. Its deformation fabric includes a series of progressive, overprinting shear structures developed under different metamorphic conditions: from an early medium-P granulite-facies metamorphism, through amphibolite-facies to late greenschist-facies conditions. 40Ar-39Ar laserprobe data on biotite in mylonitic rocks from the Mertz Shear Zone indicate that the minimum age for ductile deformation under greenschist-facies conditions is 1502 ± 9 Ma and reveal no evidence of reactivation processes linked to the Ross Orogeny. 40Ar-39Ar laserprobe data on amphibole, although plagued by excess argon, suggest the presence of a ~1.7 Ga old phase of regional-scale retrogression under amphibolite-facies conditions. Results support the correlation between the East Antarctic Craton in the Mertz Glacier area and the Sleaford Complex of the Gawler Craton in southern Australia, and suggest that the Mertz Shear Zone may be considered a correlative of the Kalinjala Shear Zone. An erratic immature metasandstone collected east of Ninnis Glacier (~180 km east of the Mertz Glacier) and petrographically similar to metasedimentary rocks enclosed as xenoliths in Cambro-Ordovician granites cropping out along the western side of Ninnis Glacier, yielded detrital white-mica 40Ar-39Ar ages from ~530 to 640 Ma and a minimum age of 518 ± 5 Ma. This pattern compares remarkably well with those previously obtained for the Kanmantoo Group from the Adelaide Rift Complex of southern Australia, thereby suggesting that the segment of the Ross Orogen exposed east of the Mertz Glacier may represent a continuation of the eastern part of the Delamerian Orogen.
Abstract:
In 2014, UniDive (The University of Queensland Underwater Club) conducted an ecological assessment of the Point Lookout dive sites for comparison with similar surveys conducted in 2001. Involvement in the project was voluntary. Members of UniDive who were marine experts conducted training for other club members who had no, or limited, experience in identifying marine organisms and mapping habitats. Since the 2001 detailed baseline study, no similar seasonal survey has been conducted. The 2014 data are particularly important given that numerous changes have taken place in relation to the management of, and potential impacts on, these reef sites. In 2009, Moreton Bay Marine Park was re-zoned, and Flat Rock was converted to a marine national park zone (Green zone) with no fishing or anchoring. In 2012, four permanent moorings were installed at Flat Rock. Additionally, the entire area was exposed to the potential effects of the 2011 and 2013 Queensland floods, including flood plumes which carried large quantities of sediment into Moreton Bay and surrounding waters. The population of South East Queensland increased from 2.49 million in 2001 to 3.18 million in 2011 (BITRE, 2013). This rapidly expanding coastal population has increased the frequency and intensity of both commercial and recreational activities around the Point Lookout dive sites (EPA 2008).

The methodology used for the PLEA project was based on the 2001 survey protocols, Reef Check Australia protocols and Coral Watch methods. This hybrid methodology was used to monitor substrate and benthos, invertebrates, fish, and reef health impacts. Additional analyses were conducted with georeferenced photo transects. The PLEA marine surveys were conducted over six weekends in 2014, totaling 535 dives and 376 hours underwater. Two training weekends (February and March) were attended by 44 divers, whilst biological surveys were conducted on seasonal weekends (February, May, July and October). Three reefs were surveyed, with two semi-permanent transects at Flat Rock, two at Shag Rock, and one at Manta Ray Bommie. Each transect was sampled once every survey weekend, with the transect tapes deployed at a depth of 10 m below chart datum.

Fish populations were assessed using a visual census along 3 x 20 m transects. Each transect was 5 m wide (2.5 m either side of the transect tape), 5 m high and 20 m in length. Fish families and species were chosen that are commonly targeted by recreational or commercial fishers, or by aquarium collectors, and that are easily identified by their body shape. Rare or otherwise unusual species were also recorded. Target invertebrate populations were assessed using a visual census along 3 x 20 m transects. Each transect was 5 m wide (2.5 m either side of the transect tape) and 20 m in length. The diver surveying invertebrates conducted a 'U-shaped' search pattern, covering 2.5 m on either side of the transect tape. Target impacts were assessed using a visual census along the 3 x 20 m transects. Each transect was 5 m wide (2.5 m either side of the transect tape) and 20 m in length. The transect was surveyed via a 'U-shaped' search pattern, covering 2.5 m on either side of the transect tape.

Substrate surveys were conducted using the point sampling method, enabling percentage cover of substrate types and benthic organisms to be calculated. The substrate or benthos under the transect line was identified at 0.5 m intervals, with a 5 m gap between each of the three 20 m segments. Categories recorded included various growth forms of hard and soft coral, key species/growth forms of algae, other living organisms (e.g. sponges), recently killed coral, and non-living substrate types (e.g. bare rock, sand, rubble, silt/clay).
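The percentage-cover calculation from the point sampling method reduces to counting category codes over the 0.5 m interval points, as in the sketch below (category codes and counts are invented for illustration).

```python
# Minimal sketch of the point-sampling calculation: percentage cover is the
# share of 0.5 m interval points assigned to each substrate/benthos category.
from collections import Counter

# One 20 m segment sampled every 0.5 m = 40 points; hypothetical codes:
# HC = hard coral, SC = soft coral, MA = macroalgae, RC = rock, SD = sand, RB = rubble
points = ["HC"] * 12 + ["SC"] * 4 + ["MA"] * 6 + ["RC"] * 10 + ["SD"] * 6 + ["RB"] * 2

counts = Counter(points)
total = len(points)
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.1f}% cover")
```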
Abstract:
Working with subsistence whale hunters, we tagged 19 mostly immature bowhead whales (Balaena mysticetus) with satellite-linked transmitters between May 2006 and September 2008 and documented their movements in the Chukchi Sea from late August through December. From Point Barrow, Alaska, most whales moved west through the Chukchi Sea between 71° and 74° N latitude; nine whales crossed in six to nine days. Three whales returned to Point Barrow for 13 to 33 days, two after traveling 300 km west and one after traveling ~725 km west to Wrangel Island, Russia; two then crossed the Chukchi Sea again while the other was the only whale to travel south along the Alaskan side of the Chukchi Sea. Seven whales spent from one to 21 days near Wrangel Island before moving south to northern Chukotka. Whales spent an average of 59 days following the Chukotka coast southeastward. Kernel density analysis identified Point Barrow, Wrangel Island, and the northern coast of Chukotka as areas of greater use by bowhead whales that might be important for feeding. All whales traveled through a potential petroleum development area at least once. Most whales crossed the development area in less than a week; however, one whale remained there for 30 days.
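A kernel density estimate over tracked positions, of the kind used above to highlight areas of greater use, can be sketched as follows; the coordinates are fabricated clusters near Point Barrow and Wrangel Island, not the satellite tracking data.

```python
# Hedged sketch of a kernel density estimate over tracked positions.
# Coordinates are fabricated; a real analysis would use projected positions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Hypothetical longitude/latitude clusters near Point Barrow and Wrangel Island
lon = np.concatenate([rng.normal(-156.5, 0.5, 200), rng.normal(-179.5, 0.5, 150)])
lat = np.concatenate([rng.normal(71.3, 0.2, 200), rng.normal(71.0, 0.2, 150)])

kde = gaussian_kde(np.vstack([lon, lat]))
grid_lon, grid_lat = np.meshgrid(np.linspace(-180, -150, 120), np.linspace(69, 75, 60))
density = kde(np.vstack([grid_lon.ravel(), grid_lat.ravel()])).reshape(grid_lon.shape)
idx = density.argmax()
print("highest-use grid cell (lon, lat):", grid_lon.ravel()[idx], grid_lat.ravel()[idx])
```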
Abstract:
Glacier thickness is an important factor in the course of glacier retreat in a warming climate. This dataset presents the results (point data) of GPR surveys on 66 Austrian mountain glaciers carried out between 1995 and 2014. The glacier areas range from 0.001 to 18.4 km**2, and their ice thickness has been surveyed with an average density of 36 points/km**2. The glacier areas and surface elevations refer to the second Austrian glacier inventory (mapped between 1996 and 2002). According to the glacier state recorded in the second glacier inventory, the 64 glaciers cover an area of 223.3±3.6 km**2. Maps of glacier thickness have been calculated by Fischer and Kuhn (2013) with a mean thickness of 50±3 m and contain a glacier volume of 11.9±1.1 km**3. The mean maximum ice thickness is 119±5 m. The ice thickness measurements have been carried out with the transmitter of Narod and Clarke (1994) combined with resistively loaded dipole antennas (Wu and King, 1965; Rose and Vickers, 1974) at centre frequencies of 6.5 MHz (30 m antenna length) and 4.0 MHz (50 m antenna length). The signal was recorded trace by trace with an oscilloscope. The signal velocity in ice is assumed to be 168 m/µs, as used by Haeberli et al. (1982), Bauder (2001), and Narod and Clarke (1994); the signal velocity in air is assumed to be 300 m/µs. Details on the method can be found in Fischer and Kuhn (2013), as well as Span et al. (2005) and Fischer et al. (2007).
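The basic conversion behind such GPR soundings, from the two-way travel time of the bed reflection to ice thickness using the 168 m/µs velocity quoted above, can be sketched as follows; the travel times and the simple offset correction shown are illustrative, not part of the published processing.

```python
# Sketch: ice thickness from the two-way travel time of the bed reflection,
# using the radio-wave velocity in ice quoted above. Travel times are invented.
V_ICE = 168.0   # m/µs, radio-wave velocity in ice

def ice_thickness(two_way_time_us, antenna_separation_m=30.0):
    """Depth of the bed reflector below the midpoint between the antennas (m)."""
    one_way_path = V_ICE * two_way_time_us / 2.0     # slant path through the ice
    half_offset = antenna_separation_m / 2.0         # transmitter-receiver half-spacing
    return max(one_way_path ** 2 - half_offset ** 2, 0.0) ** 0.5

for t in (0.5, 1.0, 1.5):       # illustrative two-way travel times in microseconds
    print(f"t = {t} µs -> ice thickness ≈ {ice_thickness(t):.0f} m")
```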