970 results for Low (550 of 1999)


Relevance: 100.00%

Abstract:

PURPOSE: To design and validate a vision-specific quality-of-life assessment tool to be used in a clinical setting to evaluate low-vision rehabilitation strategy and management. METHODS: Previous vision-related questionnaires were assessed by low-vision rehabilitation professionals and patients for relevance and coverage. The 74 items selected were pretested to ensure correct interpretation. One hundred and fifty patients with low vision completed the chosen questions on four occasions to allow the selection of the most appropriate items. The vision-specific quality of life of patients with low vision was compared with that of 70 age-matched and gender-matched patients with normal vision and before and after low-vision rehabilitation in 278 patients. RESULTS: Items that were unreliable, internally inconsistent, redundant, or not relevant were excluded, resulting in the 25-item Low Vision Quality-of-Life Questionnaire (LVQOL). Completion of the LVQOL results in a summed score between 0 (a low quality of life) and 125 (a high quality of life). The LVQOL has a high internal consistency (α = 0.88) and good reliability (0.72). The average LVQOL score for a population with low vision (60.9 ± 25.1) was significantly lower than the average score of those with normal vision (100.3 ± 20.8). Rehabilitation improved the LVQOL score of those with low vision by an average of 6.8 ± 15.6 (17%). CONCLUSIONS: The LVQOL was shown to be an internally consistent, reliable, and fast method for measuring the vision-specific quality of life of the visually impaired in a clinical setting. It is able to quantify the quality of life of those with low vision and is useful in determining the effects of low-vision rehabilitation. Copyright (C) 2000 Elsevier Science Inc.
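The scoring arithmetic behind the LVQOL (25 items summed to a 0-125 total, with internal consistency reported as Cronbach's α) can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the sample data and function names are invented for the example.

```python
from statistics import variance

def summed_score(item_scores):
    # LVQOL-style total: 25 items scored 0-5 give a 0-125 range
    return sum(item_scores)

def cronbach_alpha(items):
    # items: one list of scores per questionnaire item (equal lengths, >= 2 respondents)
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]    # total score per respondent
    item_var = sum(variance(col) for col in items)  # sum of per-item variances
    return k / (k - 1) * (1 - item_var / variance(totals))

# illustrative data: 3 items answered by 4 respondents
items = [[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]
alpha = cronbach_alpha(items)
```

Alpha near 1 indicates that the items vary together, which is what the reported α = 0.88 expresses for the 25 LVQOL items.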

Relevance: 100.00%

Abstract:

AIM(S) To examine Primary Care Trust (PCT) demographics influencing general practitioner (GP) involvement in pharmacovigilance. METHODS PCT adverse drug reaction (ADR) reports to the Yellow Card scheme between April 2004 and March 2006 were obtained for the UK West Midlands region. Reports were analysed by all drugs, and most commonly reported drugs (‘top drugs’). PCT data, adjusted for population size, were aggregated. Prescribing statistics and other characteristics were obtained for each PCT, and associations between these characteristics and ADR reporting rates were examined. RESULTS During 2004–06, 1175 reports were received from PCTs. Two hundred and eighty (24%) of these reports were for 14 ‘top drugs’. The mean rate of reporting for PCTs was 213 reports per million population. A total of 153 million items were prescribed during 2004–06, of which 33% were ‘top drugs’. Reports for all drugs and ‘top drugs’ were inversely correlated with the number of prescriptions issued per thousand population (rs = -0.413, 95% CI -0.673, -0.062, P < 0.05, and r = -0.420, 95% CI -0.678, -0.071, P < 0.05, respectively). Reporting was significantly negatively correlated with the percentages of male GPs within a PCT, GPs over 55 years of age, single-handed GPs within a PCT, the average list size of a GP within a PCT, the overall deprivation scores and average QOF total points. ADR reports did not correlate significantly with the proportion of the population over 65 years old. CONCLUSIONS Some PCT characteristics appear to be associated with low levels of ADR reporting. The association of low prescribing areas with high ADR reporting rates replicates previous findings.
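The reported correlations (e.g. rs = -0.413) are Spearman rank correlations. As an illustration of how such a coefficient is computed — it is the Pearson correlation of the rank vectors — here is a minimal pure-Python sketch, not the study's own analysis code:

```python
def ranks(xs):
    # average ranks (1-based), with tied values sharing their mean rank
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation computed on the rank vectors
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A negative rs, as in the PCT data, means that higher prescribing rates went with lower ADR reporting rates.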

Relevance: 100.00%

Abstract:

At present there is no reliable vaccine against herpes virus. Viral protein vaccines have so far proved unable to meet the challenge of raising an appropriate immune response. Cantab Pharmaceuticals has produced a virus vaccine that can undergo one round of replication in the recipient in order to produce a more specific immune reaction. This virus, called Disabled Infectious Single Cycle Herpes Simplex Virus (DISC HSV), was derived by deleting the essential gH gene from a type 2 herpes virus. This vaccine has been proven effective in animal studies. Existing methods for the purification of viruses rely on laboratory techniques and would be on far too small a scale for vaccine production. There is therefore a need for new virus purification methods to be developed in order to meet these large-scale needs. An integrated process for the manufacture of a purified recombinant DISC HSV is described. The process involves culture of complementing Vero (CR2) cells, virus infection and manufacture, virus harvesting and subsequent downstream processing. The identification of suitable growth parameters for the complementing cell line and optimal times for both infection and harvest are addressed. Various traditional harvest methods were investigated and found not to be suitable for a scaled-up process. A method of harvesting that exploits the elution of cell-associated viruses by the competitive binding of exogenous heparin to virus envelope gC proteins is described and is shown to yield significantly less contaminated process streams than sonication or osmotic approaches that involve cell rupture (with >10-fold less complementing cell protein). High concentrations of salt (>0.8 M NaCl) exhibit the same effect, although the high osmotic strength ruptures cells and increases the contamination of the process stream.
This same heparin-gC protein affinity interaction is also shown to provide an efficient adsorptive purification procedure for herpes viruses which avoids the need to pre-treat the harvest material, apart from clarification, prior to chromatography. Subsequent column eluates provide product fractions with a 100-fold increase in virus titre and low levels of complementing cell protein and DNA (0.05 pg protein/pfu and 1.2 x 10^4 pg DNA/pfu, respectively).

Relevance: 100.00%

Abstract:

Practitioners and academics are in broad agreement that, above all, organizations need to be able to learn, to innovate and to question existing ways of working. This thesis develops a model to take into account, firstly, what determines whether or not organizations endorse practices designed to facilitate learning. Secondly, the model evaluates the impact of such practices upon organizational outcomes, measured in terms of product and technological innovation. Researchers have noted that organizations that are committed to producing innovation show great resilience in dealing with adverse business conditions (e.g. Pavitt, 1991; Leonard-Barton, 1998). In effect, such organizations bear many of the characteristics associated with the achievement of ‘learning organization’ status (Garvin, 1993; Pedler, Burgoyne & Boydell, 1999; Senge, 1990). Seven studies are presented to support this theoretical framework. The first empirical study explores the antecedents to effective learning. The three following studies present data to suggest that people management practices are highly significant in determining whether or not organizations are able to produce sustained innovation. The thesis goes on to explore the relationship between organizational-level job satisfaction, learning and innovation, and provides evidence to suggest that there is a strong, positive relationship between these variables. The final two chapters analyze learning and innovation within two similar manufacturing organizations. One manifests relatively low levels of innovation whilst the other is generally considered to be outstandingly innovative. I present a comparative framework for exploring the different approaches to learning manifested by the two organizations. The thesis concludes by assessing the extent to which the theoretical model presented in the second chapter is borne out by the findings of the study.
Whilst this is a relatively new field of inquiry, findings reveal that organizations have a much stronger chance of producing sustained innovation where they manage people proactively and where people perceive themselves to be satisfied at work. Few studies to date have presented empirical evidence to substantiate theoretical endorsements to engage in higher-order learning, so this research makes an important contribution to the existing literature in this field.

Relevance: 100.00%

Abstract:

Currently, the main source for the production of liquid transportation fuels is petroleum, the continued use of which faces many challenges, including depleting oil reserves, significant oil price rises, and environmental concerns over global warming, which is widely believed to be due to fossil-fuel-derived CO2 emissions and other greenhouse gases. In this respect, lignocellulosic or plant biomass is a particularly interesting resource, as it is the only renewable source of organic carbon that can be converted into liquid transportation fuels. The gasification of biomass produces syngas, which can then be converted into synthetic liquid hydrocarbon fuels by means of the Fischer-Tropsch (FT) synthesis. This process has been widely considered as an attractive option for producing clean liquid hydrocarbon fuels from biomass; these have been identified as promising alternatives to conventional fossil fuels like diesel and kerosene. The resulting product composition in FT synthesis is influenced by the type of catalyst and the reaction conditions that are used in the process. One of the issues facing this conversion process is the development of a technology that can be scaled down to match the scattered nature of biomass resources, including lower operating pressures, without compromising liquid composition. The primary aims of this work were to experimentally explore FT synthesis at low pressures for the purpose of process down-scaling and cost reduction, and to investigate the potential for obtaining an intermediate FT synthetic crude liquid product that can be integrated into existing refineries under the range of process conditions employed. Two different fixed-bed micro-reactors were used for FT synthesis: a 2 cm3 reactor at the Federal University of Rio de Janeiro (UFRJ) and a 20 cm3 reactor at Aston University. The experimental work firstly involved the selection of a suitable catalyst from three that were available.
Secondly, a parameter study was carried out on the 20 cm3 reactor using the selected catalyst to investigate the influence of reactor temperature, reactor pressure, space velocity, the H2/CO molar ratio in the feed syngas and catalyst loading on the reaction performance, measured as CO conversion, catalyst stability, product distribution, product yields and liquid hydrocarbon product composition. From this parameter study a set of preferred operating conditions was identified for low pressure FT synthesis. The three catalysts were characterized using BET, XRD, TPR and SEM. The catalyst selected was an unpromoted Co/Al2O3 catalyst. FT synthesis runs on the 20 cm3 reactor at Aston were conducted for 48 hours. Permanent gases and light hydrocarbons (C1-C5) were analysed in an online GC-TCD/FID at hourly intervals. The liquid hydrocarbons collected were analysed offline using GC-MS for determination of fuel composition. The parameter study showed that CO conversion and liquid hydrocarbon yields increase with increasing reactor pressure up to around 8 bar, above which the effect of pressure is small. The parameters that had the most significant influence on CO conversion, product selectivity and liquid hydrocarbon yields were reactor temperature and catalyst loading. The preferred reaction conditions identified for this research were: T = 230 °C, P = 10 bar, H2/CO = 2.0, WHSV = 2.2 h^-1, and catalyst loading = 2.0 g. Operation in the low range of pressures studied resulted in low CO conversions and liquid hydrocarbon yields, indicating that low pressure BTL-FT operation may not be industrially viable, as the trade-off of lower CO conversions and once-through liquid hydrocarbon product yields has to be carefully weighed against the potential cost savings resulting from process operation at lower pressures.
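FT product distributions of the kind measured here are commonly summarized with the Anderson-Schulz-Flory (ASF) model, in which a single chain-growth probability determines the mass fraction of each carbon number. The sketch below is illustrative only; the chain-growth probability is an assumed round value, not one reported in this study.

```python
def asf_mass_fraction(n, alpha):
    # Anderson-Schulz-Flory model: mass fraction of hydrocarbon chains with
    # n carbon atoms, given the chain-growth probability alpha of the catalyst
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

alpha = 0.85  # assumed chain-growth probability; typical order for cobalt catalysts
c1_c4 = sum(asf_mass_fraction(n, alpha) for n in range(1, 5))  # light gases C1-C4
c5_plus = 1.0 - c1_c4                                          # liquid-range selectivity
```

Raising alpha shifts the distribution towards heavier, liquid-range products, which is why catalyst choice and reaction conditions control the liquid composition discussed above.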

Relevance: 100.00%

Abstract:

Soft ionization methods for the introduction of labile biomolecules into a mass spectrometer are of fundamental importance to biomolecular analysis. Previously, electrospray ionization (ESI) and matrix assisted laser desorption-ionization (MALDI) have been the main ionization methods used. Surface acoustic wave nebulization (SAWN) is a new technique that has been demonstrated to deposit less energy into ions upon ion formation and transfer for detection than other methods for sample introduction into a mass spectrometer (MS). Here we report the optimization and use of SAWN as a nebulization technique for the introduction of samples from a low flow of liquid, and the interfacing of SAWN with liquid chromatographic separation (LC) for the analysis of a protein digest. This demonstrates that SAWN can be a viable, low-energy alternative to ESI for the LC-MS analysis of proteomic samples.

Relevance: 100.00%

Abstract:

The purpose of this dissertation was to examine the form of the consumer satisfaction/dissatisfaction (CS/D) response to disconfirmation. In addition, the cognitive and affective processes underlying the response were also explored. Respondents were provided with information from a prior market research study about a new brand of printer that was being tested. This market research information helped set prior expectations regarding the print quality. Subjects were randomly assigned to an experimental condition that manipulated prior expectations to be either positive or negative. Respondents were then provided with printouts whose performance quality was either worse (negative disconfirmation) or better (positive disconfirmation) than the prior expectations. In other words, for each level of expectation, respondents were assigned to either a positive or a negative disconfirmation condition. Subjects were also randomly assigned to a condition of either a high or low level of outcome involvement. Analyses of variance indicated that positive disconfirmation led to a more intense CS/D response than negative disconfirmation, even though the magnitude of the disconfirmation was the same in the positive and negative conditions. Intensity of CS/D was measured by the distance of the CS/D rating from the midpoint of the scale. The study also found that although outcome involvement did not influence the polarity of the CS/D response, more direct measures of processing involvement, such as the subjects' concentration, attention and care in evaluating the printout, did have a significant positive effect on CS/D intensity. Analyses of covariance also indicated that the relationship between the intensity of the CS/D response and the intensity of the disconfirmation was mediated by the intensity of affective responses. Positive disconfirmation led to more intense affective responses than negative disconfirmation.
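The intensity measure described (distance of the CS/D rating from the scale midpoint) is simple to state precisely. A minimal sketch, assuming a 7-point scale for illustration only (the dissertation's actual scale is not given here):

```python
def csd_intensity(rating, scale_min=1, scale_max=7):
    # intensity = distance of the CS/D rating from the scale midpoint;
    # the 1-7 scale is an assumption made for this illustration
    midpoint = (scale_min + scale_max) / 2   # midpoint = 4 on a 1-7 scale
    return abs(rating - midpoint)
```

Under this measure, a rating of 7 and a rating of 1 are equally intense responses of opposite polarity, which is what lets satisfaction and dissatisfaction intensities be compared directly.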

Relevance: 100.00%

Abstract:

Previous results in our laboratory suggest that (CG)4 segments, whether present in a right-handed or a left-handed conformation, form distinctive junctions with adjacent random sequences. These junctions and their associated sequences have unique structural and thermodynamic properties that may be recognized by DNA-binding molecules. This study probes these sequences using the following small ligands: actinomycin D, 1,4-bis(((di(aminoethyl)amino)ethyl)amino)anthracene-9,10-dione, ametantrone, and tris(phenanthroline)ruthenium(II). These ligands may recognize the distinctive features associated with the (CG)4 segment and its junctions and thus interact preferentially near these sequences. Restriction enzyme inhibition assays were used to determine whether or not binding interactions took place, and to approximate the locations of these interactions. These binding studies were first carried out using two small synthetic oligomers, BZ-III and BZ-IV. The (5meCG)4 segment present in BZ-III adopts the Z-conformation in the presence of 50 mM Co(NH3)6^3+. In BZ-IV, the unmethylated (CG)4 segment changes to a non-B conformation in the presence of 50 mM Co(NH3)6^3+. BZ-IV, containing the (CG)4 segment, was inserted into a cloning plasmid and then digested with the restriction enzyme Hinf I to produce a larger fragment that contains the (CG)4 segment. The results obtained on the small oligomers and on the larger fragment for the restriction enzyme Mbo I indicate that 1,4-bis(((di(aminoethyl)amino)ethyl)amino)anthracene-9,10-dione binds more efficiently at or near the (CG)4 segment. Restriction enzymes EcoRV, Sac I and Not I, with cleavage sites upstream and downstream of the (CG)4 insert, were used to further localize binding interactions in the vicinity of the (CG)4 insert. RNA polymerase activity was studied in a plasmid which contained the (CG)4 insert downstream from the promoter sites of SP6 and T7 RNA polymerases.
Activities of these two polymerases were studied in the presence of each of the ligands used throughout the study. Only actinomycin D and spider, which bind at or near the (CG)4 segment, alter the activities of SP6 and T7 RNA polymerases. Surprisingly, enhancement of polymerase activity was observed in the presence of very low concentrations of actinomycin D. These results suggest that the conformational features of (CG) segments may serve in regulatory functions of DNA.

Relevance: 100.00%

Abstract:

With the advantages and popularity of Permanent Magnet (PM) motors due to their high power density, there is an increasing incentive to use them in a variety of applications, including electric actuation. These applications have strict noise emission standards. The generation of audible noise and associated vibration modes is characteristic of all electric motors; it is especially problematic in low-speed sensorless rotary actuation applications that use the high frequency voltage injection technique. This dissertation is aimed at solving the problem of optimizing the sensorless control algorithm for low noise and vibration while achieving at least 12-bit absolute accuracy for speed and position control. The low speed sensorless algorithm is simulated using an improved Phase Variable Model, developed and implemented in a hardware-in-the-loop prototyping environment. Two experimental testbeds were developed and built to test and verify the algorithm in real time. A neural network based modeling approach was used to predict the audible noise due to the high frequency injected carrier signal. This model was created based on noise measurements in a specially built chamber. The developed noise model is then integrated into the high frequency based sensorless control scheme so that appropriate tradeoffs and mitigation techniques can be devised. This improves the position estimation and control performance while keeping the noise below a certain level. Genetic algorithms were used to include the noise optimization parameters in the developed control algorithm. A novel wavelet based filtering approach was proposed in this dissertation for the sensorless control algorithm at low speed. This novel filter was capable of extracting the position information at low values of injection voltage where conventional filters fail.
This filtering approach can be used in practice to reduce the injected voltage in the sensorless control algorithm, resulting in a significant reduction of noise and vibration. Online optimization of the sensorless position estimation algorithm was performed to reduce vibration and to improve the position estimation performance. The results obtained are important and represent original contributions that can be helpful in choosing optimal parameters for sensorless control algorithms in many practical applications.
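The high frequency injection technique discussed above rests on synchronous demodulation: the measured response is mixed with the injected carrier and low-pass filtered to recover a position-dependent envelope. The sketch below illustrates that principle on an idealized signal model; it is not the dissertation's algorithm, and the signal model, frequencies, and the constant rotor angle are assumptions made for the example.

```python
import math

def demodulate(samples, carrier):
    # synchronous demodulation: mix the measured signal with the carrier,
    # then average (a crude low-pass filter) to recover the slow envelope
    mixed = [s * c for s, c in zip(samples, carrier)]
    return 2 * sum(mixed) / len(mixed)   # factor 2 undoes the cos^2 averaging

fc, fs, n = 1000.0, 20000.0, 2000        # carrier Hz, sample rate Hz, sample count
theta = 0.3                              # true rotor angle (rad), held constant here
carrier = [math.cos(2 * math.pi * fc * k / fs) for k in range(n)]
# idealized measurement: the carrier amplitude-modulated by cos(2*theta)
samples = [math.cos(2 * theta) * c for c in carrier]
envelope = demodulate(samples, carrier)
theta_hat = math.acos(envelope) / 2      # recovered rotor angle estimate
```

In a real drive the filtering step is where noise and injection amplitude trade off, which is the motivation for the wavelet-based filter described in the abstract.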

Relevance: 100.00%

Abstract:

The current study was designed to explore the salience of parent and peer support in middle childhood and early adolescence across two time periods, as indicated by measures of achievement (grade point average (GPA), Stanford Achievement Test (SAT) scores and teacher-rated school adaptation) and well-being (loneliness, depression, self-concept and teacher-rated internalizing behaviors). Participants were part of an initial study on social network relations and school adaptation in middle childhood and early adolescence. Participants at Time 1 (in the spring of 1997) included 782 children in grades 4 and 6 of eight lower- and middle-income public elementary schools. Participants (N = 694) were reinterviewed two years later, in the spring of 1999 (Time 2). Multivariate analyses of variance (MANOVA) were used to investigate the change in salience of parent and peer support from Time 1 to Time 2. In addition, Tukey-HSD (Honestly Significant Difference) post hoc tests were used to test the significance of the differences among the means of four support categories: (1) low parent-low friend, (2) low parent-high friend, (3) high parent-low friend, and (4) high parent-high friend. Compensatory effects were observed for loneliness and self-concept at Time 1, as well as for SAT scores, self-concept and overall achievement at Time 2. Results were consistent with existing findings that suggest a competitive model of parent/peer influence on achievement during adolescence. This study affirms the need for a more contextual approach to research examining competing and compensatory effects on adolescent development.

Relevance: 100.00%

Abstract:

The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.

Relevance: 100.00%

Abstract:

The oceanic carbon cycle mainly comprises the production and dissolution/preservation of carbonate particles in the water column or within the sediment. Carbon dioxide is one of the major controlling factors for the production and dissolution of carbonate. There is a steady exchange between the ocean and atmosphere towards an equilibrium of CO2; an anthropogenic rise of CO2 in the atmosphere therefore also increases the amount of CO2 in the ocean. The increased amount of CO2 in the ocean, due to increasing CO2 emissions into the atmosphere since the industrial revolution, has been termed "ocean acidification" (Caldeira and Wickett, 2003). Its alarming effects on reefs and other carbonate-shell-producing organisms, such as dissolution and reduced CaCO3 formation, form the topic of current discussions (Kolbert, 2006). Decreasing temperatures and increasing pressure and CO2 enhance the dissolution of carbonate particles at the sediment-water interface in the deep sea. Moreover, dissolution processes depend on the saturation state of the surrounding water with respect to calcite or aragonite. Significantly increased dissolution has been observed below the aragonite or calcite chemical lysocline; below the aragonite compensation depth (ACD) or calcite compensation depth (CCD), all aragonite or calcite particles, respectively, are dissolved. Aragonite, which is more prone to dissolution than calcite, features a shallower lysocline and compensation depth than calcite. In the 1980s it was suggested that significant dissolution also occurs in the water column or at the sediment-water interface above the lysocline. Unknown quantities of carbonate produced at the sea surface would be dissolved due to this process. This would affect the calculation of the carbonate production and the entire carbonate budget of the world's ocean.
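The saturation state invoked above is conventionally expressed as Omega = [Ca2+][CO3 2-]/K'sp, with Omega < 1 favouring dissolution. A minimal sketch with illustrative numbers follows; the solubility products and concentrations are assumed round values for the example, since the real stoichiometric K'sp varies with temperature, salinity and pressure:

```python
def saturation_state(ca, co3, ksp):
    # Omega = [Ca2+][CO3 2-] / K'sp
    # Omega > 1: preservation favoured; Omega < 1: dissolution favoured
    return ca * co3 / ksp

# illustrative (not measured) values, concentrations in mol/kg, K'sp in mol^2/kg^2
ksp_aragonite = 6.5e-7   # assumed value; aragonite is more soluble than calcite,
ksp_calcite = 4.3e-7     # so its solubility product is the larger of the two
ca = 0.0103              # roughly the calcium content of seawater
co3 = 1.0e-4             # an assumed carbonate-ion concentration
omega_arag = saturation_state(ca, co3, ksp_aragonite)
omega_calc = saturation_state(ca, co3, ksp_calcite)
```

Because K'sp rises with pressure while [CO3 2-] falls at depth, Omega decreases downward, which is why the lysocline and compensation depths exist, and why they are shallower for the more soluble aragonite.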
Following this assumption, a number of studies have been carried out to monitor supralysoclinal dissolution at various locations: at Ceara Rise in the western equatorial Atlantic (Martin and Sayles, 1996), in the Arabian Sea (Milliman et al., 1999), in the equatorial Indian Ocean (Peterson and Prell, 1985; Schulte and Bard, 2003), and in the equatorial Pacific (Kimoto et al., 2003). Despite the evidence for supralysoclinal dissolution in some areas of the world's ocean, the question remains whether dissolution occurs above the lysocline in the entire ocean. The first part of this thesis seeks answers to this question, based on the global budget model of Milliman et al. (1999). As a study area, the Bahamas and Florida Straits are most suitable because of the high production of carbonate, and because the lysocline there is the deepest worldwide. To monitor the occurrence of supralysoclinal dissolution, the preservation of aragonitic pteropod shells was determined using the Limacina inflata Dissolution Index (LDX; Gerhardt and Henrich, 2001). Analyses of the grain-size distribution, the mineralogy, and the foraminifera assemblage revealed further aspects of the preservation state of the sediment. All samples from the Bahamian platform are well preserved. In contrast, the samples from the Florida Straits show dissolution at 800 to 1000 m and below 1500 m water depth. Degradation of organic material and the subsequent release of CO2 probably cause supralysoclinal dissolution. A northward extension of the corrosive Antarctic Intermediate Water (AAIW) flows through the Caribbean Sea into the Gulf of Mexico and might enhance dissolution processes at around 1000 m water depth. The second part of this study deals with the preservation of Pliocene to Holocene carbonate sediments from both the windward and leeward basins adjacent to Great Bahama Bank (Ocean Drilling Program Sites 632, 633, and 1006).
Detailed census counts of the sand fraction (250-500 µm) show the general composition of the coarse-grained sediment. Further methods used to examine the preservation state of the carbonates include the amount of organic carbon and various dissolution indices, such as the LDX and the Fragmentation Index. Carbonate concretions (nodules) have been observed in the sand fraction. They are similar to the concretions or aggregates previously mentioned by Mullins et al. (1980a) and Droxler et al. (1988a), respectively. Nonetheless, a detailed study of such grains has not been made to date, although they form an important part of periplatform sediments. Stable isotope measurements of the nodules' matrix confirm previous suggestions that the nodules formed in situ as a result of early diagenetic processes (Mullins et al., 1980a). The two cores located in Exuma Sound (Sites 632 and 633), at the eastern margin of Great Bahama Bank (GBB), show an increasing amount of nodules with increasing core depth. In Pliocene sediments, the amount of nodules can rise up to 100%. In contrast, at Site 1006 on the western margin of GBB, nodules only occur within glacial stages in the deeper part of the studied core interval (between 30 and 70 mbsf). Above this level the sediment is constantly being flushed by bottom water, which might also contain corrosive AAIW, and this would hinder cementation. Fine carbonate particles (<63 µm) form the matrix of the nodules and therefore do not contribute to the fine fraction. At the same time, the amount of the coarse fraction (>63 µm) increases due to the nodule formation. The formation of nodules might therefore significantly alter the grain-size distribution of the sediment. A direct comparison of the amount of nodules with the grain-size distribution shows that core intervals with high amounts of nodules are indeed coarser than intervals with low amounts of nodules.
On the other hand, an initially coarser sediment might facilitate the formation of nodules, as high porosity and permeability enhance early diagenetic processes (Westphal et al., 1999). This suggestion was also confirmed: the glacial intervals at Site 1006 are interpreted to have already been rather coarse prior to the formation of nodules. This assumption is based on the grain-size distribution in the upper part of the core, which is not yet affected by diagenesis but also shows coarser sediment during the glacial stages. As expected, the coarser glacial deposits in the lower part of the core show the highest amounts of nodules. The same effect was observed at Site 632, where turbidites cause distinct coarse layers and reveal higher amounts of nodules than non-turbiditic sequences. Site 633 shows a different pattern: both the amount of nodules and the coarseness of the sediment steadily increase with increasing core depth. Based on these sedimentological findings, the following model has been developed: a grain-size pattern characterised by prominent coarse peaks (as observed at Sites 632 and 1006) is barely altered. The greatest coarsening effect due to nodule formation occurs in those layers which were initially coarser than the adjacent sediment intervals. In this case, the overall grain-size trends before and after the formation of the nodules are similar to each other. Although the sediment is altered by diagenetic processes, grain size can still be used as a proxy for, e.g., changes in the bottom-water current. The other case described in the model is based on a uniform initial grain-size distribution, as observed at Site 633. In this case, the nodules reflect the increasing diagenetic alteration with increasing core depth rather than the initial grain-size pattern. In this scenario, the overall grain-size trend is significantly changed, which makes grain size unreliable as a proxy for palaeoenvironmental changes.
The results of this study contribute to the understanding of general sedimentation processes in the periplatform realm: the preservation state of surface samples shows the influence of supralysoclinal dissolution due to the degradation of organic matter and due to the presence of corrosive water masses; the composition of the sand fraction shows the alteration of the carbonate sediment due to early diagenetic processes. However, open questions are how and when the alteration processes occur and how geochemical parameters, such as the rise in alkalinity or the amount of strontium, are linked to them. These geochemical parameters might reveal more information about the depth in the sediment column, where dissolution and cementation processes occur.

Relevance: 100.00%

Abstract:

The compositional record of the AND-2A drillcore is examined using petrological, sedimentological, volcanological and geochemical analysis of clasts, sediments and pore waters. Preliminary investigations of basement clasts (granitoids and metasediments) indicate both local and distal sources, corresponding to variable ice volume and ice-flow directions. The low abundance of sedimentary clasts (e.g., arkose, litharenite) suggests reduced contributions from sedimentary covers, while intraclasts (e.g., diamictite, conglomerate) attest to intrabasinal reworking. Volcanic material includes pyroclasts (e.g., pumice, scoria), sediments and lava. Primary and reworked tephra layers occur within the Early Miocene interval (1093 to 640 metres below sea floor [mbsf]). The compositions of volcanic clasts reveal a diversity of alkaline types derived from the McMurdo Volcanic Group. Finer-grained sediments (e.g., sandstone, siltstone) show increases in biogenic silica and volcanic glass from 230 to 780 mbsf, and higher proportions of terrigenous material from c. 350 to 750 mbsf and below 970 mbsf. Basement clast assemblages suggest a dominant provenance from the Skelton Glacier - Darwin Glacier area and from the Ferrar Glacier - Koettlitz Glacier area. The provenance of sand grains is consistent with the clast sources. Thirteen Geochemical Units are established based on compositional trends derived from continuous XRF scanning. High values of Fe and Ti indicate terrigenous and volcanic sources, whereas high Ca values signify either biogenic or diagenetic sources. Highly alkaline and saline pore waters were produced by chemical exchange with glass at moderately elevated temperatures.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Focussing on heavy-mineral associations in the Laptev Sea continental margin area and the eastern Arctic Ocean, 129 surface sediment samples, two short cores and four long gravity cores were studied. Based on accessory components, the heavy-mineral associations of surface sediment samples from the Laptev Sea continental slope allowed two mineralogical provinces to be distinguished, each influenced by fluvial input from the Siberian river systems. Transport pathways via sea ice from the shallow shelf areas into the Arctic Ocean, and on to the final ablation areas of the Fram Strait, can be reconstructed from heavy-mineral data of surface sediments from the central Arctic Ocean. The shallow shelf of the Laptev Sea appears to be the most important source area for terrigenous material, as indicated by the abundant occurrence of amphiboles and clinopyroxenes. Beneath the mixing zone of the two dominant surface circulation systems, the Beaufort Gyre and the Transpolar Drift, the imprint of the Amerasian shelf regions is detectable as far as the Fram Strait through a characteristic heavy-mineral association dominated by detrital carbonate and opaque minerals. Based on the heavy-mineral characteristics of the potential circum-Arctic source areas, sea-ice drift and the origin and distribution of ice-rafted material can be reconstructed for past climatic cycles. Several factors control the transport of terrigenous material into the Arctic Ocean. The entrainment of particulate matter is governed by sea level, which flooded different regions during highstands and lowstands, so that sediment from different source areas was incorporated into the sea ice. In addition, fluvial input, even at sea-level lowstands, delivered material from distinct sources for entrainment into the sea ice.
Glacials and interglacials of the climate cycles of the last 780,000 years left a characteristic signal in central Arctic Ocean sediments, caused by ice-rafted material from different circum-Arctic sources and its change through time. Changes in the heavy-mineral association from an amphibole-dominated to a garnet-epidote assemblage can be related to climate-driven changes in source areas and in the direction of geostrophic winds, the dominant drive of sea-ice drift. During Marine Isotope Stage (MIS) 6, the central Arctic Ocean is marked by a heavy-mineral signal that also occurs in recent sediments of the eastern Kara Sea, characterized by high amounts of epidote, garnet and apatite. During the same time interval, a continuous record of Laptev Sea sediments with high amphibole contents is documented on the Lomonosov Ridge near the Laptev Sea continental slope. A nearly identical pattern was detected in MIS 5 and 4. Small-scale glaciations in the Putorana Mountains and on the Anabar Shield may have changed the drainage areas of the rivers and therefore the fluvial input. During MIS 3, the heavy-mineral association of central Arctic sediments shows patterns similar to the Holocene assemblage, which consists of amphiboles and ortho- and clinopyroxenes with a Laptev Sea source. These minerals indicate a stable Transpolar Drift system similar to recent conditions. An extended influence of the Beaufort Gyre is recognized only where sediment from the Amerasian shelf areas reached core location PS2757-718 during Termination Ib. Based on heavy-mineral data from Laptev Sea continental slope core PS2458-4, the paleo-sea-ice drift in the Laptev Sea over the past 14,000 years was reconstructed. During the Holocene sea-level rise, the bathymetrically deeper parts of the western shelf were flooded first. 
By the beginning of the Atlantic stage, nearly the entire shelf was under fully marine conditions and the modern surface circulation was established.
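The provenance logic described above — amphibole/pyroxene assemblages pointing to the Laptev Sea shelf, garnet-epidote(-apatite) assemblages pointing to the eastern Kara Sea — can be caricatured as a simple classifier. The indicator lists and sample percentages below are illustrative assumptions, not the study's actual counting or statistical procedure.

```python
# Toy provenance classifier following the qualitative associations in the
# text: amphibole/pyroxene -> Laptev Sea shelf, garnet/epidote/apatite ->
# eastern Kara Sea. All percentages are invented.

LAPTEV_INDICATORS = ("amphibole", "clinopyroxene", "orthopyroxene")
KARA_INDICATORS = ("garnet", "epidote", "apatite")

def likely_source(assemblage: dict) -> str:
    """Return the more plausible source region for a heavy-mineral
    assemblage given as {mineral: grain percentage}."""
    laptev = sum(assemblage.get(m, 0.0) for m in LAPTEV_INDICATORS)
    kara = sum(assemblage.get(m, 0.0) for m in KARA_INDICATORS)
    return "Laptev Sea shelf" if laptev >= kara else "eastern Kara Sea"

# Invented sample: amphibole-dominated, so a Laptev Sea signature.
sample = {"amphibole": 35.0, "clinopyroxene": 20.0,
          "garnet": 10.0, "epidote": 8.0}
print(likely_source(sample))
```

A real study would compare full assemblage spectra against reference samples from each potential source area rather than summing a few index minerals.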

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Recreational fisheries in North America are valued at between $47.3 billion and $56.8 billion. To conserve populations effectively, fisheries managers must make strategic decisions based on sound science and knowledge of population ecology. Competitive fishing, in the form of tournaments, has become an important part of recreational fisheries and is common on large waterbodies, including the Great Lakes. Black bass, Micropterus spp., are top predators and among the most sought-after species in competitive catch-and-release tournaments. This study investigated catch-and-release tournaments as a mark-recapture assessment tool for Largemouth Bass (>305 mm) populations in the Tri Lakes and the Bay of Quinte, part of the eastern basin of Lake Ontario. The population in the Tri Lakes (1999-2002) was estimated to be stable at 21,928-29,780 fish, and the population in the Bay of Quinte (2012-2015) at 31,825-54,029 fish. Survival in the Tri Lakes varied over the study period from 31% to 54%, while survival in the Bay of Quinte remained stable at 63%. The difference in survival may reflect differences in fishing pressure, as 34-46% of the Largemouth Bass population in the Tri Lakes is harvested annually and only 19% of catch was attributed to tournament angling. Many biological issues still surround catch-and-release tournaments, particularly the displacement of fish from their initial capture sites. Most previous studies have focused on small inland lakes and coastal areas, displacing bass relatively short distances. My study displaced Largemouth and Smallmouth Bass up to 100 km and found very low rates of return: only 1 of 18 Largemouth Bass returned 15 km, and 1 of 18 Smallmouth Bass returned 135 km. Both species remained near the release sites for approximately two weeks, on average, before dispersing. 
Tournament organizers should consider using satellite release locations to facilitate dispersal and prevent stockpiling at the release site. Catch-and-release tournaments proved to be a valuable tool for assessing population variables and the effects of long-distance displacement on large lake systems, through mark-recapture and acoustic telemetry.
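The abundance ranges above come from mark-recapture data gathered at tournaments. The abstract does not state which estimator was used; as one common illustration, the Chapman-corrected Lincoln-Petersen estimator turns marked and recaptured counts into a population estimate. The numbers below are invented, not Tri Lakes or Bay of Quinte data.

```python
def chapman_estimate(marked_first: int, caught_second: int,
                     recaptured: int) -> float:
    """Chapman-corrected Lincoln-Petersen abundance estimate.

    marked_first  -- fish marked and released in the first event
    caught_second -- fish examined in the second event
    recaptured    -- marked fish among those examined
    """
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1

# Invented example: 500 bass marked at one tournament, 480 examined at a
# later tournament, 9 of them carrying marks.
n_hat = chapman_estimate(marked_first=500, caught_second=480, recaptured=9)
print(f"Estimated population: {n_hat:.0f}")  # prints "Estimated population: 24097"
```

Multi-year studies like this one typically use open-population models (which also yield the survival rates reported above), but the two-sample estimator shows the core idea: the fraction of marked fish in the second sample scales the marked total up to the whole population.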