961 results for counting


Relevance:

10.00%

Publisher:

Abstract:

The effect of S-10, a strain of marine bacteria isolated from sediment in the Western Xiamen Sea, on growth and paralytic shellfish poison (PSP) production in the alga Alexandrium tamarense (A. tamarense) was studied under controlled experimental conditions. The results show that the growth of A. tamarense is markedly inhibited by S-10 at high concentrations, whereas no evident effect on growth was observed at low concentrations. PSP production was also inhibited by S-10 at all concentrations tested, especially at low concentrations. The toxicity of this strain of A. tamarense is about (0.95 to 12.14) × 10⁻⁶ MU/cell; a peak toxicity of 12.14 × 10⁻⁶ MU/cell appeared on the 14th day, after which levels decreased gradually. The alga grew well at pH 6-8 and salinities of 20-34 parts per thousand. The toxicity of the alga varied markedly with pH and salinity: toxicity decreased as pH increased, while it increased with salinity and reached a peak value at a salinity of 30 parts per thousand, after which it declined gradually. S-10 at a concentration of 1.02 × 10⁹ cells/ml inhibited growth and PSP production of A. tamarense across these pH and salinity levels. S-10 had the strongest inhibitory effect on the growth of A. tamarense at pH 7 and a salinity of 34 parts per thousand. The strongest inhibition of PSP production by A. tamarense occurred at pH 7 and did not depend on salinity. Interactions between the marine bacteria and A. tamarense were also investigated using flow cytometry (FCM) as well as direct microscope counting. S-10 was identified as a member of the genus Bacillus; its 16S rDNA differs from that of Bacillus halmapalus by only 2%. The mechanism by which this strain of marine bacteria inhibits growth and PSP production of A. tamarense, and the prospect of using it and other marine bacteria in the biocontrol of red tides, are discussed. (c) 2005 Elsevier Ltd. All rights reserved.


The alkenone unsaturation paleothermometer is an important proxy for reconstructing water temperature and is widely applied to sea surface temperature reconstruction in most oceanographic settings. Recent research indicates that long-chain alkenones are also preserved in lacustrine sediments, and that alkenone unsaturation correlates well with mean annual temperature in the lakes studied; the index can therefore serve as a temperature proxy in limnic systems. In this study, we analyzed long-chain alkenones from the varved sediments of Lake Sihailongwan, northeastern China. By counting varves, we established a timescale spanning the past 1500 years. The distribution pattern of alkenones in the sediment is similar to that reported in previous lacustrine studies, and the ratio of the C37:4 methyl ketone to the sum of C37 alkenones is high. Based on a published temperature-alkenone unsaturation equation, we reconstructed mean air temperature and July water temperature over the past 1500 years. Three major cold periods occur in AD 560-950, AD 1540-1600 and AD 1800-1920; three major warm periods occur in AD 450-550, AD 950-1400 and AD 1600-1800. The Medieval Warm Period was a significant warm period. The traditional "Little Ice Age", however, was not a persistent cold period, being interrupted by a relatively long warm interval. The temperature variations in this study show a pattern generally similar to the summer temperature reconstruction from Shihua Cave and the winter temperatures from historical documents. The long-chain alkenone record also agrees well with indicators of solar activity (10Be data from ice cores and sunspot numbers from tree rings), suggesting that solar activity is the most important forcing in the study area.
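The unsaturation index and the linear calibration behind such reconstructions can be sketched as follows. The index definitions are standard in the alkenone literature, but the calibration coefficients `a` and `b` below are placeholders, not the lacustrine calibration used in this study:

```python
# Illustrative sketch of the alkenone unsaturation index and a linear
# temperature calibration. The coefficients a and b are placeholders,
# NOT the published calibration applied in the study.

def uk37(c37_2, c37_3, c37_4):
    """U37K = (C37:2 - C37:4) / (C37:2 + C37:3 + C37:4)."""
    return (c37_2 - c37_4) / (c37_2 + c37_3 + c37_4)

def percent_c37_4(c37_2, c37_3, c37_4):
    """%C37:4, the tetra-unsaturated fraction reported as high here."""
    return 100.0 * c37_4 / (c37_2 + c37_3 + c37_4)

def temperature_from_uk37(u, a=0.033, b=-0.7):
    """Invert a linear calibration U = a*T + b (placeholder a, b)."""
    return (u - b) / a
```

In practice the coefficients come from regressing the measured index against instrumental temperatures for the lake or region in question.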


The main research projects reported in this paper are the establishment of a luminescence (OSL/TL) dating laboratory at the Institute of Geology and Geophysics, CAS, and studies on OSL dating techniques and protocols for sediments from North China. These projects were undertaken to meet the needs of research on environmental change, in particular aridity and desertification in North China. A new luminescence dating laboratory, equipped with a Risø TL/OSL-DA-15B/C reader with a Sr-90 beta source, a set of Littlemore Type 9022 alpha and beta irradiators, three sets of Daybreak 583 intelligent alpha counters and a sample preparation system, was set up in the Institute in June 2001. Besides selecting suitable equipment, establishing the laboratory involved a series of technical tasks: installing and testing the TL/OSL reader; calibrating the dose rates of the beta and alpha sources in the irradiators against standard sources; testing and calibrating the count rates of thick-source alpha counting in the alpha counters with a standard sample; and finally dating samples of known age to check the OSL/TL dating system. All data obtained from these calibrations and tests show that the established OSL/TL system, including its equipment, can determine the age of geological and archaeological samples with an equivalent dose (De) error of less than 5%. The OSL dates of several sediment samples obtained with the system are in good agreement with those from the OSL dating laboratory at Hong Kong University and with 14C dates, within 1-2 standard deviations.
The ongoing studies on OSL dating techniques and protocols for sediment samples involve De determination with single aliquot regeneration (SAR) (Murray and Wintle, 2000) on coarse-grain quartz from sand dune samples, and comparison of SAR De determinations with those measured using multiple aliquot regeneration (MAR) on fine-grain loess. The preliminary results are as follows. A very low natural equivalent dose of about 0.012-0.03 Gy, corresponding to an age of less than 10 years, was determined for the BLSL (blue light stimulated luminescence) of coarse-grain quartz from modern sand dune samples in the Horqin sand fields, using both the SAR and MAR techniques. This implies that the BLSL signal of the quartz could be fully zeroed before burial of the sand in the Horqin sand fields. The De values and ages of coarse-grain quartz measured with the SAR protocol are in good agreement with those obtained from the multiple aliquot technique for the modern dune samples, but the De errors from the MAR are greater than those from the SAR. This suggests that higher-precision age determination for young sand dune samples can be achieved with SAR on coarse-grain quartz. Analysis of the De values obtained from the SAR and the MAR indicates that the MAR combined with the "Australian slide" method may be the best choice for De measurement of fine-grain loess samples, yielding reliable age estimates for loess as old as approximately 50 ka BP. There is a large difference between De determinations from the (post-IR) OSL SAR protocol (Roberts and Wintle, 2001) and independent or expected estimates for older samples. However, the age estimates obtained from (post-IR) OSL SAR are mostly close to the independent age determinations for younger (less than 10 ka) fine-grain samples. This suggests that the (post-IR) OSL SAR protocol on the fine-grain fraction is a suitable choice for dating younger samples, but may be unsuitable for older ones.
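The age equation underlying all of these De measurements is simple enough to state explicitly: age (ka) equals equivalent dose (Gy) divided by environmental dose rate (Gy/ka), with relative errors of the two terms adding in quadrature. The numbers below are illustrative, not values from the study:

```python
# The basic luminescence age equation: age (ka) = De (Gy) / dose rate
# (Gy/ka). Inputs below are illustrative examples only.

def osl_age(de_gy, dose_rate_gy_per_ka):
    """Luminescence age in ka from equivalent dose and dose rate."""
    return de_gy / dose_rate_gy_per_ka

def osl_age_error(de, de_err, dose_rate, dose_rate_err):
    """Propagate relative errors in quadrature for the quotient."""
    age = de / dose_rate
    rel = ((de_err / de) ** 2 + (dose_rate_err / dose_rate) ** 2) ** 0.5
    return age, age * rel

# A modern dune sand with De ~ 0.03 Gy at ~3 Gy/ka dates to ~0.01 ka,
# i.e. about 10 years, matching the near-zero ages quoted above.
modern_age_ka = osl_age(0.03, 3.0)
```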


Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
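The lazy-allocation semantics at the heart of an SFA can be modelled in a few lines of software. The thesis describes hardware support; this sketch only illustrates the behaviour (a global name whose per-node facets materialise on first touch), and the class and method names are my own:

```python
# Minimal software model of a sparsely faceted array: one global name,
# per-node facets allocated lazily on first access. This is an
# illustration of the semantics, not the thesis' hardware mechanism.

class SparselyFacetedArray:
    def __init__(self, facet_size):
        self.facet_size = facet_size
        self.facets = {}            # node id -> locally allocated facet

    def facet(self, node):
        """Return this node's facet, allocating it on first access."""
        if node not in self.facets:
            self.facets[node] = [0] * self.facet_size
        return self.facets[node]

    def allocated_nodes(self):
        """Nodes actually holding storage -- exactly the set a
        reference-counting collector must track to reclaim the SFA."""
        return set(self.facets)

sfa = SparselyFacetedArray(facet_size=4)
sfa.facet(3)[0] = 42                # only node 3 allocates memory
```

A virtual array over thousands of nodes thus costs memory only on the nodes that touch it, which is the disconnect with eagerly allocated global arrays that the thesis addresses.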


The main objective of this article is to present the results of a study aimed at determining, classifying and evaluating practices of interest for general competency development and assessment in undergraduate programmes. The study encompassed the following phases: (1) focus group in order to establish a starting point regarding competency development and assessment, counting on the opinion of some of the best-rated teachers belonging to the participating universities; (2) collection of best practices; (3) design and validation of a scale for the assessment of best practices; and (4) scale administration (evaluation of good practices) and data analysis.


It is shown that determining whether a quantum computation has a nonzero probability of accepting is at least as hard as the polynomial-time hierarchy. This hardness result also applies to determining, in general, whether a given quantum basis state appears with nonzero amplitude in a superposition, or whether a given quantum bit has positive expectation value at the end of a quantum computation. This result is achieved by showing that the complexity class NQP of Adleman, DeMarrais, and Huang, a quantum analog of NP, is equal to the counting class coC=P.


Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, a protocol that can switch from energy-efficient operation in normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault-tolerance by ensuring that alternative acceptable routes to data sinks exist when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes, those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must retain length-bounded alternative paths to sinks when an arbitrary sensor node fails.
We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
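The abstract does not give the Counting-Paths algorithm itself, so as an illustration of the underlying (k,l) requirement, here is a simple greedy stand-in: repeatedly extract a shortest path of at most l edges by BFS and delete its internal vertices, yielding a lower bound on the number of vertex-disjoint length-bounded paths. All function names are mine:

```python
from collections import deque

# Greedy lower bound on vertex-disjoint src-dst paths of <= max_len
# edges. A hypothetical stand-in for Counting-Paths, which the
# abstract names but does not specify.

def shortest_path(adj, src, dst, max_len, removed):
    """BFS for a path of at most max_len edges avoiding 'removed'
    vertices; the direct src-dst edge is handled by the caller."""
    parent = {src: None}
    q = deque([(src, 0)])
    while q:
        v, d = q.popleft()
        if v == dst:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        if d == max_len:
            continue
        for w in adj[v]:
            if w in parent or w in removed:
                continue
            if v == src and w == dst:
                continue            # skip the direct edge here
            parent[w] = v
            q.append((w, d + 1))
    return None

def count_disjoint_paths(adj, src, dst, max_len):
    # A direct edge is itself one disjoint path with no internal vertices.
    count = 1 if (dst in adj[src] and max_len >= 1) else 0
    removed = set()
    while True:
        p = shortest_path(adj, src, dst, max_len, removed)
        if p is None:
            return count
        count += 1
        removed.update(p[1:-1])     # block the internal vertices only
```

Greedy extraction is only a lower bound; an exact count of length-bounded vertex-disjoint paths generally needs a flow-style formulation.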


Distribution of soft-sediment benthic fauna and the environmental factors affecting them were studied, to investigate changes across spatial and temporal scales. Investigations took place at Lough Hyne Marine Reserve using a range of methods. Data on the sedimentation rates of organic and inorganic matter were collected at monthly intervals for one year at a number of sites around the Lough, using vertical midwater-column sediment traps. Sedimentation of these two fractions was not coupled: inorganic matter sedimentation depended on hydrodynamic and weather factors, while organic matter sedimentation was more complex, depending on biological and chemical processes in the water column. The effects of regular hypoxic episodes on benthic fauna due to a natural seasonal thermocline were studied in the deep Western Trough, using a camera-equipped remotely operated vehicle to follow transects at three-monthly intervals over one year. In late summer, the area of the Western Trough below the thermocline was devoid of visible fauna. Decapod crustaceans were the first taxon to make use of ameliorating oxygen conditions in autumn, darting below the thermocline depth, most likely to scavenge, as indicated by the tracks they left on the surface of the Trough floor. Some species, most noticeably Fries’ goby Lesueurigobius friesii, migrated below the thermocline depth when conditions were normoxic and established semi-permanent burrows. Their population encompassed all size classes, indicating that this habitat was not limited to juveniles of this territorial species. Recolonisation by macrofauna and burrowing megafauna was studied during normoxic conditions, from November 2009 to May 2010. Macrofauna displayed a typical post-disturbance pattern of recolonisation, with one species, the polychaete Scalibregma inflatum, occurring at high abundance in March 2010.
By May, this population had become significantly reduced and a more diverse community was established. The abundance of burrowing infauna, comprising decapod crabs and Fries’ gobies, was estimated by identifying and counting their distinctive burrow structures. While burrow abundance above the summer thermocline depth increased in a linear fashion, below the thermocline depth a slight reduction occurred in May, when oxygen conditions deteriorated again. The majority of the burrows present in May were made by Fries’ gobies, which are thought to encounter low oxygen concentrations in their burrows. The reduction in burrow abundance of the burrowing shrimps Calocaris macandreae and Callianassa subterranea (based on descriptions of burrow structures from the literature) from March to May might be related to their reduced activity in hypoxia, leading to loss of structural burrow maintenance. Spatial and temporal changes in macrofaunal assemblage structure were studied seasonally for one year across five sites in the Lough and subjected to multivariate statistical analysis. Assemblage structure was significantly correlated with organic matter levels in the sediment, the amount of organic matter settling out of the water column one month before macrofaunal sampling, current speed and temperature. This study was the first to investigate patterns and processes in the Lough's soft-sediment ecology across all three basins on temporal and spatial scales. An investigation into the oceanographic aspects of the development, behaviour and breakdown of the summer thermocline of Lough Hyne was performed in collaboration with researchers from other Irish institutions.


Avalanche photodiodes (APDs) are used in a wide range of low-light sensing applications such as DNA sequencing, quantum key distribution, LIDAR and medical imaging. Control circuits are required to operate APDs with the desired performance characteristics. This thesis presents work on the development of three control circuits: a bias circuit, an active quench and reset circuit, and a gain control circuit, all used for the control and performance enhancement of APDs. The bias circuit is used to bias planar APDs for operation in both linear and Geiger modes. It is based on a dual charge-pump configuration and operates from a 5 V supply. It can provide milliamp load currents for shallow-junction planar APDs operating at up to 40 V. With novel voltage regulators, the bias voltage provided by the circuit can be accurately controlled and easily adjusted by the end user. The circuit is highly integrable and provides an attractive solution for applications requiring a compact integrated APD device. The active quench and reset circuit is designed for APDs that operate in Geiger mode, as required for photon counting. The circuit enables linear changes in the hold-off time of the Geiger-mode APD (GM-APD) from several nanoseconds to microseconds with a stable setting step of 6.5 ns. This allows the optimal 'afterpulse-free' hold-off time to be set for any GM-APD via user-controlled digital inputs. In addition, the circuit does not require an additional monostable or pulse generator to reset the detector, simplifying the design. Compared to existing solutions, it provides more accurate and simpler control of the hold-off time while maintaining a comparable maximum count rate of 35.2 Mcounts/s. The third circuit is a gain control circuit based on the idea of using two matched APDs to set and stabilise the gain.
The circuit can provide a high bias voltage for operating the planar APD, precisely set the APD's gain (with errors of less than 3%) and compensate for changes in temperature to maintain a more stable gain. It operates without external temperature sensing and control electronics, lowering system cost and complexity, and provides a simpler and more compact solution than previous designs. The three circuits developed in this project were designed independently of each other and improve different performance characteristics of the APD. Further research on combining the three circuits will produce a more compact APD-based solution for a wide range of applications.
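The quoted maximum count rate and the 6.5 ns hold-off step are linked by simple dead-time arithmetic. The non-paralyzable detector model below is my illustration, not the thesis' analysis, and the function names are mine:

```python
# Back-of-envelope dead-time arithmetic for a Geiger-mode APD.
# Assumes a non-paralyzable detector whose saturated count rate is
# 1 / (total dead time); this model is an illustration only.

def hold_off_ns(steps, step_ns=6.5):
    """Hold-off time from the circuit's digital setting (6.5 ns step)."""
    return steps * step_ns

def max_count_rate(total_dead_time_s):
    """Saturated count rate of a non-paralyzable detector."""
    return 1.0 / total_dead_time_s

# 35.2 Mcounts/s corresponds to a total dead time (quench + hold-off
# + reset) of about 28.4 ns.
dead_time_s = 1.0 / 35.2e6
```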


New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant-sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using this data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (which may be performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. The theoretical analysis of encoder error compensation considers error sources such as random electrical noise and error in the estimated reference velocity. The proposed learning compensation techniques are first validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, the efficiency of the algorithm decreases with a higher presence of non-repetitive random noise and/or with errors in the reference velocity calculations. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled.
Implementing these algorithms experimentally on different drives (with and without a flywheel) and on encoders of different resolutions yields slit error reductions of 60% to 86% (typically approximately 80%).
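The core pseudoinverse step can be sketched as follows. At roughly constant shaft speed, each measured inter-slit interval deviates from the mean by the difference of the adjacent slit position errors; stacking these relations gives a rank-deficient linear system whose minimum-norm solution is the Moore-Penrose pseudoinverse estimate. This is my reconstruction of the idea, not the thesis' exact formulation:

```python
import numpy as np

# Hedged sketch of pseudoinverse-based slit-error learning: interval i
# ~ nominal + e[(i+1) % n] - e[i], so rows of A encode the cyclic
# difference of the unknown slit errors e. A is rank-deficient (a
# constant offset of all slits is unobservable), so pinv returns the
# minimum-norm (zero-mean) estimate.

def estimate_slit_errors(intervals):
    """intervals: per-slit interval times averaged over many revolutions."""
    n = len(intervals)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] += 1.0
        A[i, i] -= 1.0
    b = intervals - np.mean(intervals)   # remove the nominal interval
    return np.linalg.pinv(A) @ b
```

Averaging intervals over many revolutions before solving is what suppresses the non-repetitive electrical noise the abstract mentions.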


The continuous plankton recorder (CPR) survey is the largest multi-decadal plankton monitoring programme in the world. It was initiated in 1931 and by the end of 2004 had counted 207,619 samples and identified 437 phyto- and zooplankton taxa throughout the North Atlantic. CPR data are used extensively by the research community and in recent years have been used increasingly to underpin marine management. Here, we take a critical look at how best to use CPR data. We first describe the CPR itself, CPR sampling, and plankton counting procedures. We discuss the spatial and temporal biases in the Survey, summarise environmental data that have not previously been available, and describe the new data access policy. We supply information essential to using CPR data, including descriptions of each CPR taxonomic entity, the idiosyncrasies associated with counting many of the taxa, the logic behind taxonomic changes in the Survey, the semi-quantitative nature of CPR sampling, and recommendations on choosing the spatial and temporal scale of study. This forms the basis for a broader discussion on how to use CPR data for deriving ecologically meaningful indices based on size, functional groups and biomass that can be used to support research and management. This contribution should be useful for plankton ecologists, modellers and policy makers who actively use CPR data.


The US National Oceanic and Atmospheric Administration (NOAA) Fisheries Continuous Plankton Recorder (CPR) Survey has sampled four routes: Boston–Nova Scotia (1961–present), New York toward Bermuda (1976–present), Narragansett Bay–Mount Hope Bay–Rhode Island Sound (1998–present) and eastward of Chesapeake Bay (1974–1980). NOAA involvement began in 1974 when it assumed responsibility for the existing Boston–Nova Scotia route from what is now the UK's Sir Alister Hardy Foundation for Ocean Science (SAHFOS). Training, equipment and computer software were provided by SAHFOS to ensure continuity for this and standard protocols for any new routes. Data for the first 14 years of this route were provided to NOAA by SAHFOS. Comparison of collection methods; sample processing; and sample identification, staging and counting techniques revealed near-consistency between NOAA and SAHFOS. One departure involved phytoplankton counting standards. This has since been addressed and the data corrected. Within- and between-survey taxonomic and life-stage names and their consistency through time were, and continue to be, an issue. For this, a cross-reference table has been generated that contains the SAHFOS taxonomic code, NOAA taxonomic code, NOAA life-stage code, National Oceanographic Data Center (NODC) taxonomic code, Integrated Taxonomic Information System (ITIS) serial number and authority and consistent use/route. This table is available for review/use by other CPR surveys. Details of the NOAA and SAHFOS comparison and analytical techniques unique to NOAA are presented.


Phytoplankton observation is the product of a number of trade-offs related to sampling processes, the required level of diversity, and the size-spectrum analysis capabilities of the techniques involved. Instruments combining morphological and high-frequency analysis of phytoplankton cells are now available. This paper presents an application of the automated high-resolution flow cytometer Cytosub as a tool for analysing phytoplankton cells in their natural environment. High-resolution data from a temporal study in the Bay of Marseille (analysis every 30 min over 1 month) and a spatial study in the Southern Indian Ocean (analysis every 5 min at 10 knots over 5 days) are presented to illustrate the capabilities and limitations of the instrument. Automated high-frequency flow cytometry revealed spatial and temporal variability of phytoplankton in the size range 1 to ~50 μm that could not be resolved otherwise. Due to some limitations (instrument memory, volume analysed per sample), recorded counts can be statistically too low. By combining consecutive high-frequency samples, it is possible to decrease the counting error, following Poisson's law, while retaining the main features of phytoplankton variability. With this technique, the analysis of phytoplankton variability combines adequate sampling frequency with effective monitoring of community changes.
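The Poisson pooling argument can be stated numerically: a count of N cells has relative standard error 1/sqrt(N), so merging k consecutive samples of similar count shrinks the error by a factor of sqrt(k). The sample counts below are illustrative, not values from the paper:

```python
import math

# Poisson counting statistics: relative standard error of a count of
# n cells is 1/sqrt(n); pooling k equal samples multiplies n by k.

def relative_error(n_counted):
    """Poisson relative standard error for a count of n cells."""
    return 1.0 / math.sqrt(n_counted)

def pooled_error(n_per_sample, k):
    """Error after pooling k consecutive samples of equal count."""
    return relative_error(n_per_sample * k)

# e.g. 100 cells per sample gives a 10% error; pooling six consecutive
# samples brings this down to roughly 4%.
```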


There is abundant empirical evidence of a negative relationship between welfare effort and poverty. However, the poverty indicators traditionally used have been representative of the monetary approach, excluding poverty's multidimensional reality from the analysis. Using three regression techniques for the period 1990-2010 and controlling for demographic and cyclical factors, this paper examines the relationship between social spending per capita (as the indicator of welfare effort) and poverty in up to 21 countries of the region. The proportion of the population with an income below the national basic basket of goods and services (PM1) and the proportion with an income below 50% of median income per capita (PM2) were the two poverty indicators taken from the monetary approach. From the capability approach, the proportion of the population with food inadequacy (PC1) and the proportion without access to improved water sources or sanitation facilities (PC2) were used. The findings confirm that social spending is indeed useful in explaining changes in poverty (PM1, PC1 and PC2), as there is a strong negative and significant correlation between the variables both before and after controlling for demographic and cyclical factors. In two of the regression techniques, social spending per capita did not show a negative relationship with PM2. Countries with greater welfare effort over the period 1990-2010 were not necessarily those with the lowest level of poverty. Ultimately, social spending per capita was more useful in explaining changes in poverty under the capability approach.