Abstract:
Recent results on direct femtosecond inscription of straight low-loss waveguides in borosilicate glass are presented. We also demonstrate the lowest losses reported to date in curvilinear waveguides, which we use as the main building blocks for integrated photonic circuits. Low-loss waveguides are of great importance to a variety of applications of integrated optics. We report recent results on direct femtosecond fabrication of smooth low-loss waveguides in standard optical glass using only a femtosecond chirped-pulse oscillator (Scientific XL, Femtolasers) operating at a repetition rate of 11 MHz and a wavelength of 800 nm, with an FWHM pulse duration of about 50 fs and a spectral width of 30 nm. The pulse energy on target was up to 70 nJ. In the transverse inscription geometry, we inscribed waveguides at depths from 10 to 300 micrometers beneath the surface of 50 x 50 x 1 mm samples of pure BK7 borosilicate glass. Translation of the samples was accomplished by a 2D air-bearing stage (Aerotech) with sub-micrometer precision at speeds of up to 100 mm per second (hardware limit). A third direction of translation (Z, along the inscribing beam, i.e. perpendicular to the sample plane) allows truly 3D structures to be fabricated. The waveguides were characterized in terms of induced refractive-index contrast, dimensions and cross-sections, mode-field profiles, and total insertion losses at both 633 nm and 1550 nm. The inscription showed almost no dependence on laser polarization. The experimental conditions (depth, laser polarization, pulse energy, translation speed and others) were optimized for minimum insertion loss when coupled to standard SMF-28 optical fibre. Our optimal inscription conditions coincide with those recently published by other groups [1, 3], despite significant differences in practically all experimental parameters.
Using the optimum regime for straight-waveguide fabrication, we inscribed a set of curvilinear tracks arranged to ensure the same propagation length (and thus propagation losses) and coupling conditions while the radius of curvature varied from 3 to 10 mm. This allowed us to measure bend losses, which are less than or about 1 dB/cm at a radius of curvature of R = 10 mm. Using the same set-up, we also demonstrate the possibility of fabricating periodic perturbations of the refractive index in such waveguides; we demonstrated periods of about 520 nm, which allowed us to fabricate wavelength-selective devices. This versatility, as well as the very short inscription time (the optimum translation speed was found to be 40 mm/s), makes our approach attractive for industrial applications, for example in next-generation high-speed telecom networks.
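The bend-loss measurement design above (tracks of equal propagation length and coupling conditions, differing only in curvature) can be sketched in a few lines. This is an illustrative reconstruction with invented numbers, not the authors' analysis code: the excess insertion loss of a curved track over the straight reference is attributed to bending and normalised per unit curved path.

```python
# Hedged sketch: extracting bend loss from insertion-loss measurements on
# tracks that share propagation length and coupling conditions, so any
# excess loss over the straight reference is attributed to bending.
def bend_loss_db_per_cm(il_curved_db, il_straight_db, curved_length_cm):
    """Excess loss per cm of curved path; all numbers below are invented."""
    return (il_curved_db - il_straight_db) / curved_length_cm

excess = bend_loss_db_per_cm(il_curved_db=4.0, il_straight_db=2.0,
                             curved_length_cm=2.0)
# excess == 1.0 (dB/cm), the order of magnitude reported for R = 10 mm
```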
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have been mainly attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4, 417–423]. Of most importance, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Most importantly, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R., Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially when the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme-groups comparison across the two studies.
The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.
Abstract:
To reveal the moisture migration mechanism of unsaturated red clays, which are sensitive to changes in water content and widely distributed in South China, and thereby to use them rationally as a filling material for highway embankments, a method for measuring the water content of red clay cylinders using X-ray computed tomography (CT) was proposed and verified. Studies of moisture migration in the red clays under rainfall and groundwater level were then performed at different degrees of compaction. The results show that the relationship between dry density, water content, and CT value determined from X-ray CT tests can be used to nondestructively measure the water content of red clay cylinders at different migration times, which avoids the error induced by sample-to-sample variation. Rainfall, groundwater level, and degree of compaction are factors that can significantly affect the moisture migration distance and migration rate. Techniques such as lowering the groundwater table and increasing the degree of compaction of the red clays can be used to prevent or delay moisture migration in highway embankments filled with red clays.
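The abstract states that a relationship among dry density, water content, and CT value allows nondestructive water-content measurement, but it does not give the functional form. A minimal sketch of one plausible scheme, assuming a linear calibration (the form, variable names, and all numbers here are assumptions for illustration, not values from the study):

```python
import numpy as np

# Hypothetical sketch: fit a planar relation CT = a*rho_d + b*w + c from
# calibration specimens, then invert it to infer the water content w of a
# specimen of known dry density from its measured CT value.
rho_d = np.array([1.45, 1.50, 1.55, 1.60])   # dry density, g/cm^3 (invented)
w     = np.array([0.20, 0.25, 0.22, 0.28])   # gravimetric water content
ct    = np.array([820., 905., 890., 990.])   # mean CT value per specimen

A = np.column_stack([rho_d, w, np.ones_like(ct)])
(a, b, c), *_ = np.linalg.lstsq(A, ct, rcond=None)   # least-squares fit

def water_content(ct_value, dry_density):
    # invert CT = a*rho_d + b*w + c for w
    return (ct_value - a * dry_density - c) / b
```

Because the inversion is exact algebra on the fitted plane, a CT value generated from the fit reproduces its water content, which is the property the nondestructive measurement relies on.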
Abstract:
Multiple transformative forces target marketing, many of which derive from new technologies that allow us to sample thinking in real time (i.e., brain imaging), or to look at large aggregations of decisions (i.e., big data). There has been an inclination to refer to the intersection of these technologies with the general topic of marketing as “neuromarketing”. There has not, however, been a serious effort to frame neuromarketing, which is the goal of this paper. Neuromarketing can be compared to neuroeconomics: neuroeconomics is generally focused on how individuals make “choices” and on representing distributions of choices. Neuromarketing, in contrast, focuses on how a distribution of choices can be shifted or “influenced”, which can occur at multiple “scales” of behavior (e.g., individual, group, or market/society). Given that influence can affect choice through many cognitive modalities, and not just the valuation of choice options, a science of influence also implies the need for a model of cognitive function integrating attention, memory, and reward/aversion function. The paper concludes with a brief description of three domains of neuromarketing application for studying influence, and their caveats.
Abstract:
The purpose of this research was to compare the delivery methods practiced by higher education faculty teaching distance courses with recommended or emerging standard instructional delivery methods for distance education. Previous research shows that traditional instructional strategies have been used in distance education and that there has been no training in distance teaching. Secondary data, however, appear to suggest emerging practices which could be pooled toward the development of standards. This is a qualitative study based on the constant comparative analysis approach of grounded theory. Participants (N = 5) were full-time faculty teaching distance education courses. The observation method used was unobtrusive content analysis of videotaped instruction. Triangulation of data was accomplished through one-on-one in-depth interviews and through literature review. Because non-media content was also being analyzed, a special time-sampling technique, influenced by content-analyst theories of media-related data, was designed by the researcher to sample the portions of the videotaped instruction that were observed and counted. A standardized interview guide was used to collect data from the in-depth interviews. Coding was based on categories drawn from the review of literature and from Cranton and Weston's (1989) typology of instructional strategies. The data were observed, counted, tabulated, analyzed, and interpreted solely by the researcher; systematic and rigorous data collection and analysis, however, led to credible data. The findings of this study supported the proposition that there are no standard instructional practices for distance teaching. Further, the findings revealed that of the emerging practices suggested by proponents and by faculty who teach distance education courses, few were practiced even minimally. A noted example was the use of lecture and questioning.
Questioning as a teaching tool was used a great deal with students at the originating site, but not with distance students. Lectures were given, but were mostly conducted in traditional fashion: long in duration and with no interactive component. It can be concluded from the findings that while there are no standard practices for instructional delivery in distance education, there appears to be sufficient information from secondary and empirical data to initiate some standard instructional practices. Therefore, grounded in these research data is the theory that the way to arrive at instructional delivery standards for televised distance education is a pooling of the tacitly agreed-upon emerging practices of proponents and practicing instructors. Implicit in this theory is a need for experimental research so that these emerging practices can be tested, tried, and proven, ultimately resulting in formal standards for instructional delivery in televised education.
Abstract:
In the field of postmortem toxicology, principles from pharmacology and toxicology are combined in order to determine whether exogenous substances contributed to one's death. To make this determination, postmortem and (whenever available) antemortem blood samples may be analyzed. This project focused on evaluating the relationship between postmortem and antemortem blood drug levels in order to better define an interpretive framework for postmortem toxicology. To do this, it was imperative to evaluate the differences between antemortem and postmortem drug concentrations, determine the role of microbial activity, and evaluate drug stability. Microbial studies determined that the bacteria Escherichia coli and Pseudomonas aeruginosa could use the carbon structures of drugs as a food source. This suggests that prior to sample collection, microbial activity could potentially affect drug levels. This process, however, would stop before toxicologic evaluation, as blood samples taken at autopsy are stored in tubes containing the antimicrobial agent sodium fluoride. Analysis of preserved blood determined that, under the current storage conditions, sodium fluoride effectively inhibited microbial growth. Nonetheless, in many instances inconsistent drug concentrations were identified. When comparing antemortem to postmortem results, diphenhydramine, morphine, codeine, and methadone all showed significantly increased postmortem drug levels. In many instances, increased postmortem concentrations correlated with extended postmortem intervals. Other drugs, such as alprazolam, were likely to show concentration discrepancies when short antemortem-to-death intervals were coupled with extended postmortem intervals. Still others, such as midazolam, followed the expected pattern of metabolism and elimination, which often resulted in decreased postmortem concentrations.
The importance of drug stability was displayed when reviewing the clonazepam/7-aminoclonazepam data, as the parent drug commonly converted to its metabolite even when stored in the presence of a preservative. In instances of decreasing postmortem drug concentrations, the effect of refrigerated storage could not be ruled out. A stability experiment involving codeine produced data indicating that concentrations could continue to decline under the current storage conditions. The cumulative data gathered in this project were used to identify concentration trends, which subsequently aided in the development of interpretive considerations for the specific analytes examined in the study.
Abstract:
Anthropogenic habitat alterations and water-management practices have imposed an artificial spatial scale onto the once contiguous freshwater marshes of the Florida Everglades. To gain insight into how these changes may affect biotic communities, we examined whether variation in the abundance and community structure of large fishes (SL > 8 cm) in Everglades marshes varied more at regional or intraregional scales, and whether this variation was related to hydroperiod, water depth, floating mat volume, and vegetation density. From October 1997 to October 2002, we used an airboat electrofisher to sample large fishes at sites within three regions of the Everglades. Each of these regions is subject to a unique water-management schedule. Dry-down events (water depth < 10 cm) occurred at several sites during spring in 1999, 2000, 2001, and 2002. The 2001 dry-down event was the most severe and widespread. Abundance of several fishes decreased significantly through time, and the number of days post-dry-down covaried significantly with abundance for several species. Processes operating at the regional scale appear to play important roles in regulating large fishes. The most pronounced patterns in abundance and community structure occurred at the regional scale, and the effect size for region was greater than the effect size for sites nested within region for the abundance of all species combined, all predators combined, and each of the seven most abundant species. Non-metric multi-dimensional scaling revealed distinct groupings of sites corresponding to the three regions. We also found significant variation in community structure through time that correlated with the number of days post-dry-down. Our results suggest that hydroperiod and water management at the regional scale influence the large-fish communities of Everglades marshes.
Abstract:
After developing field sampling protocols and making a series of consultations with investigators involved in research in CSSS habitat, we determined that vegetation-hydrology interactions within this landscape are best sampled at a combination of scales. At the finer scale, we decided to sample at 100 m intervals along transects crossing the range of habitats present; at the coarser scale, to conduct an extensive survey of vegetation at sites of known sparrow density dispersed throughout the range of the CSSS. We initiated sampling in the first week of January 2003 and continued through the last week of May. During this period, we established six transects, one in each CSSS subpopulation; completed topographic surveys along Transects A, C, D, and F; and sampled herb and shrub stratum vegetation, soil depth, and periphyton along Transect A and at 179 census points. We also conducted topographic surveys and completed vegetation and soil depth sampling along two of the five transects used by ENP researchers for monitoring long-term vegetation change in Taylor Slough. We analyzed the data by summarizing the compositional and structural measures and by using cluster analysis, ordination, weighted-averaging regression, and weighted-averaging calibration. The mean elevation of the transects decreased from north to south, and Transect F showed greater variation than the other transects. We identified eight vegetation assemblages that can be grouped into two broad categories, ‘wet prairie’ and ‘marsh’. In the 2003 survey, wet prairies were most dominant in the northeastern subpopulations and had shorter inferred hydroperiods, higher species richness, and shallower soils than marshes, which were common in subpopulations A and D and the southernmost regions of subpopulation B.
Most of the sites at which birds were observed during 2001 or 2002 had an inferred hydroperiod of 120-150 days, while no birds were observed at sites with an inferred hydroperiod of less than 120 days or more than 300 days. Management-induced water level changes in Taylor Slough during the 1980s and 1990s appeared to elicit parallel changes in vegetation. The results described in detail in the following pages serve as a basis for evaluating and modifying, if necessary, the sampling design and analytical techniques to be used in the next three years of the project.
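Weighted-averaging (WA) regression and calibration, the inference technique named above, can be shown in a few lines. This is a generic sketch of the standard method with an invented toy species-abundance matrix, not the project's data or code: WA regression estimates each species' hydroperiod optimum as the abundance-weighted mean of observed hydroperiods, and WA calibration infers a site's hydroperiod as the abundance-weighted mean of the optima of the species present there.

```python
import numpy as np

# Toy species-abundance matrix: rows = sites, columns = species.
# All abundances and hydroperiods are invented for illustration.
abund = np.array([
    [10., 0., 5.],
    [ 2., 8., 0.],
    [ 0., 6., 4.],
])
hydroperiod = np.array([100., 250., 300.])   # observed days per site

# WA regression: species optimum = abundance-weighted mean of the
# environmental variable over the sites where the species occurs.
optima = abund.T @ hydroperiod / abund.sum(axis=0)

# WA calibration: inferred hydroperiod of a site = abundance-weighted
# mean of the optima of the species recorded there.
inferred = abund @ optima / abund.sum(axis=1)
```

The inferred values necessarily fall inside the range of the training hydroperiods, which is one reason WA calibration is popular for inferring hydroperiod from vegetation composition alone.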
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are used by many industries because of their ability to manage sensors and control external hardware. The problem with commercially available systems is that they are restricted to a local network of users running proprietary software. There was no Internet development guide to give remote users outside the network control of, and access to, SCADA data and external hardware through simple user interfaces. To solve this problem, a server/client paradigm was implemented to make SCADAs available via the Internet. Two methods were applied and studied: polling of a text file as a low-end technology solution, and a Transmission Control Protocol (TCP/IP) socket connection. Users were allowed to log in to a website and remotely control a network of pumps and valves interfaced to a SCADA, enabling them to sample the water quality of different reservoir wells. The results were based on the real-time performance, stability, and ease of use of the remote interface and its programming, and indicated that the most feasible server to implement is the TCP/IP connection. For the user interface, Java applets and ActiveX controls provide the same real-time access.
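The TCP/IP socket approach found most feasible above can be sketched minimally. This is an illustrative reconstruction, not the thesis code: `fake_scada_read` stands in for a real SCADA driver call, and the channel names are invented.

```python
import socket
import threading

def fake_scada_read(channel):
    # Placeholder for a real SCADA driver call; returns a dummy reading.
    return {"pump1": "ON", "valve3": "CLOSED"}.get(channel, "UNKNOWN")

# Server side: listen on a TCP socket and answer one status query.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def handle_one_request():
    conn, _ = srv.accept()
    with conn:
        channel = conn.recv(1024).decode().strip()
        conn.sendall(fake_scada_read(channel).encode())

worker = threading.Thread(target=handle_one_request)
worker.start()

# Client side: what a remote user interface would do over the Internet.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"pump1")
    reply = cli.recv(1024).decode()
worker.join()
srv.close()
# reply == "ON"
```

The same request/response pattern scales to a persistent server loop; the text-file-polling alternative mentioned in the abstract trades this push-style immediacy for simplicity.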
Abstract:
Chloroperoxidase (CPO), a 298-residue glycosylated protein from the fungus Caldariomyces fumago, is probably the most versatile heme enzyme yet discovered. Interest in CPO as a catalyst is based on its power to produce enantiomerically enriched products. Recent research has focused on the ability of CPO to epoxidize alkenes with high regioselectivity and enantioselectivity as an efficient and environmentally benign alternative to traditional synthetic routes. There has been little work on the nature of ligand binding, which probably controls the regio- and enantiospecificity of CPO; consequently, it is here that we focus our work. We report docking calculations and computer simulations aimed at predicting the enantiospecificity of CPO-catalyzed epoxidation of three model substrates. On the basis of this work, candidate mutations to improve the efficiency of CPO are predicted. To accomplish these aims, a simulated annealing and molecular dynamics protocol is developed to sample potentially reactive substrate/CPO complexes.
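The core of the simulated-annealing sampling idea named above can be reduced to a toy example. This sketch is not the authors' protocol: it runs Metropolis moves on a one-dimensional stand-in for a binding coordinate, with an invented quadratic energy function and an assumed geometric cooling schedule, accepting uphill moves with probability exp(-dE/T).

```python
import math
import random

random.seed(1)

def energy(x):
    # Invented surrogate energy: minimum at x = 2.0 plays the role of a
    # low-energy (potentially reactive) substrate/enzyme pose.
    return (x - 2.0) ** 2

x, temp = 10.0, 5.0
for step in range(5000):
    trial = x + random.uniform(-0.5, 0.5)     # propose a small move
    d_e = energy(trial) - energy(x)
    # Metropolis criterion: always accept downhill, sometimes uphill.
    if d_e < 0 or random.random() < math.exp(-d_e / temp):
        x = trial
    temp = max(0.01, temp * 0.999)            # geometric cooling, floored

# x should finish near the energy minimum at 2.0
```

In the real protocol the scalar coordinate becomes the full set of substrate degrees of freedom and the energy comes from a molecular mechanics force field, but the acceptance rule and cooling logic are the same.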
Abstract:
The assemblages inhabiting the continental shelf around Antarctica are known to be very patchy, in large part due to deep iceberg impacts. The present study shows that the richness and abundance of much deeper benthos, at slope and abyssal depths, also vary greatly in the Southern and South Atlantic oceans. On the ANDEEP III expedition, we deployed 16 Agassiz trawls to sample the zoobenthos at depths from 1055 to 4930 m across the northern Weddell Sea and two South Atlantic basins. A total of 5933 specimens, belonging to 44 higher taxonomic groups, were collected. Overall, the most frequent taxa were Ophiuroidea, Bivalvia, Polychaeta and Asteroidea, and the most abundant taxa were Malacostraca, Polychaeta and Bivalvia. Species richness per station varied from 6 to 148. The taxonomic composition of assemblages, based on relative taxon richness, varied considerably between sites but showed no relation to depth. The three most abundant taxa each accounted for 10-30% of all specimens present. Standardised abundances based on trawl catches varied between 1 and 252 individuals per 1000 m². Abundance significantly decreased with increasing depth, and assemblages showed high patchiness in their distribution. Cluster analysis based on relative abundance showed changes of community structure that were not linked to depth, area, sediment grain size or temperature. Generally, abundances of zoobenthos in the abyssal Weddell Sea are lower than shelf abundances by several orders of magnitude.
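The standardisation behind the reported "individuals per 1000 m²" figures is a simple scaling of raw trawl counts by the seabed area swept. A minimal sketch with invented catch and swept-area numbers (not ANDEEP III data):

```python
# Sketch of the abundance standardisation implied above: raw trawl counts
# scaled to individuals per 1000 m^2 of seabed swept by the Agassiz trawl.
def abundance_per_1000m2(count, swept_area_m2):
    return 1000.0 * count / swept_area_m2

# Invented example: 504 specimens from a 2000 m^2 tow.
abundance_per_1000m2(504, 2000.0)  # -> 252.0 individuals per 1000 m^2
```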
Abstract:
Reliable dating of glaciomarine sediments deposited on the Antarctic shelf since the Last Glacial Maximum (LGM) is very challenging because of the general absence of calcareous (micro-)fossils and the recycling of fossil organic matter. As a consequence, radiocarbon (14C) ages of the acid-insoluble organic fraction (AIO) of the sediments bear uncertainties that are very difficult to quantify. In this paper we present the results of three different chronostratigraphic methods used to date a sedimentary unit consisting of diatomaceous ooze and diatomaceous mud that was deposited following the last deglaciation at five core sites on the inner shelf in the western Amundsen Sea (West Antarctica). In three cores, conventional 14C dating of the AIO in bulk sediment samples yielded age reversals down-core, but at all sites the AIO 14C ages obtained from diatomaceous ooze within the diatom-rich unit yielded similar uncorrected 14C ages, ranging from 13,517±56 to 11,543±47 years before present (yr BP). Correction of these ages by subtracting the core-top ages, which are assumed to reflect present-day deposition (as indicated by 210Pb dating of the sediment surface at one core site), yielded ages between ca. 10,500 and 8,400 calibrated years before present (cal yr BP). Correction of the AIO ages of the diatomaceous ooze by subtracting only the marine reservoir effect (MRE) of 1,300 years indicated deposition of the diatom-rich sediments between 14,100 and 11,900 cal yr BP. Most of these ages are consistent with age constraints between 13.0 and 8.0 ka BP for the diatom-rich unit, which we obtained by correlating the relative palaeomagnetic intensity (RPI) records of three of the sediment cores with global and regional reference curves for palaeomagnetic intensity. As a third dating technique, we applied conventional radiocarbon dating to the AIO included in acid-cleaned diatom hard parts extracted from the diatomaceous ooze.
This method yielded uncorrected 14C ages of only 5,111±38 and 5,106±38 yr BP, respectively. We reject these young ages, because they are likely to be overprinted by the adsorption of modern atmospheric carbon dioxide onto the surfaces of the extracted diatom hard parts prior to sample graphitisation and combustion for 14C dating. The deposition of the diatom-rich unit in the western Amundsen Sea suggests deglaciation of the inner shelf before ca. 13 ka BP. The deposition of diatomaceous oozes on other parts of the Antarctic shelf around the same time, however, seems to be coincidental rather than directly related.
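The two correction schemes described above are simple subtractions on the conventional 14C timescale, applied before calibration to cal yr BP. A sketch using the reservoir value and one uncorrected age quoted in the abstract (the core-top age passed in the example is invented):

```python
# Sketch of the two 14C age-correction schemes described above.
MRE = 1300                      # marine reservoir effect, years (from text)

def correct_with_core_top(age_14c, core_top_age_14c):
    # Subtract the core-top age, assumed to represent present-day deposition.
    return age_14c - core_top_age_14c

def correct_with_mre(age_14c):
    # Subtract only the marine reservoir effect.
    return age_14c - MRE

correct_with_mre(13517)   # -> 12217 (14C yr BP, still to be calibrated)
```

Calibration to cal yr BP requires a calibration curve and is not attempted here; the quoted 14,100-11,900 cal yr BP range follows from that further step.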
Abstract:
The Herschel Lensing Survey (HLS) takes advantage of gravitational lensing by massive galaxy clusters to sample a population of high-redshift galaxies which are too faint to be detected above the confusion limit of current far-infrared/submillimeter telescopes. Measurements from 100-500 μm bracket the peaks of the far-infrared spectral energy distributions of these galaxies, characterizing their infrared luminosities and star formation rates. We introduce initial results from our science demonstration phase observations, directed toward the Bullet cluster (1E0657-56). By combining our observations with LABOCA 870 μm and AzTEC 1.1 mm data, we fully constrain the spectral energy distributions of 19 MIPS 24 μm-selected galaxies located behind the cluster. We find that their colors are best fit using templates based on local galaxies with systematically lower infrared luminosities. This suggests that our sources are not like local ultra-luminous infrared galaxies, in which vigorous star formation is contained in a compact, highly dust-obscured region. Instead, they appear to be scaled-up versions of lower-luminosity local galaxies, with star formation occurring on larger physical scales.
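The step from a constrained far-infrared SED to a star formation rate is conventionally done with the Kennicutt (1998) scaling, SFR [Msun/yr] ≈ 4.5e-44 × L_IR [erg/s]. This is a standard relation in the field, not a formula quoted from the HLS paper, and the example luminosity is invented:

```python
# Illustrative sketch: converting an infrared luminosity to a star
# formation rate with the widely used Kennicutt (1998) scaling.
L_SUN_ERG_S = 3.846e33          # solar luminosity in erg/s

def sfr_from_lir(l_ir_solar):
    """SFR in Msun/yr from L_IR given in solar luminosities."""
    l_ir_cgs = l_ir_solar * L_SUN_ERG_S
    return 4.5e-44 * l_ir_cgs

sfr_from_lir(1e11)  # a 1e11 Lsun galaxy -> roughly 17 Msun/yr
```

Gravitational lensing magnifies the observed fluxes, so in practice the inferred L_IR must be divided by the lens magnification before applying such a relation.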
Abstract:
ACKNOWLEDGMENTS We thank the Geological Survey of Australia for permission to sample the Empress 1A and Lancer 1 cores, the Natural Sciences and Engineering Research Council of Canada for financial support (grant #7961–15) of U. Brand, and the National Natural Science Foundation of China for support of F. Meng and P. Ni (grants 41473039 and 4151101015). We thank M. Lozon (Brock University) for drafting and constructing the figures. We thank the editor, Brendan Murphy, as well as three reviewers (Steve Kesler, Erik Sperling, and an anonymous reviewer) for improving the manuscript into its final form. © The Authors. Gold Open Access: This paper is published under the terms of the CC-BY license.