936 results for delayed match-to-sample
Abstract:
Recent results on the direct femtosecond inscription of straight low-loss waveguides in borosilicate glass are presented. We also demonstrate the lowest losses reported to date in curvilinear waveguides, which we use as the main building blocks for integrated photonic circuits. Low-loss waveguides are of great importance to a variety of applications of integrated optics. We report on recent results of direct femtosecond fabrication of smooth low-loss waveguides in standard optical glass by means of a femtosecond chirped-pulse oscillator only (Scientific XL, Femtolasers), operating at a repetition rate of 11 MHz, at a wavelength of 800 nm, with an FWHM pulse duration of about 50 fs and a spectral width of 30 nm. The pulse energy on target was up to 70 nJ. In the transverse inscription geometry, we inscribed waveguides at depths from 10 to 300 micrometers beneath the surface in samples of 50 x 50 x 1 mm made of pure BK7 borosilicate glass. Translation of the samples was accomplished by a 2D air-bearing stage (Aerotech) with sub-micrometer precision at speeds of up to 100 mm per second (hardware limit). A third direction of translation (Z, along the inscribing beam, i.e. perpendicular to the sample plane) allows truly 3D structures to be fabricated. The waveguides were characterized in terms of induced refractive index contrast, dimensions and cross-sections, mode-field profiles, and total insertion losses at both 633 nm and 1550 nm. The inscription showed almost no dependence on laser polarization. The experimental conditions (depth, laser polarization, pulse energy, translation speed and others) were optimized for minimum insertion losses when coupled to a standard SMF-28 optical fibre. We found that our optimal inscription conditions coincide with those recently published by other groups [1, 3], despite significant differences in practically all experimental parameters. Using the optimum regime for straight-waveguide fabrication, we inscribed a set of curvilinear tracks arranged to ensure the same propagation length (and thus propagation losses) and coupling conditions, while the radius of curvature varied from 3 to 10 mm. This allowed us to measure bend losses, which are less than or about 1 dB/cm at a radius of curvature of R = 10 mm. We also demonstrate the possibility of fabricating periodic perturbations of the refractive index in such waveguides with the same set-up; periods of about 520 nm were achieved, which allowed us to fabricate wavelength-selective devices. This versatility, as well as the very short inscription time (the optimum translation speed was found to be 40 mm/sec), makes our approach attractive for industrial applications, for example in next-generation high-speed telecom networks.
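The bend-loss figure quoted above follows from comparing curved tracks against a straight reference fabricated under identical coupling and propagation-length conditions. A minimal sketch of that bookkeeping, with purely illustrative numbers rather than the measured data, is:

```python
# Hypothetical sketch: extracting bend loss from insertion-loss measurements of
# curvilinear tracks that share the same propagation length and coupling conditions.
# The numbers below are illustrative placeholders, not measured values.

arc_length_cm = 1.2                     # common propagation length of each curved track (cm)
insertion_loss_straight_dB = 1.0        # reference straight waveguide (coupling + propagation)
insertion_loss_curved_dB = {            # measured total insertion loss per radius of curvature
    3.0: 4.6,                           # radius (mm) -> loss (dB)
    5.0: 3.1,
    10.0: 2.1,
}

for radius_mm, loss_dB in sorted(insertion_loss_curved_dB.items()):
    # Because propagation length and coupling are identical, the excess loss
    # relative to the straight reference is attributed to the bend alone.
    bend_loss_dB_per_cm = (loss_dB - insertion_loss_straight_dB) / arc_length_cm
    print(f"R = {radius_mm:4.1f} mm  ->  bend loss ~ {bend_loss_dB_per_cm:.2f} dB/cm")
```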
Abstract:
To reveal the moisture migration mechanism of unsaturated red clays, which are sensitive to changes in water content and widely distributed in South China, and thus to use them rationally as a filling material for highway embankments, a method to measure the water content of red clay cylinders using X-ray computed tomography (CT) was proposed and verified. Studies of moisture migration in the red clays under rainfall and groundwater-level conditions were then performed at different degrees of compaction. The results show that the relationship between dry density, water content, and CT value determined from the X-ray CT tests can be used to nondestructively measure the water content of red clay cylinders at different migration times, which avoids the error introduced by sample-to-sample variation. Rainfall, groundwater level, and degree of compaction are factors that significantly affect the moisture migration distance and migration rate. Techniques such as lowering the groundwater table and increasing the degree of compaction of the red clays can be used to prevent or delay moisture migration in highway embankments filled with red clays.
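The nondestructive measurement rests on an empirical relationship among dry density, water content and CT value. A minimal sketch of that idea, assuming a linear calibration surface and using invented numbers only, is:

```python
# Hypothetical sketch of the CT calibration idea described above: fit an empirical
# relationship CT = f(dry density, water content) on reference specimens, then invert
# it to estimate the water content of a scanned cylinder whose dry density is known.
# The linear form and all numbers are assumptions for illustration only.
import numpy as np

# Calibration specimens: (dry density g/cm^3, water content %, mean CT value)
calib = np.array([
    [1.45, 18.0, 820.0],
    [1.45, 24.0, 905.0],
    [1.55, 18.0, 880.0],
    [1.55, 24.0, 960.0],
    [1.65, 21.0, 985.0],
])
rho_d, w, ct = calib.T

# Least-squares fit of CT = a + b*rho_d + c*w
A = np.column_stack([np.ones_like(rho_d), rho_d, w])
(a, b, c), *_ = np.linalg.lstsq(A, ct, rcond=None)

def water_content_from_ct(ct_value, dry_density):
    """Invert the fitted plane to recover water content (%) nondestructively."""
    return (ct_value - a - b * dry_density) / c

print(water_content_from_ct(ct_value=930.0, dry_density=1.55))
```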
Abstract:
Multiple transformative forces are targeting marketing, many of which derive from new technologies that allow us to sample thinking in real time (i.e., brain imaging) or to look at large aggregations of decisions (i.e., big data). There has been an inclination to refer to the intersection of these technologies with the general topic of marketing as "neuromarketing". There has not, however, been a serious effort to frame neuromarketing, which is the goal of this paper. Neuromarketing can be compared to neuroeconomics: neuroeconomics is generally focused on how individuals make "choices" and on representing distributions of choices, whereas neuromarketing focuses on how a distribution of choices can be shifted or "influenced", which can occur at multiple "scales" of behavior (e.g., individual, group, or market/society). Given that influence can affect choice through many cognitive modalities, and not just the valuation of choice options, a science of influence also implies the need to develop a model of cognitive function that integrates attention, memory, and reward/aversion function. The paper concludes with a brief description of three domains of neuromarketing application for studying influence, and their caveats.
Abstract:
Waste cooking oils can be converted into fuels to provide economic and environmental benefits. One option is to use such fuels in stationary engines for electricity generation, co-generation or tri-generation applications. In this study, biodiesel derived from waste cooking oil was tested in an indirect-injection, three-cylinder Lister Petter biodiesel engine. We compared the combustion and emission characteristics with those of fossil diesel operation. The physical and chemical properties of pure biodiesel (B100) and its blends (20% and 60% vol.) were measured and compared with those of diesel. With pure biodiesel fuel, full engine power was achieved and the cylinder gas pressure diagram showed stable operation. At full load, the peak cylinder pressure of B100 operation was almost the same as with diesel, and the peak burn rate of combustion was about 13% higher than with diesel. For biodiesel operation, the occurrence of peak burn rates was delayed compared to diesel. Fuel-line injection pressure was increased by 8.5-14.5% at all loads. In comparison to diesel, the start of combustion was delayed and 90% combustion occurred earlier. At full load, the total combustion duration of B100 operation was almost 16% lower than with diesel. Biodiesel exhaust gas emissions contained 3% higher CO2 and 4% lower NOx compared to diesel. CO emissions were similar at low-load conditions, but were reduced by a factor of about 15 at full load. Oxygen emission decreased by around 1.5%. Exhaust gas temperatures were almost identical for biodiesel and diesel operation. At full engine load, the brake specific fuel consumption (on a volume basis) and brake thermal efficiency were about 2.5% and 5% higher, respectively, compared to diesel. Full engine power was achieved with both blends, and little difference in engine performance and emission results was observed between the 20% and 60% blends. The study concludes that biodiesel derived from waste cooking oil gave better efficiency and lower NOx emissions than standard diesel. Copyright © 2012 SAE International.
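The comparison above hinges on brake specific fuel consumption and brake thermal efficiency. A minimal sketch of how those two quantities are computed from measured brake power and fuel flow, using assumed illustrative values and fuel properties rather than the paper's data, is:

```python
# Hypothetical sketch of the performance bookkeeping behind the comparison above:
# brake specific fuel consumption (BSFC) and brake thermal efficiency from measured
# brake power and fuel flow. All numbers and fuel properties are illustrative only.

def bsfc_g_per_kwh(fuel_flow_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption in g/kWh."""
    return fuel_flow_kg_per_h * 1000.0 / brake_power_kw

def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, lhv_mj_per_kg):
    """Fraction of the fuel's lower heating value converted to brake work."""
    fuel_power_kw = fuel_flow_kg_per_h / 3600.0 * lhv_mj_per_kg * 1000.0
    return brake_power_kw / fuel_power_kw

# Illustrative full-load comparison (assumed values, not the paper's data):
diesel = dict(power_kw=8.0, flow_kg_h=2.10, lhv=42.5)     # fossil diesel
b100   = dict(power_kw=8.0, flow_kg_h=2.35, lhv=37.5)     # waste-cooking-oil biodiesel

for name, f in (("diesel", diesel), ("B100", b100)):
    print(name,
          round(bsfc_g_per_kwh(f["flow_kg_h"], f["power_kw"]), 1), "g/kWh,",
          round(100 * brake_thermal_efficiency(f["power_kw"], f["flow_kg_h"], f["lhv"]), 1), "%")
```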
Abstract:
This study examines the development of Japanese corporate management, its historical and social roots, and the process of its institutionalization. In the first part, the author reviews the main geographical and historical factors and circumstances that played a role in shaping Japanese management practice. He then follows, along a defined historical timeline, the Japanese changes and corporate-management responses corresponding to each era of American management history, from the teachings of classical management (Taylorism, rationalization...) up to the requirements imposed by today's global competition. He points out when and why important shifts in corporate management were delayed in Japan compared to the United States, the leader in this field, and then evaluates the theoretical and practical significance of these delays. This historical overview is intended to serve as a basis for assessing the current economic and social developments and reforms in Japan. ____ This paper describes the development, the roots and the process of institutionalization of corporate management in Japan. In the first half the author looks at what major geographical factors and cultural characteristics could have played a role in shaping Japanese management practices. Then, along a historical timeline, he traces the Japanese reactions to each of the major American management eras: from classical management (Taylorism, rationalization...) up to the challenges of today's global competition. It is pointed out where paradigm shifts were delayed compared to the US, and, while providing potential reasons, the author evaluates the significance of the differences. This historical comparison ought to provide a basis and scope of understanding for today's events and ongoing reforms in Japan.
Abstract:
The purpose of this research was to compare the delivery methods practiced by higher education faculty teaching distance courses with recommended or emerging standard instructional delivery methods for distance education. Previous research shows that traditional instructional strategies have been used in distance education and that there has been no training for distance teaching. Secondary data, however, appear to suggest emerging practices which could be pooled toward the development of standards. This is a qualitative study based on the constant comparative analysis approach of grounded theory. Participants (N = 5) were full-time faculty teaching distance education courses. The observation method used was unobtrusive content analysis of videotaped instruction. Triangulation of data was accomplished through one-on-one in-depth interviews and through the literature review. Because non-media content was also being analyzed, a special time-sampling technique, influenced by content-analysis theories of media-related data, was designed by the researcher to sample the portions of the videotaped instruction that were observed and counted. A standardized interview guide was used to collect data from the in-depth interviews. Coding was based on categories drawn from the review of literature and from Cranton and Weston's (1989) typology of instructional strategies. The data were observed, counted, tabulated, analyzed, and interpreted solely by the researcher; systematic and rigorous data collection and analysis, however, led to credible data. The findings of this study supported the proposition that there are no standard instructional practices for distance teaching. Further, the findings revealed that of the emerging practices suggested by proponents and by faculty who teach distance education courses, few were practiced even minimally. A noted example was the use of lecture and questioning. Questioning as a teaching tool was used a great deal with students at the originating site, but not with distance students. Lectures were given, but were mostly conducted in traditional fashion: long in duration and with no interactive component. It can be concluded from the findings that while there are no standard practices for instructional delivery in distance education, there appears to be sufficient information from secondary and empirical data to initiate some standard instructional practices. Grounded in these research data, therefore, is the theory that the way to arrive at instructional delivery standards for televised distance education is to pool the tacitly agreed-upon emerging practices of proponents and practicing instructors. Implicit in this theory is a need for experimental research so that these emerging practices can be tested, tried, and proven, ultimately resulting in formal standards for instructional delivery in television education.
Abstract:
In postmortem toxicology, principles from pharmacology and toxicology are combined to determine whether exogenous substances contributed to a person's death. To make this determination, postmortem and (whenever available) antemortem blood samples may be analyzed. This project focused on evaluating the relationship between postmortem and antemortem blood drug levels in order to better define an interpretive framework for postmortem toxicology. To do this, it was imperative to evaluate the differences between antemortem and postmortem drug concentrations, determine the role of microbial activity, and evaluate drug stability. Microbial studies determined that the bacteria Escherichia coli and Pseudomonas aeruginosa could use the carbon structures of drugs as a food source. This suggests that, prior to sample collection, microbial activity could potentially affect drug levels. This process, however, would stop before toxicologic evaluation, because at autopsy blood samples are stored in tubes containing the antimicrobial agent sodium fluoride. Analysis of preserved blood determined that, under the current storage conditions, sodium fluoride effectively inhibited microbial growth. Nonetheless, in many instances inconsistent drug concentrations were identified. When comparing antemortem to postmortem results, diphenhydramine, morphine, codeine and methadone all showed significantly increased postmortem drug levels. In many instances, increased postmortem concentrations correlated with extended postmortem intervals. Other drugs, such as alprazolam, were likely to show concentration discrepancies when short antemortem-to-death intervals were coupled with extended postmortem intervals, while still others, such as midazolam, followed the expected pattern of metabolism and elimination, which often resulted in decreased postmortem concentrations. The importance of drug stability was evident in the clonazepam/7-aminoclonazepam data, as the parent drug commonly converted to its metabolite even when stored in the presence of a preservative. In instances of decreasing postmortem drug concentrations, the effect of refrigerated storage could not be ruled out. A stability experiment involving codeine produced data indicating that concentrations could continue to decline under the current storage conditions. The cumulative data gathered in this project were used to identify concentration trends, which subsequently aided in the development of interpretive considerations for the specific analytes examined in the study.
Abstract:
Anthropogenic habitat alterations and water-management practices have imposed an artificial spatial scale onto the once contiguous freshwater marshes of the Florida Everglades. To gain insight into how these changes may affect biotic communities, we examined whether variation in the abundance and community structure of large fishes (SL > 8 cm) in Everglades marshes varied more at regional or intraregional scales, and whether this variation was related to hydroperiod, water depth, floating mat volume, and vegetation density. From October 1997 to October 2002, we used an airboat electrofisher to sample large fishes at sites within three regions of the Everglades, each of which is subject to a unique water-management schedule. Dry-down events (water depth < 10 cm) occurred at several sites during spring in 1999, 2000, 2001, and 2002; the 2001 dry-down event was the most severe and widespread. Abundance of several fishes decreased significantly through time, and the number of days post-dry-down covaried significantly with abundance for several species. Processes operating at the regional scale appear to play important roles in regulating large fishes. The most pronounced patterns in abundance and community structure occurred at the regional scale, and the effect size for region was greater than the effect size for sites nested within region for the abundance of all species combined, all predators combined, and each of the seven most abundant species. Non-metric multi-dimensional scaling revealed distinct groupings of sites corresponding to the three regions. We also found significant variation in community structure through time that correlated with the number of days post-dry-down. Our results suggest that hydroperiod and water management at the regional scale influence the large-fish communities of Everglades marshes.
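The regional-versus-intraregional comparison above rests on partitioning variation between regions and sites nested within regions and comparing effect sizes. A minimal sketch of that nested decomposition, with invented site and abundance data rather than the study's survey results, is:

```python
# Hypothetical sketch of the scale comparison described above: partition variation in
# fish abundance among regions and among sites nested within regions, then compare
# eta-squared effect sizes for the two scales. All data below are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows = []
for region, lam in [("WCA3A", 9.0), ("WCA3B", 8.0), ("ENP", 4.0)]:
    for site in range(1, 5):                       # 4 sites nested in each region
        for _ in range(10):                        # 10 electrofishing samples per site
            rows.append((region, f"{region}_s{site}", rng.poisson(lam)))
df = pd.DataFrame(rows, columns=["region", "site", "abundance"])

grand = df["abundance"].mean()
region_means = df.groupby("region")["abundance"].transform("mean")
site_means = df.groupby("site")["abundance"].transform("mean")

ss_region = ((region_means - grand) ** 2).sum()          # among regions
ss_site = ((site_means - region_means) ** 2).sum()       # among sites within regions
ss_error = ((df["abundance"] - site_means) ** 2).sum()   # within sites
ss_total = ss_region + ss_site + ss_error

print("eta^2 region       =", round(ss_region / ss_total, 3))
print("eta^2 site(region) =", round(ss_site / ss_total, 3))
```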
Abstract:
Lineup procedures have recently garnered extensive empirical attention in an effort to reduce the number of mistaken identifications that plague the criminal justice system. Relatively little attention, however, has been paid to the influence of the lineup constructor or the lineup construction technique on the quality of the lineup. This study examined, in a series of three phases, whether the cross-race effect influences the quality of lineups constructed using a match-to-suspect or match-to-description technique. Participants generated descriptions of same- and other-race targets in Phase 1, which were used in Phase 2. In Phase 2, participants were asked to create lineups for own-race and other-race targets using one of the two techniques. The lineups created in this phase were examined for quality in Phase 3 by calculating lineup fairness assessments through the use of a mock-witness paradigm. Overall, the results of these phases suggest that the race of those involved in the lineup construction process influences lineups. There was no difference in witness description accuracy in Phase 1, which ran counter to predictions based on the cross-race effect. The cross-race effect was observed, however, in Phases 2 and 3. The lineup construction technique used also influenced several of the process measures, selection estimates, and fairness judgments in Phase 2. Interestingly, the cross-race effect was in the opposite direction from that predicted for some measures in both phases. In Phase 2, the cross-race effect was as predicted for the number of foils viewed, but in the opposite direction for the average time spent viewing each foil. In Phase 3, the cross-race effect was in the opposite direction from that predicted, with higher levels of lineup fairness for other-race lineups. The practical implications of these findings are discussed in relation to lineup fairness within the legal system.
Abstract:
With increased antibiotic exposure from anthropogenic sources, soil microbes are an ever-growing ecological pool of resistant bacteria. This is the case with bacterial resistance to vancomycin through the transfer of van resistance genes by transposons. Studies show that bacterial species other than enterococci harbor genetic elements, such as the Tn1546 transposon, containing vancomycin-resistance genes. Overuse and misuse of antibiotics in hospital settings and agricultural practices have led to an increase in the transferability of vancomycin-resistance genes among microbes. The objective of this project is to analyze the diversity of these genes found in soil microbes from Miami-Dade County. Bacterial isolates were Gram-stained and the Kirby-Bauer antibiotic disk diffusion test was performed to determine the degree of resistance. Results showed that all bacterial isolates were resistant to penicillin at the 10 µg concentration and most were susceptible to varying vancomycin concentrations (10 µg, 20 µg, and 30 µg). A 1465 bp fragment of the 16S rDNA gene was amplified from the multi-antibiotic-resistant bacteria using the 27F and 1492R universal primers and sequenced to identify the isolates. Three Gram-negative bacterial genera were identified with the closest phylogenetic matches to Pseudomonas sp., Stenotrophomonas sp., and Xanthomonas sp., as well as two Gram-positive genera: Bacillus sp. and Brevibacillus sp. The isolates' vanA and vanB genes were amplified using the respective primers. Work is underway to sequence and compare these known van resistance genes, with the goal of revealing intrinsic vancomycin resistance present in soil bacteria.
Abstract:
After developing field sampling protocols and making a series of consultations with investigators involved in research in CSSS habitat, we determined that vegetation-hydrology interactions within this landscape are best sampled at a combination of scales. At the finer scale, we decided to sample at 100 m intervals along transects that cross the range of habitats present, and at the coarser scale, to conduct an extensive survey of vegetation at sites of known sparrow density dispersed throughout the range of the CSSS. We initiated sampling in the first week of January 2003 and continued it through the last week of May. During this period, we established six transects, one in each CSSS subpopulation, completed topographic surveys along Transects A, C, D, and F, and sampled herb- and shrub-stratum vegetation, soil depth and periphyton along Transect A and at 179 census points. We also conducted topographic surveys and completed vegetation and soil depth sampling along two of the five transects used by ENP researchers for monitoring long-term vegetation change in Taylor Slough. We analyzed the data by summarizing the compositional and structural measures and by using cluster analysis, ordination, weighted averaging regression, and weighted averaging calibration. The mean elevation of the transects decreased from north to south, and Transect F had greater variation than the other transects. We identified eight vegetation assemblages that can be grouped into two broad categories, 'wet prairie' and 'marsh'. In the 2003 survey, wet prairies were most dominant in the northeastern sub-populations, and had shorter inferred hydroperiods, higher species richness and shallower soils than marshes, which were common in Sub-populations A and D and the southernmost regions of Sub-population B. Most of the sites at which birds were observed during 2001 or 2002 had an inferred hydroperiod of 120-150 days, while no birds were observed at sites with an inferred hydroperiod of less than 120 days or more than 300 days. Management-induced water level changes in Taylor Slough during the 1980s and 1990s appeared to elicit parallel changes in vegetation. The results described in detail in the following pages serve as a basis for evaluating and modifying, if necessary, the sampling design and analytical techniques to be used in the next three years of the project.
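The inferred hydroperiods above come from weighted averaging regression and calibration applied to the vegetation data. A minimal sketch of the basic weighted-averaging idea, with made-up species, abundances and hydroperiods, is below; real applications also apply a deshrinking correction, which is omitted here for brevity.

```python
# Hypothetical sketch of weighted averaging (WA) regression and calibration as used to
# infer hydroperiod from vegetation composition. Species names, abundances and
# hydroperiods are made up for illustration only.
import numpy as np

species = ["Muhlenbergia", "Cladium", "Eleocharis", "Panicum"]

# Calibration set: rows = sites with known hydroperiod, columns = species abundances.
abund = np.array([
    [5.0, 3.0, 0.0, 1.0],   # short-hydroperiod wet prairie
    [4.0, 4.0, 1.0, 1.0],
    [1.0, 5.0, 3.0, 0.0],
    [0.0, 2.0, 6.0, 2.0],   # long-hydroperiod marsh
])
hydroperiod_days = np.array([110.0, 150.0, 220.0, 300.0])

# WA regression: each species' optimum is the abundance-weighted mean hydroperiod.
optima = (abund * hydroperiod_days[:, None]).sum(axis=0) / abund.sum(axis=0)

# WA calibration: a new site's inferred hydroperiod is the abundance-weighted
# mean of the optima of the species present there.
new_site = np.array([2.0, 4.0, 2.0, 0.0])
inferred = (new_site * optima).sum() / new_site.sum()

for s, o in zip(species, optima):
    print(f"{s:13s} optimum ~ {o:5.0f} days")
print(f"inferred hydroperiod for the new site ~ {inferred:.0f} days")
```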
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are used by many industries because of their ability to manage sensors and control external hardware. The problem with commercially available systems is that they are restricted to a local network of users running proprietary software. There was no Internet development guide for giving remote users outside the network control of, and access to, SCADA data and external hardware through simple user interfaces. To solve this problem, a server/client paradigm was implemented to make SCADAs available via the Internet. Two methods were applied and studied: polling of a text file as a low-end technology solution, and a Transmission Control Protocol (TCP/IP) socket connection. Users were allowed to log in to a website and remotely control a network of pumps and valves interfaced to a SCADA, enabling them to sample the water quality of different reservoir wells. The results were based on the real-time performance, stability and ease of use of the remote interface and its programming. They indicated that the most feasible server to implement is the TCP/IP connection. For the user interface, Java applets and ActiveX controls provide the same real-time access.
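The TCP/IP socket approach amounts to a small server sitting beside the SCADA that accepts simple commands from remote clients and relays them to the control layer. A minimal sketch of such a server is shown below; the command names and the scada_write/scada_read helpers are placeholders, not part of any real SCADA API.

```python
# Hypothetical sketch of the TCP/IP socket approach described above: a small server
# running next to the SCADA accepts simple text commands from remote clients and
# relays them to the control layer. Command names and the scada_write/scada_read
# helpers are placeholders, not part of any real SCADA API.
import socketserver

def scada_write(tag, value):
    print(f"SCADA <- {tag} = {value}")        # stand-in for the real hardware interface

def scada_read(tag):
    return 42.0                               # stand-in for a real sensor read

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:                            # one command per line
            parts = raw.decode().strip().split()
            if not parts:
                continue
            if parts[0] == "SET" and len(parts) == 3:     # e.g. "SET pump1 ON"
                scada_write(parts[1], parts[2])
                self.wfile.write(b"OK\n")
            elif parts[0] == "GET" and len(parts) == 2:   # e.g. "GET well3_quality"
                self.wfile.write(f"{scada_read(parts[1])}\n".encode())
            else:
                self.wfile.write(b"ERR unknown command\n")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5050), CommandHandler) as srv:
        srv.serve_forever()
```

A browser front end such as the Java applet or ActiveX control mentioned above would open a socket to this port and exchange the same line-oriented commands.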
Abstract:
Chloroperoxidase (CPO), a 298-residue glycosylated protein from the fungus Caldariomyces fumago, is probably the most versatile heme enzyme yet discovered. Interest in CPO as a catalyst is based on its power to produce enantiomerically enriched products. Recent research has focused on the ability of CPO to epoxidize alkenes with high regioselectivity and enantioselectivity as an efficient and environmentally benign alternative to traditional synthetic routes. There has been little work on the nature of ligand binding, which probably controls the regio- and enantiospecificity of CPO; consequently, it is here that we focus our work. We report docking calculations and computer simulations aimed at predicting the enantiospecificity of CPO-catalyzed epoxidation of three model substrates. On the basis of this work, candidate mutations to improve the efficiency of CPO are predicted. To accomplish these aims, a simulated annealing and molecular dynamics protocol was developed to sample potentially reactive substrate/CPO complexes.
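The sampling step named above relies on simulated annealing to explore low-energy substrate poses. A generic sketch of such an annealing/Metropolis loop is shown below; the toy energy function and coordinates are assumptions for illustration and do not represent the force field or protocol actually used in the study.

```python
# Generic sketch of a simulated-annealing / Metropolis sampling loop of the kind used
# to search for low-energy, potentially reactive substrate poses. The energy function
# below is a toy placeholder, not the force field or protocol used in the study.
import math
import random

def pose_energy(pose):
    """Toy energy: penalize distance of the substrate's reactive carbon from an
    assumed ideal position above the heme iron (units arbitrary)."""
    x, y, z = pose
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + (z - 4.0) ** 2

def anneal(start, temps=(300.0, 200.0, 100.0, 50.0), steps_per_temp=2000, step=0.2):
    pose = list(start)
    energy = pose_energy(pose)
    accepted = []                                   # poses kept for later analysis
    for temperature in temps:                       # gradually cool the system
        for _ in range(steps_per_temp):
            trial = [c + random.uniform(-step, step) for c in pose]
            trial_energy = pose_energy(trial)
            delta = trial_energy - energy
            # Metropolis criterion: always accept downhill moves, sometimes uphill.
            if delta <= 0 or random.random() < math.exp(-delta / temperature):
                pose, energy = trial, trial_energy
                accepted.append((energy, tuple(pose)))
    return min(accepted)                            # lowest-energy pose found

print(anneal(start=(5.0, 5.0, 5.0)))
```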
Abstract:
The assemblages inhabiting the continental shelf around Antarctica are known to be very patchy, in large part due to deep iceberg impacts. The present study shows that the richness and abundance of much deeper benthos, at slope and abyssal depths, also vary greatly in the Southern and South Atlantic oceans. On the ANDEEP III expedition, we deployed 16 Agassiz trawls to sample the zoobenthos at depths from 1055 to 4930 m across the northern Weddell Sea and two South Atlantic basins. A total of 5933 specimens, belonging to 44 higher taxonomic groups, were collected. Overall, the most frequent taxa were Ophiuroidea, Bivalvia, Polychaeta and Asteroidea, and the most abundant taxa were Malacostraca, Polychaeta and Bivalvia. Each of these three most abundant taxa accounted for 10-30% of all specimens. Species richness per station varied from 6 to 148. The taxonomic composition of assemblages, based on relative taxon richness, varied considerably between sites but showed no relation to depth. Standardised abundances based on the trawl catches varied between 1 and 252 individuals per 1000 m². Abundance decreased significantly with increasing depth, and assemblages showed high patchiness in their distribution. Cluster analysis based on relative abundance showed changes in community structure that were not linked to depth, area, sediment grain size or temperature. In general, zoobenthos abundances in the abyssal Weddell Sea are lower than shelf abundances by several orders of magnitude.
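The standardised abundances and the depth trend quoted above involve converting raw trawl catches to densities per unit seabed area and then relating them to depth. A minimal sketch of that calculation, using invented station values rather than the expedition's data, is:

```python
# Hypothetical sketch of the standardisation and depth trend behind the figures above:
# convert raw trawl catches to individuals per 1000 m^2 of seabed swept, then fit a
# simple log-linear trend of abundance against depth. All numbers are illustrative.
import numpy as np

# Per-station data: raw count, area swept by the Agassiz trawl (m^2), depth (m).
counts = np.array([900, 480, 260, 120, 60, 25])
swept_area_m2 = np.array([3600.0, 4100.0, 3900.0, 4500.0, 4200.0, 3800.0])
depth_m = np.array([1055.0, 1800.0, 2600.0, 3400.0, 4200.0, 4930.0])

abundance_per_1000m2 = counts / swept_area_m2 * 1000.0
print(np.round(abundance_per_1000m2, 1))

# Log-linear fit: log10(abundance) = a + b * depth; b < 0 indicates decline with depth.
b, a = np.polyfit(depth_m, np.log10(abundance_per_1000m2), 1)
print(f"slope = {b:.2e} log10(ind./1000 m^2) per metre of depth")
```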
Abstract:
Reliable dating of glaciomarine sediments deposited on the Antarctic shelf since the Last Glacial Maximum (LGM) is very challenging because of the general absence of calcareous (micro-)fossils and the recycling of fossil organic matter. As a consequence, radiocarbon (14C) ages of the acid-insoluble organic fraction (AIO) of the sediments bear uncertainties that are very difficult to quantify. In this paper we present the results of three different chronostratigraphic methods used to date a sedimentary unit consisting of diatomaceous ooze and diatomaceous mud that was deposited following the last deglaciation at five core sites on the inner shelf of the western Amundsen Sea (West Antarctica). In three cores, conventional 14C dating of the AIO in bulk sediment samples yielded age reversals down-core, but at all sites the AIO 14C ages obtained from diatomaceous ooze within the diatom-rich unit were similar, with uncorrected 14C ages ranging from 13,517±56 to 11,543±47 years before present (yr BP). Correction of these ages by subtracting the core-top ages, which are assumed to reflect present-day deposition (as indicated by 210Pb dating of the sediment surface at one core site), yielded ages between ca. 10,500 and 8,400 calibrated years before present (cal yr BP). Correction of the AIO ages of the diatomaceous ooze by subtracting only the marine reservoir effect (MRE) of 1,300 years indicated deposition of the diatom-rich sediments between 14,100 and 11,900 cal yr BP. Most of these ages are consistent with age constraints of 13.0 to 8.0 ka BP for the diatom-rich unit, which we obtained by correlating the relative palaeomagnetic intensity (RPI) records of three of the sediment cores with global and regional reference curves for palaeomagnetic intensity. As a third dating technique, we applied conventional radiocarbon dating to the AIO included in acid-cleaned diatom hard parts extracted from the diatomaceous ooze. This method yielded uncorrected 14C ages of only 5,111±38 and 5,106±38 yr BP. We reject these young ages because they are likely to be overprinted by the adsorption of modern atmospheric carbon dioxide onto the surfaces of the extracted diatom hard parts prior to sample graphitisation and combustion for 14C dating. The deposition of the diatom-rich unit in the western Amundsen Sea suggests deglaciation of the inner shelf before ca. 13 ka BP. The deposition of diatomaceous oozes on other parts of the Antarctic shelf at around the same time, however, seems to be coincidental rather than directly related.
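The two correction strategies described above differ only in what is subtracted from the uncorrected AIO age before calibration. A minimal sketch of that arithmetic, with an illustrative ooze age and core-top age rather than the measured values, is:

```python
# Hypothetical sketch of the two correction strategies described above: subtracting a
# core-top (present-day) AIO age versus subtracting a fixed marine reservoir effect
# (MRE) of 1,300 years. Values are illustrative; real cal yr BP ages additionally
# require calibration against a radiocarbon calibration curve, which is omitted here.

MRE_YEARS = 1300

ooze_age_14c = 12500            # uncorrected AIO 14C age of a diatomaceous ooze sample (yr BP)
core_top_age_14c = 3100         # uncorrected AIO 14C age of the sediment surface (yr BP)

# Strategy 1: assume the core-top age measures the local contamination/reservoir
# offset for present-day deposition and subtract it entirely.
corrected_core_top = ooze_age_14c - core_top_age_14c

# Strategy 2: subtract only the regional marine reservoir effect.
corrected_mre = ooze_age_14c - MRE_YEARS

print("core-top corrected 14C age:", corrected_core_top, "yr BP (then calibrate)")
print("MRE corrected 14C age:     ", corrected_mre, "yr BP (then calibrate)")
```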