919 results for Manual transport of loads
Abstract:
A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain, where they are then incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers, “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
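For readers unfamiliar with the notation, the Becker–Döring equations referred to above take the following standard generic form; the cluster concentrations c_r and the rate coefficients are written abstractly, and the paper's emulsion-specific coefficients and inter-domain transport terms are not reproduced here.

```latex
% Standard Becker--Doring system (generic form, not the paper's full model).
% c_r = concentration of clusters of r monomers, J_r = net flux from size r to r+1,
% a_r = aggregation rate, b_r = fragmentation rate.
\frac{\mathrm{d}c_r}{\mathrm{d}t} = J_{r-1} - J_r, \qquad r \ge 2,
\qquad J_r = a_r c_1 c_r - b_{r+1} c_{r+1},
\qquad
\frac{\mathrm{d}c_1}{\mathrm{d}t} = -J_1 - \sum_{r \ge 1} J_r .
```

With this choice of monomer equation the total particle number, the sum of r c_r over all r, is conserved, which is the conservation property emphasized in the abstract.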
Abstract:
The National Center for Atmospheric Research-Community Climate System Model (NCAR-CCSM) is used in a coupled atmosphere–ocean–sea-ice simulation of the Last Glacial Maximum (LGM, around 21,000 years ago) climate. In the tropics, the simulation shows a moderate cooling of 3 °C over land and 2 °C in the ocean in zonal average. This cooling is about 1 °C cooler than the CLIMAP sea surface temperatures (SSTs) but consistent with recent estimates of both land and sea surface temperature changes. Subtropical waters are cooled by 2–2.5 °C, also in agreement with recent estimates. The simulated oceanic thermohaline circulation at the LGM is not only shallower but also weaker than the modern circulation, with a migration of the deep-water formation site in the North Atlantic, as suggested by the paleoceanographic evidence. The simulated northward flow of Antarctic Bottom Water (AABW) is enhanced. These deep circulation changes are attributable to the increased surface density flux in the Southern Ocean caused by sea-ice expansion at the LGM. Both the Gulf Stream and the Kuroshio are intensified due to the overall increase of wind stress over the subtropical oceans. The intensified zonal wind stress and the southward shift of its maximum in the Southern Ocean effectively enhance the transport of the Antarctic Circumpolar Current (ACC) by more than 50%. Simulated SSTs are lowered by up to 8 °C in the midlatitudes. Simulated conditions in the North Atlantic are warmer, with less sea ice, than indicated by CLIMAP, again in agreement with more recent estimates. The increased meridional SST gradient at the LGM results in an enhanced Hadley Circulation and increased midlatitude storm track precipitation. The increased baroclinic storm activity also intensifies the meridional atmospheric heat transport. A sensitivity experiment shows that about half of the simulated tropical cooling at the LGM originates from reduced atmospheric concentrations of greenhouse gases.
Abstract:
Whole-genome sequencing (WGS) could potentially provide a single platform for extracting all the information required to predict an organism’s phenotype. However, its ability to provide accurate predictions has not yet been demonstrated in large independent studies of specific organisms. In this study, we aimed to develop a genotypic prediction method for antimicrobial susceptibilities. The whole genomes of 501 unrelated Staphylococcus aureus isolates were sequenced, and the assembled genomes were interrogated using BLASTn for a panel of known resistance determinants (chromosomal mutations and genes carried on plasmids). Results were compared with phenotypic susceptibility testing for 12 commonly used antimicrobial agents (penicillin, methicillin, erythromycin, clindamycin, tetracycline, ciprofloxacin, vancomycin, trimethoprim, gentamicin, fusidic acid, rifampin, and mupirocin) performed by the routine clinical laboratory. We investigated discrepancies by repeat susceptibility testing and manual inspection of the sequences and used this information to optimize the resistance determinant panel and BLASTn algorithm. We then tested performance of the optimized tool in an independent validation set of 491 unrelated isolates, with phenotypic results obtained in duplicate by automated broth dilution (BD Phoenix) and disc diffusion. In the validation set, the overall sensitivity and specificity of the genomic prediction method were 0.97 (95% confidence interval [95% CI], 0.95 to 0.98) and 0.99 (95% CI, 0.99 to 1), respectively, compared to standard susceptibility testing methods. The very major error rate was 0.5%, and the major error rate was 0.7%. WGS was as sensitive and specific as routine antimicrobial susceptibility testing methods. WGS is a promising alternative to culture methods for resistance prediction in S. aureus and ultimately other major bacterial pathogens.
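A minimal sketch of the validation statistics quoted above: genotypic resistance calls are scored against phenotypic susceptibility testing to obtain sensitivity, specificity and the very major/major error rates. The data and helper names below are illustrative, not the study's actual pipeline, and denominator conventions for the error rates vary between studies.

```python
# Score genotypic resistance predictions against phenotypic results (toy data).

def score(predicted_resistant, phenotype_resistant):
    """Sensitivity, specificity and error rates for resistance prediction.

    A 'very major error' is a phenotypically resistant isolate predicted
    susceptible; a 'major error' is a susceptible isolate predicted resistant.
    Total tests are used as the denominator here; conventions differ by study.
    """
    pairs = list(zip(predicted_resistant, phenotype_resistant))
    tp = sum(1 for p, o in pairs if p and o)
    tn = sum(1 for p, o in pairs if not p and not o)
    fp = sum(1 for p, o in pairs if p and not o)
    fn = sum(1 for p, o in pairs if not p and o)
    total = len(pairs)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "very_major_error_rate": fn / total,
        "major_error_rate": fp / total,
    }

# Toy example: predicted vs. phenotypic calls for ten isolate/drug combinations.
predicted = [True, True, False, False, True, False, True, False, False, True]
phenotype = [True, True, False, False, True, False, True, False, True, True]
print(score(predicted, phenotype))
```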
Abstract:
There are well-known difficulties in making measurements of the moisture content of baked goods (such as bread, buns, biscuits, crackers and cake) during baking or at the oven exit; in this paper several sensing methods are discussed, but none of them is able to provide direct measurement with sufficient precision. An alternative is to use indirect inferential methods. Some of these methods involve dynamic modelling, with incorporation of thermal properties and using techniques familiar in computational fluid dynamics (CFD); a method of this class that has been used for modelling heat and mass transfer in one direction during baking is summarized, and it may be extended to model transport of moisture within the product and also within the surrounding atmosphere. The concept of injecting heat during baking in proportion to the calculated heat load on the oven has been implemented in a control scheme based on a zone-by-zone heat balance through a continuous baking oven, taking advantage of the high latent heat of evaporation of water. Tests on biscuit production ovens are reported, with results that support the claim that the scheme gives a more reproducible water distribution in the final product than conventional closed-loop control of zone ambient temperatures, thus enabling water content to be held more closely within tolerance.
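A minimal sketch of the inferential idea behind the zone-by-zone heat balance: because the latent heat of evaporation of water is large, the water evaporated in an oven zone can be estimated from the heat supplied once sensible and loss terms are accounted for. All numbers and names below are assumed for illustration and are not taken from the paper.

```python
# Back-of-envelope zone heat balance for inferring moisture removal (illustrative).

LATENT_HEAT = 2.26e6  # J/kg, latent heat of evaporation of water near 100 degC

def water_evaporation_rate(q_supplied_w, q_sensible_w, q_losses_w):
    """Estimate the water evaporation rate (kg/s) in one oven zone."""
    q_latent = q_supplied_w - q_sensible_w - q_losses_w  # heat left for evaporation
    return max(q_latent, 0.0) / LATENT_HEAT

# Example zone: 120 kW supplied, 35 kW heating product and band, 25 kW structural losses.
rate = water_evaporation_rate(120e3, 35e3, 25e3)
print(f"estimated evaporation: {rate * 3600:.0f} kg/h")  # roughly 96 kg/h
```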
Abstract:
Though many global aerosol models prognose surface deposition, only a few models have been used to directly simulate the radiative effect from black carbon (BC) deposition to snow and sea ice. Here, we apply aerosol deposition fields from 25 models contributing to two phases of the Aerosol Comparisons between Observations and Models (AeroCom) project to simulate and evaluate within-snow BC concentrations and radiative effect in the Arctic. We accomplish this by driving the offline land and sea ice components of the Community Earth System Model with different deposition fields and meteorological conditions from 2004 to 2009, during which an extensive field campaign of BC measurements in Arctic snow occurred. We find that models generally underestimate BC concentrations in snow in northern Russia and Norway, while overestimating BC amounts elsewhere in the Arctic. Although simulated BC distributions in snow are poorly correlated with measurements, mean values are reasonable. The multi-model mean (range) bias in BC concentrations, sampled over the same grid cells, snow depths, and months of measurements, is −4.4 (−13.2 to +10.7) ng g−1 for an earlier phase of AeroCom models (phase I), and +4.1 (−13.0 to +21.4) ng g−1 for a more recent phase of AeroCom models (phase II), compared to the observational mean of 19.2 ng g−1. Factors determining model BC concentrations in Arctic snow include Arctic BC emissions, transport of extra-Arctic aerosols, precipitation, deposition efficiency of aerosols within the Arctic, and meltwater removal of particles in snow. Sensitivity studies show that the model–measurement evaluation is only weakly affected by meltwater scavenging efficiency because most measurements were conducted in non-melting snow. The Arctic (60–90° N) atmospheric residence time for BC in phase II models ranges from 3.7 to 23.2 days, implying large inter-model variation in local BC deposition efficiency. Combined with the fact that most Arctic BC deposition originates from extra-Arctic emissions, these results suggest that aerosol removal processes are a leading source of variation in model performance. The multi-model mean (full range) of Arctic radiative effect from BC in snow is 0.15 (0.07–0.25) W m−2 and 0.18 (0.06–0.28) W m−2 in phase I and phase II models, respectively. After correcting for model biases relative to observed BC concentrations in different regions of the Arctic, we obtain a multi-model mean Arctic radiative effect of 0.17 W m−2 for the combined AeroCom ensembles. Finally, there is a high correlation between modeled BC concentrations sampled over the observational sites and the Arctic as a whole, indicating that the field campaign provided a reasonable sample of the Arctic.
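For concreteness, the multi-model bias statistic quoted above can be sketched as follows: each model's BC-in-snow field is sampled at the same sites, snow depths and months as the observations, and the bias is the difference of means. The numbers and model names below are invented purely for illustration.

```python
# Co-located multi-model bias and range for BC in snow (toy values, ng/g).
import numpy as np

obs_ng_per_g = np.array([12.0, 25.0, 8.0, 40.0, 11.0])      # co-located measurements
model_samples = {                                            # same sites/months per model
    "model_a": np.array([10.0, 18.0, 15.0, 22.0, 14.0]),
    "model_b": np.array([20.0, 30.0, 25.0, 45.0, 30.0]),
    "model_c": np.array([6.0, 12.0, 9.0, 18.0, 8.0]),
}

biases = {name: x.mean() - obs_ng_per_g.mean() for name, x in model_samples.items()}
print("per-model biases (ng/g):", biases)
print("multi-model mean bias (ng/g):", np.mean(list(biases.values())))
print("range (ng/g):", (min(biases.values()), max(biases.values())))
```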
Abstract:
It is shown from flux transfer event (FTE) occurrence statistics, observed as a function of MLT by the ISEE satellites, that recent 2-dimensional analytic theories of the effects of pulsed Petschek reconnection predict FTEs to contribute between 50 and 200 kV to the total reconnection voltage when the magnetosheath field points southward. The upper limit (200 kV) allows the possibility that FTEs provide all the antisunward transport of open field lines into the tail lobe. This range is compared with the voltages associated with series of FTE signatures, as inferred from ground-based observations, which are in the range 10–60 kV. We conclude that the contribution could sometimes be made by a series of single, large events; however, the voltage is often likely to be contributed by several FTEs at different MLT.
Abstract:
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks along a set of zones which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their composing simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method only experiences a limited decrease in performance when moderate amounts of noise are added.
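As a toy sketch of the scoring idea described above (invented zones, actions and parameters, not the paper's grammar or trained models): each simple action can be represented as a discrete-observation HMM over zone crossings, whose likelihoods a stochastic grammar would then combine into complex-action scores.

```python
# Score a zone-crossing track under simple-action HMMs (illustrative only).
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """log P(obs | HMM) via the scaled forward algorithm (discrete observations)."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c
    return log_p

# Two illustrative simple-action HMMs over 3 zones (entrance=0, aisle=1, car bay=2).
walk_to_car = (np.array([1.0, 0.0]),
               np.array([[0.7, 0.3], [0.0, 1.0]]),
               np.array([[0.6, 0.4, 0.0], [0.0, 0.3, 0.7]]))
loiter = (np.array([0.5, 0.5]),
          np.array([[0.5, 0.5], [0.5, 0.5]]),
          np.array([[0.4, 0.4, 0.2], [0.3, 0.4, 0.3]]))

track = [0, 1, 1, 2, 2]  # observed zone-crossing sequence for one person
scores = {name: forward_log_likelihood(track, *hmm)
          for name, hmm in {"walk_to_car": walk_to_car, "loiter": loiter}.items()}
print(scores)  # a grammar over complex actions would combine such scores
```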
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools, and their adoption is growing. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench, which automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of a sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. This effect is largely due to the construction of the input ozone fields, which overestimate the true ensemble spread. Hence, while averages of multi-model fields are normally appropriate for calculating mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these quantities, and in general overestimate the uncertainty.
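The order-of-averaging effect discussed above can be illustrated with a toy calculation: for a nonlinear forcing functional, "average the fields, then compute" gives a similar mean to "compute per model, then average", but estimating the spread by perturbing the mean field everywhere by its gridpoint standard deviation strongly overestimates the ensemble spread. The fields, the toy nonlinear function and all numbers below are assumptions for illustration, not the TF-HTAP models or a real radiation code.

```python
# Compare two orders of averaging for a toy nonlinear "radiative forcing" functional.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_grid = 10, 500
# Per-model ozone-change fields: a common signal plus uncorrelated model noise.
fields = 1.0 + 0.5 * rng.standard_normal((n_models, n_grid))

def rf(field):
    """Toy nonlinear forcing functional of an ozone-change field (illustrative)."""
    return np.mean(np.sign(field) * np.abs(field) ** 0.8)

# Order 1: compute RF per model, then average across the ensemble.
rf_per_model = np.array([rf(f) for f in fields])
mean_of_rfs, std_of_rfs = rf_per_model.mean(), rf_per_model.std()

# Order 2: average the fields first, then compute RF once from the mean field,
# with an "uncertainty" from the mean field +/- its gridpoint standard deviation.
mean_field = fields.mean(axis=0)
std_field = fields.std(axis=0)
rf_of_mean = rf(mean_field)
std_from_mean_fields = 0.5 * abs(rf(mean_field + std_field) - rf(mean_field - std_field))

print(f"mean RF: per-model {mean_of_rfs:.4f} vs mean-field {rf_of_mean:.4f}")
print(f"spread:  per-model {std_of_rfs:.4f} vs mean-field {std_from_mean_fields:.4f}")
```

In this toy setup the two mean estimates agree closely, while the mean-field spread is far larger than the true ensemble spread, because the gridpoint standard deviation is applied coherently everywhere and ignores cancellation between grid points, mirroring the behaviour reported in the abstract.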
Abstract:
Resistive respiratory loading is an established stimulus for the induction of experimental dyspnoea. In comparison to unloaded breathing, resistive loaded breathing alters end-tidal CO2 (PETCO2), which has independent physiological effects (e.g. upon cerebral blood flow). We investigated the subjective effects of resistive loaded breathing with stabilized PETCO2 (isocapnia) during manual control of inspired gases on varying baseline levels of mild hypercapnia (increased PETCO2). Furthermore, to investigate whether perceptual habituation to dyspnoea stimuli occurs, the study was repeated over four experimental sessions. Isocapnic hypercapnia did not affect dyspnoea unpleasantness during resistive loading. A post hoc analysis revealed a small increase of respiratory unpleasantness during unloaded breathing at +0.6 kPa, the level that reliably induced isocapnia. We did not observe perceptual habituation over the four sessions. We conclude that isocapnic respiratory loading allows stable induction of respiratory unpleasantness, making it a good stimulus for multi-session studies of dyspnoea.
Abstract:
The relationship between springtime air pollution transport of ozone (O3) and carbon monoxide (CO) and mid-latitude cyclones is explored for the first time using the Monitoring Atmospheric Composition and Climate (MACC) reanalysis for the period 2003–2012. In this study, the most intense spring storms (95th percentile) are selected for two regions, the North Pacific (NP) and the North Atlantic (NA). These storms (∼60 storms over each region) often track over the major emission sources of East Asia and eastern North America. By compositing the storms, the distributions of O3 and CO within a "typical" intense storm are examined. We compare the storm-centered composite to background composites of "average conditions" created by sampling the reanalysis data of the previous year at the storm locations. Mid-latitude storms are found to redistribute concentrations of O3 and CO horizontally and vertically throughout the storm. This is clearly shown to occur through two main mechanisms: (1) vertical lifting of CO-rich and O3-poor air isentropically, from near the surface to the mid- to upper-troposphere in the region of the warm conveyor belt; and (2) descent of O3-rich and CO-poor air isentropically in the vicinity of the dry intrusion, from the stratosphere toward the mid-troposphere. This can be seen in the composite storm's life cycle as the storm intensifies, with area-averaged O3 (CO) increasing (decreasing) between 200 and 500 hPa. The influence of the storm dynamics compared to the background environment on the composition within an area around the storm center at the time of maximum intensity is as follows. Area-averaged O3 is enhanced by 50 and 36% at 300 hPa, and by 11 and 7.6% at 500 hPa, for the NP and NA regions, respectively. In contrast, area-averaged CO at 300 hPa decreases by 12% for NP and 5.5% for NA, and area-averaged CO at 500 hPa decreases by 2.4% for NP while there is little change over the NA region. From the mid-troposphere, O3-rich air is clearly seen to be transported toward the surface, but the downward transport of CO-poor air is not discernible due to the high levels of CO in the lower troposphere. Area-averaged O3 is slightly higher at 1000 hPa (3.5 and 1.8% for the NP and NA regions, respectively). There is an increase of CO at 1000 hPa for the NP region (3.3%) relative to the background composite and a slight decrease in area-averaged CO for the NA region at 1000 hPa (−2.7%).
Abstract:
The interaction between polynyas and the atmospheric boundary layer is examined in the Laptev Sea using the regional, non-hydrostatic Consortium for Small-scale Modelling (COSMO) atmosphere model. A thermodynamic sea-ice model is used to consider the response of sea-ice surface temperature to idealized atmospheric forcing. The idealized regimes represent atmospheric conditions that are typical for the Laptev Sea region. Cold wintertime conditions are investigated with sea-ice–ocean temperature differences of up to 40 K. The Laptev Sea flaw polynyas strongly modify the atmospheric boundary layer. Convectively mixed layers reach heights of up to 1200 m above the polynyas with temperature anomalies of more than 5 K. Horizontal transport of heat expands to areas more than 500 km downstream of the polynyas. Strong wind regimes lead to a shallower mixed layer with strong near-surface modifications, while weaker wind regimes show a deeper, well-mixed convective boundary layer. Shallow mesoscale circulations occur in the vicinity of ice-free and thin-ice covered polynyas. They are forced by large turbulent and radiative heat fluxes from the surface of up to 789 W m−2, strong low-level thermally induced convergence, and cold air flow from the orographic structure of the Taimyr Peninsula in the western Laptev Sea region. Based on the surface energy balance we derive potential sea-ice production rates between 8 and 25 cm d−1. These production rates are mainly determined by whether the polynyas are ice-free or covered by thin ice and by the wind strength.
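The quoted potential ice production rates follow, at least in order of magnitude, from the standard surface-energy-balance relation below; the ice density and latent heat of fusion values are assumed here for illustration and are not taken from the paper.

```latex
% Potential sea-ice production from the net surface heat loss Q_net:
% h_i = ice thickness, rho_i = ice density, L_f = latent heat of fusion.
\frac{\mathrm{d}h_i}{\mathrm{d}t} \;=\; \frac{Q_{\mathrm{net}}}{\rho_i\, L_f}
```

With rho_i ≈ 910 kg m−3 and L_f ≈ 3.34 × 10^5 J kg−1, a net surface heat loss of 789 W m−2 gives dh_i/dt ≈ 2.6 × 10−6 m s−1, or about 22 cm d−1, which lies within the 8–25 cm d−1 range quoted above for ice-free conditions.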
Abstract:
This paper examines the role of the Arctic Ocean Atlantic water (AW) in modifying the Laptev Sea shelf bottom hydrography on the basis of historical records from 1932 to 2008, field observations carried out in April–May 2008, and 2002–2009 cross‐slope measurements. A climatology of bottom hydrography demonstrates warming that extends offshore from the 30–50 m depth contour. A bottom layer temperature‐time series constructed from historical records links the Laptev Sea outer shelf to the AW boundary current transporting warm and saline water from the North Atlantic. The AW warming of the mid‐1990s and the mid‐2000s is consistent with outer shelf bottom temperature variability. For April–May 2008 we observed on‐shelf near‐bottom warm and saline water intrusions up to the 20 m isobath. These intrusions are typically about 0.2°C warmer and 1–1.5 practical salinity units saltier than ambient water. The 2002–2009 cross‐slope observations are suggestive of an upward heat flux over the continental slope from the AW to the overlying low‐halocline water (LHW). The lateral on‐shelf wind‐driven transport of the LHW then results in the bottom layer thermohaline anomalies recorded over the Laptev Sea shelf. We also found that polynya‐induced vertical mixing may act as a drainage of the bottom layer, permitting a relatively small portion of the AW heat to be directly released to the atmosphere. Finally, we see no significant warming (up until now) over the Laptev Sea shelf deeper than 10–15 m in the historical record. Future climate change, however, may bring more intrusions of Atlantic‐modified waters with potentially warmer temperature onto the shelf, which could have a critical impact on the stability of offshore submarine permafrost.
Abstract:
During seedling establishment, cotyledons of the rain forest tree Hymenaea courbaril mobilize storage cell wall xyloglucan to sustain growth. The polysaccharide is degraded and its products are transported to growing sink tissues. Auxin from the shoot controls the level of xyloglucan hydrolytic enzymes. It is not yet known how important the expression of these genes is for the control of storage xyloglucan degradation. In this work, partial cDNAs of the genes xyloglucan transglycosylase hydrolase (HcXTH1) and beta-galactosidase (HcBGAL1), both related to xyloglucan degradation, and two other genes related to sucrose metabolism [alkaline invertase (HcAlkIN1) and sucrose synthase (HcSUS1)], were isolated. The partial sequences were characterized by comparison with sequences available in the literature, and phylogenetic trees were assembled. Gene expression was evaluated at intervals of 6 h during 24 h in cotyledons, hypocotyl, roots, and leaves, using 45-d-old plantlets. HcXTH1 and HcBGAL1 were correlated with xyloglucan degradation and responded to auxin and light, being down-regulated when transport of auxin was prevented by N-1-naphthylphthalamic acid (NPA) and stimulated by constant light. Genes related to sucrose metabolism, HcAlkIN1 and HcSUS1, responded to inhibition of auxin transport in consonance with storage mobilization in the cotyledons. A model is proposed suggesting that auxin and light are involved in the control of the expression of genes related to storage xyloglucan mobilization in seedlings of H. courbaril. It is concluded that gene expression plays a role in the control of the intercommunication system of the source-sink relationship during seedling growth, favouring its establishment in the shaded environment of the rain forest understorey.
Abstract:
Most techniques used for estimating the age of Sotalia guianensis (van Bénéden, 1864) (Cetacea; Delphinidae) are very expensive, and require sophisticated equipment for preparing histological sections of teeth. The objective of this study was to test a more affordable and much simpler method, involving manual wear of teeth followed by decalcification and observation under a stereomicroscope. This technique has been employed successfully with larger species of Odontoceti. Twenty-six specimens were selected, and one tooth of each specimen was worn and demineralized for reading of growth layers. Growth layers were evidenced in all specimens; however, in 4 of the 26 teeth, not all the layers could be clearly observed. In these teeth, there was a significant decrease in growth layer group thickness, thus hindering the counting of layers. The juxtaposition of layers hindered the reading of larger numbers of layers by the wear and decalcification technique. Analysis of more than 17 layers in a single tooth proved inconclusive. The method applied here proved to be efficient in estimating the age of Sotalia guianensis individuals younger than 18 years. This method could simplify the study of the age structure of the overall population, and allows the more expensive methodologies to be confined to more specific studies of older specimens. It also enables the classification of individuals into calf, young and adult classes, which is important for general population studies.