897 results for high dimensional geometry
Abstract:
Listeria monocytogenes is a psychrotrophic food-borne pathogen that is problematic for the food industry because of its ubiquitous distribution in nature and its ability to grow at low temperatures and in the presence of high salt concentrations. Here we demonstrate that the process of adaptation to low temperature after cold shock includes elevated levels of cold shock proteins (CSPs) and that the levels of CSPs are also elevated after treatment with high hydrostatic pressure (HHP). Two-dimensional gel electrophoresis combined with Western blotting performed with anti-CspB of Bacillus subtilis was used to identify four 7-kDa proteins, designated Csp1, Csp2, Csp3, and Csp4. In addition, Southern blotting revealed four chromosomal DNA fragments that reacted with a csp probe, which also indicated that a CSP family is present in L. monocytogenes LO28. After a cold shock in which the temperature was decreased from 37°C to 10°C the levels of Csp1 and Csp3 increased 10- and 3.5-fold, respectively, but the levels of Csp2 and Csp4 were not elevated. Pressurization of L. monocytogenes LO28 cells resulted in 3.5- and 2-fold increases in the levels of Csp1 and Csp2, respectively. Strikingly, the level of survival after pressurization of cold-shocked cells was 100-fold higher than that of cells growing exponentially at 37°C. These findings imply that cold-shocked cells are protected from HHP treatment, which may affect the efficiency of combined preservation techniques.
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants’ recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum. The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
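As an illustration of the inverse approach described above, the sketch below minimizes a least-squares misfit between an 'observed' drag profile and a toy parametrised drag using a simple genetic algorithm; the two-parameter drag model, the profile, and all algorithm settings are hypothetical stand-ins for illustration, not the scheme or data used in the paper.

```python
# Minimal sketch (not the authors' code): estimate two hypothetical gravity wave
# parameters by minimizing a least-squares cost function with a simple genetic algorithm.
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(20e3, 60e3, 40)                    # model levels (m), illustrative only

def toy_gwd(params, z):
    """Hypothetical two-parameter drag profile: amplitude and deposition height."""
    amplitude, z0 = params
    return amplitude * np.exp(-((z - z0) / 8e3) ** 2)

true_params = np.array([2.0e-5, 45e3])             # prescribed 'true' parameters
observed = toy_gwd(true_params, z)                 # stands in for the 'observed' GWD

def cost(params):
    return np.sum((observed - toy_gwd(params, z)) ** 2)

# Genetic algorithm: tournament selection, blend crossover, Gaussian mutation, elitism.
bounds = np.array([[1e-6, 1e-4], [30e3, 60e3]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 2))
for generation in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    new_pop = [pop[np.argmin(fitness)]]            # keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        parent_a = pop[i] if fitness[i] < fitness[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        parent_b = pop[i] if fitness[i] < fitness[j] else pop[j]
        w = rng.uniform(size=2)
        child = w * parent_a + (1 - w) * parent_b  # blend crossover
        child += rng.normal(0, 0.02, size=2) * (bounds[:, 1] - bounds[:, 0])
        new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new_pop)

best = pop[np.argmin([cost(ind) for ind in pop])]
print("recovered parameters:", best, "cost:", cost(best))
```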
Abstract:
While stirring and mixing properties in the stratosphere are reasonably well understood in the context of balanced (slow) dynamics, as is evidenced in numerous studies of chaotic advection, the strongly enhanced presence of high-frequency gravity waves in the mesosphere gives rise to a significant unbalanced (fast) component to the flow. The present investigation analyses result from two idealized shallow-water numerical simulations representative of stratospheric and mesospheric dynamics on a quasi-horizontal isentropic surface. A generalization of the Hua–Klein Eulerian diagnostic to divergent flow reveals that velocity gradients are strongly influenced by the unbalanced component of the flow. The Lagrangian diagnostic of patchiness nevertheless demonstrates the persistence of coherent features in the zonal component of the flow, in contrast to the destruction of coherent features in the meridional component. Single-particle statistics demonstrate t2 scaling for both the stratospheric and mesospheric regimes in the case of zonal dispersion, and distinctive scaling laws for the two regimes in the case of meridional dispersion. This is in contrast to two-particle statistics, which in the mesospheric (unbalanced) regime demonstrate a more rapid approach to Richardson’s t3 law in the case of zonal dispersion and is evidence of enhanced meridional dispersion.
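The single-particle scaling quoted above can be estimated from trajectories by computing the mean-square displacement and fitting its power-law slope; the sketch below does this for synthetic trajectories with a ballistic (t²) signature, purely to illustrate the diagnostic, not to reproduce the shallow-water runs.

```python
# Minimal sketch (synthetic trajectories): single-particle dispersion and its scaling exponent.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps, dt = 500, 400, 0.1
t = np.arange(1, n_steps + 1) * dt

# Synthetic zonal trajectories: persistent random velocities give ballistic (t^2) dispersion.
u = rng.normal(0.0, 1.0, size=n_particles)[:, None]        # one random velocity per particle
x = np.cumsum(np.full((n_particles, n_steps), dt) * u, axis=1)

# Single-particle (absolute) dispersion: mean-square displacement from the release point.
dispersion = np.mean(x ** 2, axis=0)

# Fit the scaling exponent on a log-log plot; a value near 2 indicates t^2 scaling.
slope, _ = np.polyfit(np.log(t[10:]), np.log(dispersion[10:]), 1)
print(f"estimated dispersion exponent: {slope:.2f} (t^2 scaling would give 2)")
```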
Abstract:
This paper generalises and applies recently developed blocking diagnostics in a two-dimensional latitude-longitude context, which takes into consideration both mid- and high-latitude blocking. These diagnostics identify characteristics of the associated wave-breaking as seen in the potential temperature (θ) on the dynamical tropopause, in particular the cyclonic or anticyclonic Direction of wave-Breaking (DB index), and the Relative Intensity (RI index) of the air masses that contribute to blocking formation. The methodology is extended to a 2-D domain and a cluster technique is deployed to classify mid- and high-latitude blocking according to the wave-breaking characteristics. Mid-latitude blocking is observed over Europe and Asia, where the meridional gradient of θ is generally weak, whereas high-latitude blocking is mainly present over the oceans, to the north of the jet-stream, where the meridional gradient of θ is much stronger. They occur respectively on the equatorward and poleward flank of the jet-stream, where the horizontal shear ∂u/∂y is positive in the first case and negative in the second case. A regional analysis is also conducted. It is found that cold-anticyclonic and cyclonic blocking divert the storm-track respectively to the south and to the north over the East Atlantic and western Europe. Furthermore, warm-cyclonic blocking over the Pacific and cold-anticyclonic blocking over Europe are identified as the most persistent types and are associated with large amplitude anomalies in temperature and precipitation. Finally, the high-latitude, cyclonic events seem to correlate well with low-frequency modes of variability over the Pacific and Atlantic Ocean.
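A cluster classification of the kind described above can be sketched by grouping blocking events according to their two wave-breaking indices; the event table, the sign conventions and the choice of k-means with two clusters below are hypothetical illustrations, not the authors' method or data.

```python
# Minimal sketch (hypothetical data): cluster blocking events by their DB and RI indices.
import numpy as np

rng = np.random.default_rng(2)
# Each row is one blocking event: [DB index, RI index] (illustrative sign convention).
events = np.vstack([
    rng.normal([-1.0, 0.5], 0.3, size=(50, 2)),    # events with cyclonic-type wave-breaking
    rng.normal([1.0, -0.5], 0.3, size=(50, 2)),    # events with anticyclonic-type wave-breaking
])

def kmeans(x, k, n_iter=100):
    """Plain k-means: assign points to the nearest centre, then update the centres."""
    centres = x[rng.choice(len(x), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((x[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([x[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels, centres

labels, centres = kmeans(events, k=2)
print("cluster centres (DB, RI):\n", centres)
```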
Abstract:
Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimental loss rates. The study focused on the Antarctic winter 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulations could generally reproduce the observations at the 2-sigma error-bar level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, an excellent agreement within the 1-sigma standard deviation level was obtained. An overestimation was also found with the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which have to be considered in the comparison. Sensitivity tests were performed with the box model in order to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally in high solar zenith angle conditions. In some cases, a better agreement was achieved with faster photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.
Abstract:
Simulations of polar ozone losses were performed using the three-dimensional high-resolution (1° × 1°) chemical transport model MIMOSA-CHIM. Three Arctic winters (1999–2000, 2001–2002, and 2002–2003) and three Antarctic winters (2001, 2002, and 2003) were considered for the study. The cumulative ozone loss in the Arctic winter 2002–2003 reached around 35% at 475 K inside the vortex, as compared to more than 60% in 1999–2000. During 1999–2000, denitrification induces a maximum of about 23% extra ozone loss at 475 K as compared to 17% in 2002–2003. Unlike these two colder Arctic winters, the 2001–2002 Arctic winter was warmer and did not experience much ozone loss. Sensitivity tests showed that the chosen resolution of 1° × 1° provides a better evaluation of ozone loss at the edge of the polar vortex in high solar zenith angle conditions. The simulation results for ozone, ClO, HNO3, N2O, and NOy for the winters 1999–2000 and 2002–2003 were compared with measurements on board the ER-2 and Geophysica aircraft, respectively. Sensitivity tests showed that increasing the heating rates calculated by the model by 50% and doubling the PSC (Polar Stratospheric Clouds) particle density (from 5 × 10−3 to 10−2 cm−3) refines the agreement with in situ ozone, N2O and NOy levels. In this configuration, simulated ClO levels are increased and are in better agreement with observations in January but are overestimated by about 20% in March. The use of the Burkholder et al. (1990) Cl2O2 absorption cross-sections further increases ClO levels slightly, especially in high solar zenith angle conditions. Comparisons of the modelled ozone values with ozonesonde measurements in the Antarctic winter 2003 and with Polar Ozone and Aerosol Measurement III (POAM III) measurements in the Antarctic winters 2001 and 2002 show that the simulations underestimate the ozone loss rate at the end of the ozone destruction period. A slightly better agreement is obtained with the use of the Burkholder et al. (1990) Cl2O2 absorption cross-sections.
Abstract:
New representations and efficient calculation methods are derived for the problem of propagation from an infinite regularly spaced array of coherent line sources above a homogeneous impedance plane, and for the Green's function for sound propagation in the canyon formed by two infinitely high, parallel rigid or sound-soft walls and an impedance ground surface. The infinite sum of source contributions is replaced by a finite sum and the remainder is expressed as a Laplace-type integral. A pole subtraction technique is used to remove poles in the integrand which lie near the path of integration, obtaining a smooth integrand, more suitable for numerical integration, and a specific numerical integration method is proposed. Numerical experiments show highly accurate results across the frequency spectrum for a range of ground surface types. It is expected that the methods proposed will prove useful in boundary element modeling of noise propagation in canyon streets and in ducts, and for problems of scattering by periodic surfaces.
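The pole subtraction idea mentioned above can be illustrated on a generic model integral: a simple pole lying close to the integration path is split off and integrated analytically, and only the smooth remainder is handled numerically. The integrand below is a made-up example chosen for the illustration, not one of the paper's Laplace-type integrals.

```python
# Minimal sketch (model problem, not the paper's integrals): pole subtraction for an
# integrand with a simple pole lying close to the real integration path [0, 1].
import numpy as np
from scipy.integrate import quad

t0 = 0.5 + 0.01j                      # pole just off the integration path
g = np.exp                            # smooth numerator, chosen arbitrarily

def integrand_raw(t):
    return g(t) / (t - t0)            # nearly singular: hard to integrate accurately

def integrand_smooth(t):
    return (g(t) - g(t0)) / (t - t0)  # pole subtracted: smooth, easy to integrate

def complex_quad(f, a, b):
    re, _ = quad(lambda t: f(t).real, a, b, limit=200)
    im, _ = quad(lambda t: f(t).imag, a, b, limit=200)
    return re + 1j * im

# Analytic contribution of the subtracted pole term: g(t0) times the integral of 1/(t - t0).
pole_term = g(t0) * (np.log(1.0 - t0) - np.log(0.0 - t0))

direct = complex_quad(integrand_raw, 0.0, 1.0)
subtracted = complex_quad(integrand_smooth, 0.0, 1.0) + pole_term
print("direct quadrature:", direct)
print("pole subtraction: ", subtracted)
```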
Abstract:
Global FGGE data are used to investigate several aspects of large-scale turbulence in the atmosphere. The approach follows that for two-dimensional, nondivergent turbulent flows which are homogeneous and isotropic on the sphere. Spectra of kinetic energy, enstrophy and available potential energy are obtained for both the stationary and transient parts of the flow. Nonlinear interaction terms and fluxes of energy and enstrophy through wavenumber space are calculated and compared with the theory. A possible method of parameterizing the interactions with unresolved scales is considered. Two rather different flow regimes are found in wavenumber space. The high-wavenumber regime is dominated by the transient components of the flow and exhibits, at least approximately, several of the conditions characterizing homogeneous and isotropic turbulence. This region of wavenumber space also displays some of the features of an enstrophy-cascading inertial subrange. The low-wavenumber region, on the other hand, is dominated by the stationary component of the flow, exhibits marked anisotropy and, in contrast to the high-wavenumber regime, displays a marked change between January and July.
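The spectral quantities described above are, in essence, kinetic energy binned by total wavenumber; the sketch below computes such a spectrum for a doubly periodic velocity field with a plain FFT, a simplified planar stand-in for the spherical-harmonic analysis used with global data (an enstrophy spectrum could be binned the same way from the vorticity field).

```python
# Minimal sketch (planar, doubly periodic stand-in for the spherical analysis):
# kinetic energy spectrum of a 2-D velocity field, binned by total wavenumber.
import numpy as np

rng = np.random.default_rng(3)
n = 128
u = rng.normal(size=(n, n))                       # synthetic zonal wind
v = rng.normal(size=(n, n))                       # synthetic meridional wind

uk = np.fft.fft2(u) / (n * n)
vk = np.fft.fft2(v) / (n * n)
energy_density = 0.5 * (np.abs(uk) ** 2 + np.abs(vk) ** 2)

kx = np.fft.fftfreq(n) * n
ky = np.fft.fftfreq(n) * n
k_total = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)

# Sum the spectral energy density into integer wavenumber shells.
k_bins = np.arange(1, n // 2)
spectrum = np.array([energy_density[(k_total >= k - 0.5) & (k_total < k + 0.5)].sum()
                     for k in k_bins])
print("KE spectrum (first 10 shells):", spectrum[:10])
```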
Abstract:
Neuroprostheses interfaced with transected peripheral nerves are technological routes to control robotic limbs as well as convey sensory feedback to patients suffering from traumatic neural injuries or degenerative diseases. To maximize the wealth of data obtained in recordings, interfacing devices are required to have intrafascicular resolution and provide high signal-to-noise ratio (SNR) recordings. In this paper, we focus on a possible building block of a three-dimensional regenerative implant: a polydimethylsiloxane (PDMS) microchannel electrode capable of highly sensitive recordings in vivo. The PDMS 'micro-cuff' consists of a 3.5 mm long (100 µm × 70 µm cross section) microfluidic channel equipped with five evaporated Ti/Au/Ti electrodes of sub-100 nm thickness. Individual electrodes have average impedance of 640 ± 30 kΩ with a phase angle of −58 ± 1 degrees at 1 kHz and survive demanding mechanical handling such as twisting and bending. In proof-of-principle acute implantation experiments in rats, surgically teased afferent nerve strands from the L5 dorsal root were threaded through the microchannel. Tactile stimulation of the skin was reliably monitored with the three inner electrodes in the device, simultaneously recording signal amplitudes of up to 50 µV under saline immersion. The overall SNR was approximately 4. A small but consistent time lag between the signals arriving at the three electrodes was observed and yields a fibre conduction velocity of 30 m s−1. The fidelity of the recordings was verified by placing the same nerve strand in oil and recording activity with hook electrodes. Our results show that PDMS microchannel electrodes open a promising technological path to 3D regenerative interfaces.
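The conduction velocity quoted above follows from dividing the electrode spacing by the inter-electrode time lag; the sketch below recovers such a lag from two synthetic spike traces by cross-correlation. The sampling rate, electrode spacing and waveforms are invented for the illustration and are not the recorded data.

```python
# Minimal sketch (synthetic signals, not the recorded data): estimate fibre conduction
# velocity from the time lag between two electrodes along a microchannel.
import numpy as np

rng = np.random.default_rng(4)
fs = 100_000.0                       # sampling rate in Hz (assumed)
spacing = 1.5e-3                     # distance between adjacent electrodes in m (assumed)
t = np.arange(0.0, 0.02, 1.0 / fs)

def spike(t, t0, width=3e-4):
    """Toy biphasic spike waveform centred at t0 (not a recorded waveform)."""
    return ((t - t0) / width) * np.exp(-((t - t0) / width) ** 2)

true_velocity = 30.0                                  # m/s, used only to build the example
sig_a = spike(t, 0.005) + 0.01 * rng.normal(size=t.size)
sig_b = spike(t, 0.005 + spacing / true_velocity) + 0.01 * rng.normal(size=t.size)

# Cross-correlate the two traces; the peak offset gives the inter-electrode time lag.
xcorr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
lag_seconds = (np.argmax(xcorr) - (t.size - 1)) / fs
print(f"estimated conduction velocity: {spacing / lag_seconds:.1f} m/s")
```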
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects’ average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted in PT; and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to the increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; B. J. Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, by contrast, we find that the effect of incorporating losses into the outcomes is not so clear. At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is more affected by the domain than by the stake size. After having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond the incompatibility with modern economic theories like PT, CPT, etc., all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful to describe behavior under uncertainty and to explain behavior in other contexts. Hopefully, this will contribute to creating large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future more complex descriptions of human attitudes towards risk.
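As a rough illustration of the two-dimensional structure described above, the sketch below runs a principal component analysis on a hypothetical matrix of choices across lottery panels; the data, the number of panels and the latent-trait model that generates them are invented for the example and are not the SGG dataset.

```python
# Minimal sketch (hypothetical choice data, not the SGG dataset): extract two dimensions
# of risk attitudes from panel choices with principal component analysis.
import numpy as np

rng = np.random.default_rng(6)
n_subjects, n_panels = 200, 4

# Latent traits: overall risk taking and sensitivity to the risk premium (assumed model).
risk_taking = rng.normal(0.0, 1.0, n_subjects)
sensitivity = rng.normal(0.0, 1.0, n_subjects)
premium = np.linspace(0.0, 1.0, n_panels)          # higher risk premium in later panels

# Observed choice in each panel = baseline risk taking + sensitivity * premium + noise.
choices = (risk_taking[:, None] + sensitivity[:, None] * premium[None, :]
           + 0.3 * rng.normal(size=(n_subjects, n_panels)))

# PCA via the singular value decomposition of the centred data matrix.
centred = choices - choices.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centred, full_matrices=False)
explained = singular_values ** 2 / np.sum(singular_values ** 2)

scores = centred @ vt[:2].T                        # subject scores on the first two components
print("variance explained by first two components:", explained[:2].round(2))
print("first two component loadings:\n", vt[:2].round(2))
```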
Abstract:
The impact of ceiling geometries on the performance of lightshelves was investigated using physical model experiments and Radiance simulations. Illuminance level and distribution uniformity were assessed for a working plane in a large space located in sub-tropical climate regions where innovative systems for daylighting and shading are required. It was found that the performance of the lightshelf can be improved by changing the ceiling geometry; the illuminance level increased in the rear of the room and decreased in the front near the window compared to rooms having conventional horizontal ceilings. Moreover, greater uniformity was achieved throughout the room as a result of reducing the difference in the illuminance level between the front and rear of the room. Radiance simulation results were found to be in good agreement with physical model data obtained under a clear sky and high solar radiation. The best ceiling shape was found to be one that is curved in the front and rear of the room.
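A uniformity assessment of the kind described above is commonly expressed as the ratio of minimum to mean illuminance over a working-plane grid; the sketch below computes that metric, together with a front-to-rear contrast, for a made-up illuminance grid rather than the measured or simulated data, and the choice of metric is an assumption.

```python
# Minimal sketch (made-up illuminance values): working-plane uniformity and
# front-to-rear contrast for a daylit room.
import numpy as np

# Illuminance (lux) on a working-plane grid; rows run from the window (front) to the rear.
illuminance = np.array([
    [1800.0, 1750.0, 1700.0],
    [ 900.0,  880.0,  860.0],
    [ 450.0,  440.0,  430.0],
    [ 300.0,  295.0,  290.0],
])

uniformity = illuminance.min() / illuminance.mean()          # min/mean uniformity ratio
front_to_rear = illuminance[0].mean() / illuminance[-1].mean()
print(f"uniformity (min/mean): {uniformity:.2f}")
print(f"front-to-rear illuminance ratio: {front_to_rear:.1f}")
```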
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider compared to radar observations. A set of nested model runs at grid lengths down to 100 m shows that the models converge in terms of storm width, but the storm structures in the simulations with the smallest grid lengths are too narrow and too intense compared to the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
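A simple way to compare storm widths between radar and model, in the spirit of the evaluation above, is to measure the extent of contiguous reflectivity exceeding a threshold along a cross-section; the sketch below does this for a synthetic reflectivity transect. The threshold, grid spacing and idealised field are placeholders, not the DYMECS analysis.

```python
# Minimal sketch (synthetic data, not the DYMECS analysis): storm width as the widest
# contiguous run of reflectivity above a threshold along a horizontal transect.
import numpy as np

dx_km = 0.5                                          # grid spacing of the transect (assumed)
x = np.arange(0.0, 60.0, dx_km)
reflectivity_dbz = 55.0 * np.exp(-((x - 30.0) / 5.0) ** 2) - 5.0   # one idealised storm

def storm_width(refl, threshold_dbz, dx):
    """Length of the longest contiguous segment exceeding the reflectivity threshold."""
    above = refl > threshold_dbz
    widths, run = [], 0
    for flag in above:
        run = run + 1 if flag else 0
        widths.append(run)
    return max(widths) * dx if widths else 0.0

print(f"storm width above 30 dBZ: {storm_width(reflectivity_dbz, 30.0, dx_km):.1f} km")
```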
Abstract:
The passage of an electric current through graphite or few-layer graphene can result in a striking structural transformation, but there is disagreement about the precise nature of this process. Some workers have interpreted the phenomenon in terms of the sublimation and edge reconstruction of essentially flat graphitic structures. An alternative explanation is that the transformation actually involves a change from a flat to a three-dimensional structure. Here we describe detailed studies of carbon produced by the passage of a current through graphite which provide strong evidence that the transformed carbon is indeed three-dimensional. The evidence comes primarily from images obtained in the scanning transmission electron microscope using the technique of high-angle annular dark-field imaging, and from a detailed analysis of electron energy loss spectra. We discuss the possible mechanism of the transformation, and consider potential applications of “three-dimensional bilayer graphene”.
Abstract:
We propose and analyse a hybrid numerical–asymptotic hp boundary element method (BEM) for time-harmonic scattering of an incident plane wave by an arbitrary collinear array of sound-soft two-dimensional screens. Our method uses an approximation space enriched with oscillatory basis functions, chosen to capture the high-frequency asymptotics of the solution. We provide a rigorous frequency-explicit error analysis which proves that the method converges exponentially as the number of degrees of freedom N increases, and that to achieve any desired accuracy it is sufficient to increase N in proportion to the square of the logarithm of the frequency as the frequency increases (standard BEMs require N to increase at least linearly with frequency to retain accuracy). Our numerical results suggest that fixed accuracy can in fact be achieved at arbitrarily high frequencies with a frequency-independent computational cost, when the oscillatory integrals required for implementation are computed using Filon quadrature. We also show how our method can be applied to the complementary ‘breakwater’ problem of propagation through an aperture in an infinite sound-hard screen.
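Filon-type rules of the kind mentioned above interpolate the non-oscillatory factor at low order and treat the oscillatory exponential exactly on each panel; the sketch below implements a linear (Filon-trapezoidal) variant for the integral of f(x) exp(i k x) on [0, 1] and checks it against a brute-force reference. It is a generic illustration, not the quadrature used in the paper.

```python
# Minimal sketch (generic Filon-trapezoidal rule, not the paper's quadrature):
# integrate f(x) * exp(i*k*x) on [a, b] by interpolating f linearly on each panel
# and integrating the oscillatory factor exactly.
import numpy as np

def filon_trapezoidal(f, k, a=0.0, b=1.0, n_panels=64):
    x = np.linspace(a, b, n_panels + 1)
    fx = f(x)
    total = 0.0 + 0.0j
    ik = 1j * k
    for j in range(n_panels):
        xa, xb = x[j], x[j + 1]
        h = xb - xa
        m0 = (np.exp(ik * xb) - np.exp(ik * xa)) / ik          # exact integral of exp(ikx)
        m1 = h * np.exp(ik * xb) / ik - m0 / ik                # exact integral of (x-xa)exp(ikx)
        total += fx[j] * m0 + (fx[j + 1] - fx[j]) / h * m1
    return total

k = 500.0                                   # high frequency: standard rules need many points
f = lambda x: 1.0 / (1.0 + x)               # smooth, non-oscillatory amplitude (arbitrary)

approx = filon_trapezoidal(f, k)

# Brute-force trapezoidal reference on a very fine grid, only to check the Filon result.
xs = np.linspace(0.0, 1.0, 200_001)
g = f(xs) * np.exp(1j * k * xs)
reference = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(xs))
print("Filon-trapezoidal:", approx)
print("reference:        ", reference)
```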