67 results for Instructional constraints
Abstract:
In finite difference time domain simulation of room acoustics, source functions are subject to various constraints. These depend on the way sources are injected into the grid and on the chosen parameters of the numerical scheme being used. This paper addresses the issue of selecting and designing sources for finite difference simulation by first reviewing the associated aims and constraints, and then evaluating existing source models against these criteria. The process of exciting a model is generalized by introducing a system of three cascaded filters characterizing, respectively, the driving pulse, the source mechanics, and the injection of the resulting source function into the grid. It is shown that hard, soft, and transparent sources can be seen as special cases within this unified approach. Starting from the mechanics of a small pulsating sphere, a parametric source model is formulated by specifying suitable filters. This physically constrained source model is numerically consistent, does not scatter incoming waves, and is free from zero- and low-frequency artifacts. Simulation results are used to compare the model with existing source formulations in terms of meeting the spectral and temporal requirements on the outward-propagating wave.
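A minimal sketch of the cascaded-filter view of excitation described in this abstract, applied to a toy 1-D FDTD scheme: a driving pulse is shaped by a "source mechanics" filter and an injection filter, and the result is added to the grid as a soft source. The filter coefficients, grid parameters, and the soft-source choice here are illustrative assumptions, not the paper's specific formulation.

```python
import numpy as np
from scipy.signal import lfilter

# Driving pulse: a band-limited Gaussian.
n = np.arange(2048)
pulse = np.exp(-0.5 * ((n - 64) / 16.0) ** 2)

# Cascade of two illustrative filters (hypothetical coefficients):
# 1) "source mechanics": a first-order DC blocker, loosely mimicking the
#    high-pass behaviour of a small pulsating sphere.
b_mech, a_mech = [1.0, -1.0], [1.0, -0.995]
# 2) "injection" filter: unit gain here; hard/soft/transparent sources
#    differ in how this stage interacts with the grid update.
b_inj, a_inj = [1.0], [1.0]
source = lfilter(b_inj, a_inj, lfilter(b_mech, a_mech, pulse))

# 1-D FDTD update with the source *added* (not imposed) at one node, so
# incoming waves are not scattered by the source point. Boundaries are
# rigid, so reflections appear after ~200 steps in this toy setup.
nx, lam = 400, 1.0                         # lam = c*dt/dx (1-D stability limit)
p_prev, p, p_next = (np.zeros(nx) for _ in range(3))
src, out = nx // 2, []
for t in range(len(source)):
    p_next[1:-1] = (2.0 * p[1:-1] - p_prev[1:-1]
                    + lam**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2]))
    p_next[src] += source[t]               # soft-source injection
    p_prev, p, p_next = p, p_next, p_prev
    out.append(p[src + 50])                # outward-propagating wave sample
```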
Abstract:
Hundsalm ice cave, located at 1520 m altitude in a karst region of western Austria, contains up to 7-m-thick deposits of snow, firn and congelation ice. Wood fragments exposed in the lower parts of an ice and firn wall were radiocarbon accelerator mass spectrometry (AMS) dated. Although the local stratigraphy is complex, the 19 individual dates - the largest currently available radiocarbon dataset for an Alpine ice cave - allow constraints to be placed on the accumulation and ablation history of the cave ice. Most of the cave was either ice free or contained only a small firn and ice body during the 'Roman Warm Period'; dates of three wood fragments mark the onset of firn and ice build-up in the 6th and 7th century AD. In the central part of the cave, the oldest samples date back to the 13th century and record ice growth coeval with the onset of the 'Little Ice Age'. The majority of the ice and firn deposit, albeit compromised by a disturbed stratigraphy, appears to have been formed during the subsequent centuries, supported by wood samples from the 15th to the 17th century. The oldest wood remains found so far inside the ice are from the end of the Bronze Age and imply that local relics of prehistoric ice may be preserved in this cave. The wood record from Hundsalm ice cave shows parallels to the Alpine glacier history of the last three millennia, for example the lack of preserved wood remains from periods of known glacier minima, and underscores the potential of firn and ice in karst cavities as a long-term palaeoclimate archive, one that has been degrading at an alarming rate in recent years. © The Author(s) 2013.
Abstract:
We consider a collision-sensitive secondary system that intends to opportunistically aggregate and utilize spectrum of a primary system to achieve higher data rates. In such opportunistic spectrum access, secondary transmission can collide with primary transmission. When the secondary system aggregates more channels for data transmission, more frequent collisions may occur, limiting the performance obtained by opportunistic spectrum aggregation. In this context, the dynamic spectrum aggregation problem is formulated to maximize the ergodic channel capacity under a constraint on the tolerable collision level. To solve the problem, we develop an optimal spectrum aggregation approach, deriving closed-form expressions for the collision probability in terms of the primary user traffic load, the secondary user transmission interval, and the random number of sub-channels aggregated. Our results show that aggregating only a subset of the sub-channels can be the better choice, depending on the ratio of the collision sensitivity requirement to the primary user traffic.
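As a rough illustration of the trade-off this abstract describes, the sketch below assumes primary users return to each idle sub-channel as independent Poisson processes, so aggregating k sub-channels raises the collision probability to 1 - exp(-k*lambda*T); the rate expression and all parameter values are hypothetical assumptions, not the paper's closed-form results.

```python
import math

def collision_prob(k, primary_rate, tx_interval):
    """P(at least one collision) when k i.i.d. idle sub-channels are
    aggregated, assuming Poisson primary returns (a common model)."""
    return 1.0 - math.exp(-k * primary_rate * tx_interval)

def best_aggregation(k_max, primary_rate, tx_interval, snr_db, p_coll_max):
    """Pick the number of aggregated sub-channels that maximizes expected
    capacity subject to a tolerable collision probability."""
    snr = 10 ** (snr_db / 10.0)
    best = (0, 0.0)
    for k in range(1, k_max + 1):
        p_c = collision_prob(k, primary_rate, tx_interval)
        if p_c > p_coll_max:
            break                                   # more aggregation only raises p_c
        rate = k * math.log2(1.0 + snr) * (1.0 - p_c)  # collided frames discarded
        if rate > best[1]:
            best = (k, rate)
    return best

# Light primary traffic permits wider aggregation than heavy traffic.
print(best_aggregation(16, primary_rate=0.5, tx_interval=0.1, snr_db=10, p_coll_max=0.2))
print(best_aggregation(16, primary_rate=2.0, tx_interval=0.1, snr_db=10, p_coll_max=0.2))
```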
Abstract:
Previous studies on work instruction delivery for complex assembly tasks have shown that the mode and delivery method of the instructions in an engineering context can influence both build time and product quality. The benefits of digital, animated instructional formats compared with static pictures and text-only formats have already been demonstrated. Although pictograms have found applications for relatively straightforward operations and activities, their applicability to relatively complex assembly tasks has yet to be demonstrated. This study compares animated instructions and pictograms for the assembly of an aircraft panel. Based on a series of build experiments, the work records build time as well as the number of references made to the instructional media in order to measure and compare build efficiency. The number of build errors and the time required to correct them is also recorded. The experiments involved five participants completing five builds over five consecutive days for each media type. Results showed that on average the total build time was 13.1% lower for the group using animated instructions. The benefit of animated instructions on build time was most prominent in the first three builds; by build four this benefit had disappeared. There was a similar number of instructional references for the two groups over the five builds, but the pictogram users required considerably more references during build 1. There were also more errors among the group using pictograms, requiring more time for corrections during the build.
Abstract:
We have calculated 90% confidence limits on the steady-state rate of catastrophic disruptions of main belt asteroids, expressed as the absolute magnitude at which one catastrophic disruption occurs per year, as a function of the post-disruption increase in brightness (Δm) and subsequent brightness decay rate (τ). The confidence limits were calculated using the brightest unknown main belt asteroid (V = 18.5) detected with the Pan-STARRS1 telescope. We measured Pan-STARRS1's catastrophic disruption detection efficiency over a 453-day interval using the Pan-STARRS moving object processing system (MOPS) and a simple model for the catastrophic disruption event's photometric behavior in a small aperture centered on the event. We then calculated the confidence-limit contours over ranges of Δm and τ encompassing measured values from known cratering and disruption events as well as our model's predictions. Our simplistic catastrophic disruption model implies that H0 ≳ 28, strongly inconsistent with H0,B2005 = 23.26 ± 0.02 predicted by Bottke et al. (Bottke, W.F., Durda, D.D., Nesvorný, D., Jedicke, R., Morbidelli, A., Vokrouhlický, D., Levison, H.F. [2005]. Icarus, 179, 63-94.) using purely collisional models. However, if we assume that H0 = H0,B2005, our results constrain the model parameters to values inconsistent with our simplistic impact-generated catastrophic disruption model. We postulate that the solution to the discrepancy is that >99% of main belt catastrophic disruptions in the size range to which this study was sensitive (∼100 m) are not impact-generated, but are instead due to fainter rotational breakups, of which the recently discovered disrupted asteroids P/2013 P5 and P/2013 R3 are probable examples. We estimate that current and upcoming asteroid surveys may discover up to 10 catastrophic disruptions per year brighter than V = 18.5.
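For intuition on how a survey of fixed duration and efficiency translates into a 90% confidence limit on a rate, the sketch below applies textbook Poisson statistics; the zero-detection framing and the 30% detection efficiency are illustrative assumptions and do not reproduce the MOPS-based calculation described in the abstract.

```python
import math

def poisson_rate_upper_limit(n_observed, confidence=0.90):
    """Upper limit on the Poisson mean mu such that P(N <= n_observed) = 1 - confidence.
    For n_observed = 0 this reduces to -ln(1 - confidence) ~= 2.30."""
    target = 1.0 - confidence
    lo, hi = 0.0, 50.0
    for _ in range(100):                       # bisection on the Poisson CDF
        mu = 0.5 * (lo + hi)
        cdf = sum(math.exp(-mu) * mu**k / math.factorial(k)
                  for k in range(n_observed + 1))
        if cdf > target:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

# Hypothetical numbers: zero confirmed disruptions in a 453-day window with
# an assumed 30% detection efficiency for events brighter than V = 18.5.
mu_90 = poisson_rate_upper_limit(0)            # ~2.30 expected events
rate_limit = mu_90 / (0.30 * 453 / 365.25)     # events per year, 90% CL
print(f"90% upper limit: {rate_limit:.1f} detectable disruptions per year")
```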
Abstract:
Considering the development of aerospace composite components, designing for reduced manufacturing layup cost and structural complexity is increasingly important. While the advantage of composite materials is the ability to tailor designs to various structural loads for minimum mass, the challenge is obtaining a design that is manufacturable and minimizes local ply incompatibility. The focus of the presented research is understanding how the relationships between mass, manufacturability and design complexity, under realistic loads and design requirements, can be affected by enforcing ply continuity in the design process. Presented is a series of sizing case studies on an upper wing cover, designed using conventional analyses and the tabular laminate design process. Introducing skin ply continuity constraints can generate skin designs with minimal ply discontinuities, fewer ply drops and larger ply areas than designs not constrained for continuity. However, the reduced design freedom associated with the addition of these constraints results in a weight penalty over the total wing cover. Perhaps more interestingly, when considering manual hand layup, the reduced design complexity is not translated into a reduced recurring manufacturing cost. In contrast, heavier wing cover designs appear to take more time to lay up regardless of the laminate design complexity. © 2012 AIAA.
Abstract:
Microbial habitats that contain an excess of carbohydrate in the form of sugar are widespread in the microbial biosphere. Depending on the type of sugar, prevailing water activity and other substances present, sugar-rich environments can be highly dynamic or relatively stable, osmotically stressful, and/or destabilizing for macromolecular systems, and can thereby strongly impact the microbial ecology. Here, we review the microbiology of different high-sugar habitats, including their microbial diversity and physicochemical parameters, which shape microbial community assembly and constrain the ecosystem. Saturated sugar beet juice and floral nectar are used as case studies to explore the differences between the microbial ecologies of lower and higher water-activity habitats, respectively. Nectar is a paradigm of an open, dynamic and biodiverse habitat populated by many microbial taxa, often yeasts and bacteria such as, amongst many others, Metschnikowia spp. and Acinetobacter spp., respectively. By contrast, thick juice is a relatively stable, species-poor habitat and is typically dominated by a single, xerotolerant bacterium (Tetragenococcus halophilus). A number of high-sugar habitats contain chaotropic solutes (e.g. ethyl acetate, phenols, ethanol, fructose and glycerol) and hydrophobic stressors (e.g. ethyl octanoate, hexane, octanol and isoamyl acetate), all of which can induce chaotropicity-mediated stresses that inhibit or prevent multiplication of microbes. Additionally, temperature, pH, nutrition, microbial dispersion and habitat history can determine or constrain the microbiology of high-sugar milieux. Findings are discussed in relation to a number of unanswered scientific questions.
Abstract:
For some time, the satisfiability formulae that have been the most difficult to solve for their size have been crafted to be unsatisfiable through the use of cardinality constraints. Recent solvers have introduced explicit checking of such constraints, rendering previously difficult formulae trivial to solve. A family of unsatisfiable formulae is described that is derived from the sgen4 family but cannot be solved using cardinality constraint detection and reasoning alone. These formulae were found to be the most difficult in the SAT2014 competition by a significant margin and include the shortest unsolved benchmark in the competition, sgen6-1200-5-1.cnf.
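For context, a cardinality constraint such as "at most one of x1, ..., xn is true" has a simple CNF encoding as pairwise negative clauses, which is the kind of hidden structure that cardinality-aware solvers detect and that the sgen6-style formulae above are built to resist. The snippet below (hypothetical variable numbering) shows only the generic encoding, not the sgen construction.

```python
from itertools import combinations

def at_most_one(variables):
    """Pairwise CNF encoding of an at-most-one cardinality constraint:
    for every pair (a, b), add the clause (NOT a OR NOT b)."""
    return [[-a, -b] for a, b in combinations(variables, 2)]

def at_least_one(variables):
    return [list(variables)]

# DIMACS-style clauses for "exactly one of x1..x4 is true".
clauses = at_least_one([1, 2, 3, 4]) + at_most_one([1, 2, 3, 4])
for clause in clauses:
    print(" ".join(map(str, clause)) + " 0")
```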
Abstract:
This paper addresses the problem of learning Bayesian network structures from data based on score functions that are decomposable. It describes properties that strongly reduce the time and memory costs of many known methods without losing global optimality guarantees. These properties are derived for different score criteria such as Minimum Description Length (or Bayesian Information Criterion), Akaike Information Criterion and Bayesian Dirichlet Criterion. A branch-and-bound algorithm is then presented that integrates structural constraints with data in a way that guarantees global optimality. As an example, structural constraints are used to map the problem of structure learning in dynamic Bayesian networks onto a corresponding augmented Bayesian network. Finally, we show empirically the benefits of using these properties with state-of-the-art methods and with the new algorithm, which is able to handle larger data sets than before.
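A small sketch of why decomposability matters for the pruning properties described above: under BIC/MDL the network score is a sum of per-node family scores, so each node's candidate parent sets can be scored and bounded independently of the rest of the structure. The code is a generic illustration over discrete data in a NumPy integer array, not the paper's branch-and-bound algorithm; as a simplification, the penalty term counts only observed parent configurations rather than the full product of parent cardinalities.

```python
import numpy as np
from itertools import combinations
from math import log

def family_bic(data, child, parents):
    """BIC score of one node given a candidate parent set (discrete data,
    columns are variables, values are 0..r-1). Higher is better."""
    n, _ = data.shape
    r_child = int(data[:, child].max()) + 1
    # Group rows by the joint parent configuration.
    keys = [tuple(row) for row in data[:, parents]] if parents else [()] * n
    counts = {}
    for key, x in zip(keys, data[:, child]):
        counts.setdefault(key, np.zeros(r_child))[x] += 1
    loglik, q = 0.0, 0
    for vec in counts.values():
        q += 1
        nz = vec[vec > 0]
        loglik += float((nz * np.log(nz / vec.sum())).sum())
    penalty = 0.5 * log(n) * q * (r_child - 1)   # simplified BIC penalty
    return loglik - penalty

def best_parent_set(data, child, candidates, max_parents=2):
    """Exhaustively score small parent sets; decomposability means this table
    can be built once per node and reused by any global search procedure."""
    best = ([], family_bic(data, child, []))
    for k in range(1, max_parents + 1):
        for ps in combinations(candidates, k):
            s = family_bic(data, child, list(ps))
            if s > best[1]:
                best = (list(ps), s)
    return best

# Example: best parent set for node 0 among nodes 1..3 on random binary data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))
print(best_parent_set(X, child=0, candidates=[1, 2, 3]))
```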
Abstract:
This paper reports the progress made at JET-ILW on integrating the requirements of the reference ITER baseline scenario, with a normalized confinement factor of 1 at a normalized pressure of 1.8, together with a partially detached divertor, whilst maintaining these conditions over many energy confinement times. The 2.5 MA high-triangularity ELMy H-modes are studied in two different divertor configurations with D-gas injection and nitrogen seeding. The power load reduction with N seeding is reported. The relationship between the increase in energy confinement and pedestal pressure with triangularity is investigated. The operational space of both plasma configurations is studied, together with the ELM energy losses and the stability of the pedestal in unseeded and seeded plasmas. The achievement of stationary plasma conditions over many energy confinement times is also reported.
Abstract:
Over the last 15 years, the supernova community has endeavoured to directly identify progenitor stars for core-collapse supernovae discovered in nearby galaxies. These precursors are often visible as resolved stars in high-resolution images from space- and ground-based telescopes. The discovery rate of progenitor stars is limited by the local supernova rate and the availability and depth of archive images of galaxies, with 18 detections of precursor objects and 27 upper limits. This review compiles these results (from 1999 to 2013) in a distance-limited sample and discusses the implications of the findings. The vast majority of the detected progenitor stars are of type II-P, II-L, or IIb, with one type Ib progenitor system detected and many more upper limits for progenitors of Ibc supernovae (14 in all). The data for these 45 supernova progenitors illustrate a remarkable deficit of high-luminosity stars above an apparent limit of log L/L⊙ ≃ 5.1 dex. For a typical Salpeter initial mass function, one would expect to have found 13 high-luminosity and high-mass progenitors by now. There is, possibly, only one object in this time- and volume-limited sample that is unambiguously high-mass (the progenitor of SN 2009ip), although the nature of that supernova is still debated. The possible biases due to the influence of circumstellar dust, the luminosity analysis, and the sample selection methods are reviewed. It does not appear likely that these can explain the missing high-mass progenitor stars. This review concludes that the community's work to date shows that the observed populations of supernovae in the local Universe are not, on the whole, produced by high-mass (M ≳ 18 M⊙) stars. Theoretical explosions of model stars also predict that black hole formation and failed supernovae tend to occur above an initial mass of M ≃ 18 M⊙. The models also suggest that there is no simple single mass division for neutron star or black hole formation and that there are islands of explodability for stars in the 8-120 M⊙ range. The observational constraints are quite consistent with the bulk of stars above M ≃ 18 M⊙ collapsing to form black holes with no visible supernovae.
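As a rough check on the "expected 13 high-luminosity and high-mass progenitors" figure quoted above, assume a Salpeter initial mass function, $dN/dM \propto M^{-2.35}$, over the 8-120 M⊙ range (the exact normalization and detection weighting used in the review may differ):

$$
f_{>18} \;=\; \frac{\int_{18}^{120} M^{-2.35}\,dM}{\int_{8}^{120} M^{-2.35}\,dM}
\;=\; \frac{18^{-1.35} - 120^{-1.35}}{8^{-1.35} - 120^{-1.35}} \;\approx\; 0.3,
$$

so roughly a third of the 45 progenitor constraints, about 13-14 objects, would have been expected above 18 M⊙ if such stars produced detectable supernovae as readily as lower-mass stars.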