970 results for Lot-sizing


Relevance: 10.00%

Publisher:

Abstract:

Initial sizing procedures for aircraft stiffened panels that include the residual effects of the welding fabrication process are missing. Herein, experimental and finite element analyses are coupled to formulate an accurate and computationally efficient sizing procedure that enables designers to routinely consider panel fabrication via welding, accounting for the complex distortions and stresses induced by this manufacturing process. Validating experimental results demonstrate the need to consider welding-induced material property degradation, residual stresses and distortions, as these can reduce static strength performance. However, results from fuselage and wing trade studies, using the validated sizing procedure, establish that these potential reductions in strength performance may be overcome through local geometric tailoring during initial sizing, negating any weight penalty in the majority of design scenarios.

Relevance: 10.00%

Publisher:

Abstract:

The collisionally excited transient inversion scheme is shown to produce exceptionally high gain coefficients and gain-length products. Data are presented for Ne-like titanium and germanium and Ni-like silver X-ray lasers (XRLs) pumped using a combination of nanosecond- and picosecond-duration laser pulses. This method leads to a dramatic reduction in the required pump energy and makes down-sizing of XRLs possible, an important prerequisite if they are to become commonly used tools in the long term.

Relevance: 10.00%

Publisher:

Abstract:

In a dynamic reordering superscalar processor, the front-end fetches instructions and places them in the issue queue. Instructions are then issued by the back-end execution core. Until recently, the front-end was designed to maximize performance without considering energy consumption. The front-end fetches instructions as fast as it can until it is stalled by a full issue queue or some other blocking structure. This approach wastes energy: (i) speculative execution causes many wrong-path instructions to be fetched and executed, and (ii) the back-end execution rate is usually below its peak, yet front-end structures are dimensioned to sustain peak performance. Dynamically reducing the front-end instruction rate and the active size of front-end structures (e.g. the issue queue) is therefore a necessary performance-energy trade-off. Techniques proposed in the literature attack only one of these effects.
In previous work, we proposed Speculative Instruction Window Weighting (SIWW) [21], a fetch gating technique that allows both fetch gating and dynamic sizing of the instruction issue queue to be addressed. SIWW computes a global weight over the set of in-flight instructions. This weight depends on the number and types of in-flight instructions (non-branches, high-confidence or low-confidence branches, ...). The front-end instruction rate can be continuously adapted based on this weight. This paper extends the analysis of SIWW performed in previous work. It shows that SIWW performs better than previously proposed fetch gating techniques and that SIWW allows the size of the active instruction queue to be adapted dynamically.
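The gating idea behind SIWW can be caricatured in a few lines. The following is a hypothetical sketch, not the paper's implementation: the per-type weights, the threshold, and the function name are all invented for illustration.

```python
# Hypothetical sketch of Speculative Instruction Window Weighting (SIWW):
# each in-flight instruction contributes a type-dependent weight, and the
# front-end is gated when the total weight exceeds a threshold. The weight
# values and the threshold below are illustrative only.

WEIGHTS = {
    "non_branch": 1,          # ordinary instruction
    "high_conf_branch": 2,    # likely correct-path, modest weight
    "low_conf_branch": 8,     # likely mispredicted, heavy weight
}

def siww_gate(inflight, threshold=64):
    """Return True if fetch should be gated this cycle."""
    total = sum(WEIGHTS[kind] for kind in inflight)
    return total >= threshold

# Example: a window containing several low-confidence branches gates
# fetch well before the issue queue itself fills.
window = ["non_branch"] * 20 + ["low_conf_branch"] * 6
print(siww_gate(window))  # 20*1 + 6*8 = 68 >= 64 -> True
```

Because low-confidence branches carry a large weight, fetch slows down exactly when wrong-path work is most likely, which is the intuition the paper exploits.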

Relevance: 10.00%

Publisher:

Abstract:

In this paper, a novel approach to automatically sub-divide a complex geometry and apply an efficient mesh is presented. Following the identification and removal of thin-sheet regions from an arbitrary solid using the thick/thin decomposition approach developed by Robinson et al. [1], the technique here employs shape metrics generated using local sizing measures to identify long-slender regions within the thick body. A series of algorithms automatically partition the thick region into a non-manifold assembly of long-slender and complex sub-regions. A structured anisotropic mesh is applied to the thin-sheet and long-slender bodies, and the remaining complex bodies are filled with unstructured isotropic tetrahedra. The resulting semi-structured mesh possesses significantly fewer degrees of freedom than the equivalent unstructured mesh, demonstrating the effectiveness of the approach. The accuracy of the efficient meshes generated for a complex geometry is verified via a study that compares the results of a modal analysis with the results of an equivalent analysis on a dense tetrahedral mesh.

Relevance: 10.00%

Publisher:

Abstract:

Property lawyers are generally viewed as a serious lot, not prone to feverish bursts of excitement as we seek comfort and solace in established legal rules and precepts. In the same way, property law disputes tend to have a fairly low profile and fail to capture the public imagination in the same way as, for example, those involving criminal or human rights law. Such apparent indifference might seem a little strange, given the centrality of property in everyday human life and the significance which legal systems and individuals attach to property rights. However, there is one issue which always inflames passions amongst lawyers and non-lawyers alike: the acquisition of land through the doctrine of adverse possession, often described as ‘squatter’s rights’. No property-related topic is likely to light up a radio show phone-in switchboard quite like squatting.

Relevance: 10.00%

Publisher:

Abstract:

Since the UN report by the Brundtland Commission, sustainability in the built environment has mainly been seen from a technical focus on single buildings or products. With energy efficiency approaching 100%, fossil resources depleting and a considerable part of the world still in need of better prosperity, the playing field of a purely technical focus has become very limited. It will most probably not lead to the sustainable development needed to avoid irreversible effects on climate, energy provision and, not least, society.
Cities are complex structures of independently functioning elements, all of which are nevertheless connected to different forms of infrastructure, which supply the necessary resources or handle the release of waste material. With the current ambitions regarding carbon or energy neutrality, retreating again to the scale of a single building is likely to fail. Within an urban context a single building cannot become fully resource-independent, and, in our view, need not. Cities should be considered as organisms that can intelligently exchange resources and waste flows. Especially in terms of energy, the present situation in most cities is undesirable: there is simultaneous demand for heat and cold, and in summer a great deal of excess energy is lost that must be produced again in winter. The solution is a system that intelligently exchanges and stores essential resources, e.g. energy, and that optimally utilises waste flows.
This new approach is discussed and exemplified. The Rotterdam Energy Approach and Planning (REAP) is illustrated as a means for urban planning, whereas Swarm Planning is introduced as another nature-based principle for swift changes towards sustainability.

Relevance: 10.00%

Publisher:

Abstract:

Wireless sensor node platforms are highly diverse and highly constrained, particularly in power consumption. When choosing or sizing a platform for a given application, it is necessary to evaluate the impact of those choices at an early design stage. Applied to the computing platform implemented on the sensor node, this requires a good understanding of the workload it must perform. This workload, however, is highly application-dependent: it depends on the data sampling frequency together with application-specific data processing and management. It is thus necessary to have a model that can represent the workload of applications with various needs and characteristics. In this paper, we propose a workload model for wireless sensor node computing platforms. The model is based on a synthetic application that models the different computational tasks the computing platform performs to process sensor data. It allows the workload of a variety of applications to be modelled by tuning the data sampling rate and processing. A case study is performed by modelling different applications and showing how the model can be used for workload characterization. © 2011 IEEE.
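The core of such a synthetic-application model can be sketched very simply: express the workload as a set of processing tasks with per-sample costs, driven by a tunable sampling rate. The task names and cycle counts below are assumptions for the example, not figures from the paper.

```python
# Illustrative sketch of a synthetic workload model for a sensor-node
# computing platform: a set of per-sample processing tasks, scaled by a
# tunable sampling rate, yields the total processing load. All task
# names and cycle costs are invented for illustration.

def workload_cycles_per_second(sampling_hz, tasks):
    """Total processing load (cycles/s) for a given sampling rate."""
    return sampling_hz * sum(tasks.values())

tasks = {
    "acquire": 200,     # read the sensor
    "filter": 1500,     # per-sample filtering
    "aggregate": 300,   # update running statistics
}

# Two applications differing only in sampling rate:
low_rate = workload_cycles_per_second(10, tasks)     # 10 Hz  -> 20,000 cycles/s
high_rate = workload_cycles_per_second(1000, tasks)  # 1 kHz -> 2,000,000 cycles/s
print(low_rate, high_rate)
```

Swapping the task dictionary or the rate then characterizes a different application against the same candidate platform, which is the kind of early-stage sizing study the abstract describes.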

Relevance: 10.00%

Publisher:

Abstract:

Frustration – the inability to simultaneously satisfy all interactions – occurs in a wide range of systems including neural networks, water ice and magnetic systems. An example of the latter is the so-called spin ice in pyrochlore materials [1], which has attracted a lot of interest, not least due to the emergence of magnetic monopole defects when the ‘ice rules’ governing the local ordering break down [2]. However, it is not possible to directly measure the frustrated property – the direction of the magnetic moments – in such spin ice systems with current experimental techniques. This problem can be solved by instead studying artificial spin-ice systems, in which the molecular magnetic moments are replaced by nanoscale ferromagnetic islands [3-8]. Two different arrangements of the ferromagnetic islands have been shown to exhibit spin ice behaviour: a square lattice maintaining four moments at each vertex [3,8] and the Kagome lattice, which has only three moments per vertex but equivalent interactions between them [4-7]. Magnetic monopole defects have been observed in both types of lattice [7-8]. One of the challenges when studying these artificial spin-ice systems is that it is difficult to arrive at the fully demagnetised ground state [6-8].
Here we present a study of the switching behaviour of building blocks of the Kagome lattice as influenced by the termination of the lattice. Ferromagnetic islands of nominal size 1000 nm by 100 nm were fabricated in five-island blocks using electron-beam lithography and lift-off of evaporated 18 nm Permalloy (Ni80Fe20) films. Each block consists of a central island with four arms terminated by a different number and placement of ‘injection pads’, see Figure 1. The islands are single domain and magnetised along their long axis. The structures were grown on a 50 nm thick electron-transparent silicon nitride membrane to allow TEM observation, which was back-coated with a 5 nm film of Au to prevent charge build-up during the TEM experiments.
To study the switching behaviour, the sample was subjected to a magnetic field strong enough to magnetise all the blocks in one direction, see Figure 1. Each block obeys the Kagome lattice ‘ice rules’ of “2-in, 1-out” or “1-in, 2-out” in this fully magnetised state. Fresnel-mode Lorentz TEM images of the sample were then recorded as a magnetic field of increasing magnitude was applied in the opposite direction. While the Fresnel mode is normally used to image magnetic domain structures [9], for these types of samples it is possible to deduce the direction of the magnetisation from the Lorentz contrast [5]. All images were recorded at the same over-focus, judged to give good Lorentz contrast.
The magnetisation was found to switch at different magnitudes of the applied field for nominally identical blocks. However, trends could still be identified: all the blocks with any injection pads, regardless of placement and number, switched the direction of the magnetisation of their central island at significantly smaller magnitudes of the applied magnetic field than the blocks without injection pads. It can therefore be concluded that the addition of an injection pad lowers the energy barrier to switching the connected island, acting as a nucleation site for monopole defects. In these five-island blocks the defects immediately propagate through to the other side, but in a larger lattice the monopoles could potentially become trapped at a vertex and observed [10].
References

[1] M J Harris et al, Phys Rev Lett 79 (1997) 2554.
[2] C Castelnovo, R Moessner and S L Sondhi, Nature 451 (2008) 42.
[3] R F Wang et al, Nature 439 (2006) 303.
[4] M Tanaka et al, Phys Rev B 73 (2006) 052411.
[5] Y Qi, T Brintlinger and J Cumings, Phys Rev B 77 (2008) 094418.
[6] E Mengotti et al, Phys Rev B 78 (2008) 144402.
[7] S Ladak et al, Nature Phys 6 (2010) 359.
[8] C Phatak et al, Phys Rev B 83 (2011) 174431.
[9] J N Chapman, J Phys D 17 (1984) 623.
[10] The authors gratefully acknowledge funding from the EPSRC under grant number EP/D063329/1.

Relevance: 10.00%

Publisher:

Abstract:

Phylogenetic analysis of the sequence of the H gene of 75 measles virus (MV) strains (32 published and 43 new sequences) was carried out. The lineage groups described from comparison of the nucleotide sequences encoding the C-terminal regions of the MV N protein were, in almost all cases, the same as those derived from the H gene sequences. The databases document a number of distinct genotype switches that have occurred in Madrid (Spain). Well documented is the complete replacement of lineage group C2, the common European genotype at the time, by group D3 around the autumn of 1993. No further isolations of group C2 took place in Madrid after this time. The mutation rate of the H gene sequences of MV genotype D3 circulating in Madrid from 1993 to 1996 was very low (5 × 10^-4 per annum for a given nucleotide position). This is an order of magnitude lower than the rates of mutation observed in the HN genes of human influenza A viruses. The ratio of expressed to silent mutations indicated that the divergence was not driven by immune selection in this gene. Variation at amino acid 117 of the H protein (F or L) may be related to the ability of some strains to haemagglutinate only in the presence of salt. Adaptation of MV to different primate cell types was associated with very small numbers of mutations in the H gene. The changes could not be predicted when virus previously grown in human B cell lines was adapted to monkey Vero cells. In contrast, rodent brain-adapted viruses displayed substantially more amino acid sequence variation relative to normal MV strains. There was no convincing evidence for recombination between MV genotypes.
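The reported per-site rate is the kind of figure obtained by dividing observed substitutions by gene length and elapsed time. A back-of-the-envelope sketch, with illustrative numbers (the substitution count and the ~1850 nt H gene length here are assumptions, not the actual MV data):

```python
# Rough sketch of a per-site, per-year mutation rate estimate:
# substitutions observed between sequences sampled some years apart,
# normalised by gene length and elapsed time. Input values are
# invented for illustration.

def per_site_rate_per_year(substitutions, gene_length_nt, years):
    return substitutions / (gene_length_nt * years)

# e.g. ~3 substitutions over ~1850 nt of H gene in 3 years:
rate = per_site_rate_per_year(3, 1850, 3)
print(f"{rate:.1e}")  # on the order of 5e-4, matching the reported figure
```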

Relevance: 10.00%

Publisher:

Abstract:

The potential that laser-based particle accelerators offer to solve the sizing and cost issues arising with conventional proton therapy has generated great interest in the understanding and development of laser ion acceleration, and in investigating the radiobiological effects induced by laser-accelerated ions. Laser-driven ions are produced in bursts of ultra-short duration resulting in ultra-high dose rates, and an investigation at Queen's University Belfast was carried out to explore this virtually unexplored regime of cell radiobiology. This employed the TARANIS terawatt laser producing protons in the MeV range for proton irradiation, with dose rates exceeding 10 Gy/s in a single exposure. A clonogenic assay was implemented to analyse the biological effect of proton irradiation on V79 cells, which, when compared to data obtained with the same cell line irradiated with conventionally accelerated protons, was found to show no significant difference. A relative biological effectiveness (RBE) of 1.4 ± 0.2 at 10% survival fraction was estimated from a comparison with a 225 kVp X-ray source. © 2013 SPIE.
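An RBE at 10% survival is the ratio of the reference-radiation dose to the test-radiation dose that produces the same 10% survival fraction. A minimal sketch of that calculation, assuming survival curves fitted with the standard linear-quadratic model; the alpha/beta values below are invented for illustration, not the study's fitted parameters.

```python
# Sketch of an RBE-at-10%-survival calculation from clonogenic-assay
# survival curves. Each curve is modelled as SF = exp(-(alpha*D + beta*D^2))
# (linear-quadratic model); the dose giving SF = 0.10 is found for the
# reference X-rays and for the protons, and RBE is their ratio.
import math

def dose_at_survival(alpha, beta, sf=0.10):
    """Solve SF = exp(-(alpha*D + beta*D^2)) for D (positive root)."""
    c = -math.log(sf)  # alpha*D + beta*D^2 = -ln(SF)
    return (-alpha + math.sqrt(alpha**2 + 4 * beta * c)) / (2 * beta)

d_xray = dose_at_survival(alpha=0.15, beta=0.03)     # reference 225 kVp X-rays (made-up fit)
d_proton = dose_at_survival(alpha=0.30, beta=0.035)  # test protons (made-up fit)
rbe = d_xray / d_proton
print(round(rbe, 2))  # ~1.35 with these illustrative parameters
```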

Relevance: 10.00%

Publisher:

Abstract:

Recovery of cellulose fibres from paper mill effluent has been studied using common polysaccharides (biopolymers), namely Guar gum, Xanthan gum and Locust bean gum, as flocculants. Guar gum is commonly used in paper sizing and is routinely used in paper making. The results have been compared with the performance of alum, a common coagulant and a key ingredient of the paper industry. Guar gum recovered about 3.86 mg/L of fibre and was the most effective of the biopolymers. Settling velocity distribution curves demonstrated that Guar gum settled the fibres faster than the other biopolymers; however, alum displayed a higher particle removal rate than all the biopolymers at every settling velocity. At a settling velocity of 0.5 cm/min, alum, Guar gum, Xanthan gum and Locust bean gum removed 97.46%, 94.68%, 92.39% and 92.46% of the turbidity of the raw effluent, respectively. The conditions giving the lowest sludge volume index (pH, dose and mixing speed) were optimised for Guar gum, the most effective of the biopolymers. Response surface methodology was used to design all experiments, and an optimum operational setting was proposed. The test results indicate similar performance of alum and Guar gum in terms of floc settling velocities and sludge volume index. Since Guar gum is a plant-derived natural substance, it is environmentally benign and offers the paper mills a green treatment option for pulp recycling.
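The turbidity-removal percentages quoted above follow from a simple ratio of removed to initial turbidity. A sketch with illustrative NTU values (the raw and residual turbidities below are invented; only the alum removal fraction is chosen to echo the reported 97.46%):

```python
# Percent turbidity removal at a given settling velocity, computed from
# raw and residual (treated) turbidity. NTU values are illustrative.

def turbidity_removal_pct(raw_ntu, treated_ntu):
    return 100.0 * (raw_ntu - treated_ntu) / raw_ntu

raw = 500.0  # hypothetical raw-effluent turbidity (NTU)
print(round(turbidity_removal_pct(raw, 12.7), 2))  # 97.46 (alum-like)
```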

Relevance: 10.00%

Publisher:

Abstract:

A parallel robot (PR) is a mechanical system that utilizes multiple computer-controlled limbs to support one common platform or end effector. Compared to a serial robot, a PR generally has higher precision and dynamic performance and can therefore be applied to many applications. PR research has attracted considerable attention in the last three decades, but many challenging issues must still be solved before PRs achieve their full potential. This chapter introduces the state of the art in PRs in terms of synthesis, design, analysis, and control. Future directions are also discussed at the end.

Relevance: 10.00%

Publisher:

Abstract:

Previous studies on work instruction delivery for complex assembly tasks have shown that the mode and delivery method of the instructions in an engineering context can influence both build time and product quality. The benefits of digital, animated instructional formats compared to static pictures and text-only formats have already been demonstrated. Although pictograms have found applications for relatively straightforward operations and activities, their applicability to relatively complex assembly tasks has yet to be demonstrated. This study compares animated instructions and pictograms for the assembly of an aircraft panel. Based on a series of build experiments, the work records build time as well as the number of media references to measure and compare build efficiency. The number of build errors and the time required to correct them is also recorded. The experiments involved five participants completing five builds over five consecutive days for each media type. Results showed that, on average, the total build time was 13.1% lower for the group using animated instructions. The benefit of animated instructions on build time was most prominent in the first three builds; by build four this benefit had disappeared. The two groups made a similar number of instructional references over the five builds, but the pictogram users required far more references during build 1. There were more errors among the group using pictograms, requiring more time for corrections during the build.

Relevance: 10.00%

Publisher:

Abstract:

This study describes an innovative monolith structure designed for applications in automotive catalysis using an advanced manufacturing approach developed at Imperial College London. The production process combines extrusion with phase inversion of a ceramic-polymer-solvent mixture in order to design highly ordered substrate micro-structures that offer improvements in performance, including reduced PGM loading, reduced catalyst ageing and reduced backpressure.

This study compares the performance of the novel substrate for CO oxidation against commercially available 400 cpsi and 900 cpsi catalysts using gas concentrations and a flow rate equivalent to those experienced by a full catalyst brick when attached to a vehicle. Due to the novel micro-structure, no washcoat was required for the initial testing; instead, 13 g/ft3 of Pd was deposited directly throughout the substrate structure.

Initial results for CO oxidation indicate that the advanced micro-structure leads to enhanced conversion efficiency. Despite a 79% reduction in metal loading and the absence of a washcoat, the novel substrate sample performs well, with a light-off temperature (LOT) only 15 °C higher than that of the commercial 400 cpsi sample.

To test the effects of catalyst ageing on light-off temperature, each sample was aged statically at a temperature of 1000 °C, based on the Bench Ageing Time (BAT) equation. The novel substrate performed impressively when compared to the commercial samples, with a variation in light-off temperature of only 3% after 80 equivalent hours of ageing, compared to 12% and 25% for the 400 cpsi and 900 cpsi monoliths, respectively.
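Bench-ageing-time methods of the kind referenced above rest on an Arrhenius temperature equivalence: time at a bench temperature is converted to equivalent time at a reference temperature. The sketch below uses one common form of that equivalence; the exact BAT equation and the thermal reactivity constant R used in the study are not given in the abstract, so the form and the R = 17500 K value here are assumptions.

```python
# Hedged sketch of an Arrhenius-based bench-ageing-time equivalence:
# t_eq = t_bench * exp(R * (1/T_ref - 1/T_bench)), with temperatures in
# kelvin and R a thermal reactivity constant (in K). Both the form and
# R = 17500 K are assumptions for illustration.
import math

def equivalent_hours(bench_hours, t_bench_k, t_ref_k, r=17500.0):
    return bench_hours * math.exp(r * (1.0 / t_ref_k - 1.0 / t_bench_k))

# Hours at 1000 degC (~1273 K) count for many more equivalent hours at a
# cooler reference temperature, which is why static ageing at 1000 degC
# compresses long-term ageing into tens of bench hours:
print(equivalent_hours(1.0, 1273.0, 1073.0) > 1.0)  # True
```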

Relevance: 10.00%

Publisher:

Abstract:

Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental to power dissipation. However, for certain signal processing systems, such as those used in color image processing, we note that effective trade-offs can be achieved between Vdd scaling, process tolerance and output quality. In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power, variation-tolerant architectures for color interpolation. The proposed architecture supports graceful degradation in PSNR (peak signal-to-noise ratio) under aggressive voltage scaling as well as extreme process variations in sub-70 nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
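The PSNR metric that quantifies this graceful degradation is standard: it depends only on the mean squared error of the reconstructed image, so errors confined to less important computations cost little quality. A minimal sketch (the pixel arrays below are illustrative):

```python
# PSNR for 8-bit images: 10*log10(MAX^2 / MSE). Errors concentrated in
# "less important" computations yield a small MSE and hence a high PSNR,
# which is the trade-off the architecture exploits.
import math

def psnr(original, degraded, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 140, 160]
small_err = [101, 120, 139, 160]   # errors only in less important computations
large_err = [110, 130, 150, 170]   # errors everywhere
print(psnr(ref, small_err) > psnr(ref, large_err))  # True
```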