51 results for Lot sizing
Abstract:
Recovery of cellulose fibres from paper mill effluent has been studied using common polysaccharides or biopolymers such as Guar gum, Xanthan gum and Locust bean gum as flocculants. Guar gum is commonly used in sizing paper and is routinely used in paper making. The results have been compared with the performance of alum, which is a common coagulant and a key ingredient of the paper industry. Guar gum recovered about 3.86 mg/L of fibre and was the most effective among the biopolymers. Settling velocity distribution curves demonstrated that Guar gum was able to settle the fibres faster than the other biopolymers; however, alum displayed a higher particle removal rate than all the biopolymers at every settling velocity. Alum, Guar gum, Xanthan gum and Locust bean gum removed 97.46%, 94.68%, 92.39% and 92.46% of the turbidity of raw effluent, respectively, at a settling velocity of 0.5 cm/min. The conditions for obtaining the lowest sludge volume index, such as pH, dose and mixing speed, were optimised for Guar gum, the most effective among the biopolymers. Response surface methodology was used to design all experiments, and an optimum operational setting was proposed. The test results indicate similar performance of alum and Guar gum in terms of floc settling velocities and sludge volume index. Since Guar gum is a plant-derived natural substance, it is environmentally benign and offers a green treatment option to paper mills for pulp recycling.
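As a point of reference for the figures quoted above, turbidity removal efficiency and the sludge volume index follow standard definitions; a minimal sketch (the numeric values below are hypothetical, not data from the study):

```python
def turbidity_removal(initial_ntu: float, final_ntu: float) -> float:
    """Percent turbidity removed: (T0 - T) / T0 * 100."""
    return (initial_ntu - final_ntu) / initial_ntu * 100.0

def sludge_volume_index(settled_ml_per_l: float, mlss_g_per_l: float) -> float:
    """SVI in mL/g: settled sludge volume (mL/L) per gram of suspended solids."""
    return settled_ml_per_l / mlss_g_per_l

# Hypothetical example: an effluent polished from 120 NTU to 6.4 NTU
print(f"removal = {turbidity_removal(120.0, 6.4):.2f}%")    # ~94.67%
print(f"SVI = {sludge_volume_index(180.0, 2.0):.0f} mL/g")  # 90 mL/g
```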
Abstract:
A parallel robot (PR) is a mechanical system that utilizes multiple computer-controlled limbs to support one common platform or end effector. Compared to a serial robot, a PR generally has higher precision and dynamic performance and can therefore be applied to many applications. PR research has attracted a lot of attention in the last three decades, but many challenging issues must still be solved before PRs reach their full potential. This chapter introduces the state of the art of PRs in the aspects of synthesis, design, analysis, and control. Future directions are also discussed at the end.
Abstract:
Previous studies on work instruction delivery for complex assembly tasks have shown that the mode and delivery method of instructions in an engineering context can influence both build time and product quality. The benefits of digital, animated instructional formats compared with static pictures and text-only formats have already been demonstrated. Although pictograms have found applications in relatively straightforward operations and activities, their applicability to relatively complex assembly tasks has yet to be demonstrated. This study compares animated instructions and pictograms for the assembly of an aircraft panel. Based on a series of build experiments, the work records build time as well as the number of media references to measure and compare build efficiency. The number of build errors and the time required to correct them is also recorded. The experiments involved five participants completing five builds over five consecutive days for each media type. Results showed that, on average, the total build time was 13.1% lower for the group using animated instructions. The benefit of animated instructions on build time was most prominent in the first three builds; by build four this benefit had disappeared. There was a similar number of instructional references for the two groups over the five builds, but the pictogram users required far more references during build 1. There were more errors among the group using pictograms, requiring more time for corrections during the build.
Abstract:
This study describes an innovative monolith structure designed for applications in automotive catalysis using an advanced manufacturing approach developed at Imperial College London. The production process combines extrusion with phase inversion of a ceramic-polymer-solvent mixture in order to design highly ordered substrate micro-structures that offer improvements in performance, including reduced PGM loading, reduced catalyst ageing and reduced backpressure.
This study compares the performance of the novel substrate for CO oxidation against commercially available 400 cpsi and 900 cpsi catalysts using gas concentrations and a flow rate equivalent to those experienced by a full catalyst brick when attached to a vehicle. Due to the novel micro-structure, no washcoat was required for the initial testing, and 13 g/ft³ of Pd was deposited directly throughout the substrate structure.
Initial results for CO oxidation indicate that the advanced micro-structure leads to enhanced conversion efficiency. Despite a 79% reduction in metal loading and the absence of a washcoat, the novel substrate sample performs well, with a light-off temperature (LOT) only 15 °C higher than the commercial 400 cpsi sample.
To test the effects of catalyst ageing on light-off temperature, each sample was aged statically at a temperature of 1000 °C, based on the Bench Ageing Time (BAT) equation. The novel substrate performed impressively when compared to the commercial samples, with a variation in light-off temperature of only 3% after 80 equivalent hours of ageing, compared to 12% and 25% for the 400 cpsi and 900 cpsi monoliths, respectively.
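For context, the Bench Ageing Time equation mentioned above is an Arrhenius-type relation; a commonly cited regulatory form is sketched below (the abstract does not give the paper's exact parameters, so the symbols are the standard ones):

```latex
% Arrhenius-type bench ageing time relation (commonly cited form):
%   t_e : equivalent ageing time at the reference temperature T_r
%   t_v : time spent at the bench ageing temperature T_v
%   R   : catalyst thermal reactivity coefficient (in kelvin)
t_e = t_v \, e^{\, R \left( \frac{1}{T_r} - \frac{1}{T_v} \right)}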
Abstract:
Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental to power dissipation. However, for certain signal processing systems, such as those used in color image processing, we noted that effective trade-offs can be achieved between Vdd scaling, process tolerance and "output quality". In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power, variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (Peak Signal to Noise Ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
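For reference, the PSNR metric that the architecture trades against power is defined as 10·log10(MAX²/MSE); a minimal sketch in Python (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```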
Abstract:
Much recent scholarship has been critical of the concept of a Dál Riatic migration to, or colonisation of, Argyll. Scepticism about the accuracy of the early medieval accounts of this population movement, arguing that these are late amendments to early sources, coupled with an apparent lack of archaeological evidence for such a migration, has led to its rejection. It is argued here, however, that this rejection has been based on too narrow a reading of the historical sources and that there are several early accounts which, while differing in detail, agree on one point of substance: that the origin of Scottish Dál Riata lies in Ireland. Moreover, the use of archaeological evidence to suggest no migration to Argyll by the Dál Riata is flawed, misunderstanding the nature of early migrations and how they might be archaeologically identified. It is proposed that there is in fact considerable evidence for migration to Argyll by the Dál Riata, in the form of settlement and artefactual evidence, but that it is to be found in Ireland, through the mechanism of counterstream migration, rather than in Scotland.
Abstract:
In the development of aerospace composite components, designing for reduced manufacturing layup cost and structural complexity is increasingly important. While the advantage of composite materials is the ability to tailor designs to various structural loads for minimum mass, the challenge is obtaining a design that is manufacturable and minimizes local ply incompatibility. The focus of the presented research is understanding how the relationships between mass, manufacturability and design complexity, under realistic loads and design requirements, can be affected by enforcing ply continuity in the design process. Presented are a series of sizing case studies on an upper wing cover, designed using conventional analyses and the tabular laminate design process. Introducing skin ply continuity constraints can generate skin designs with minimal ply discontinuities, fewer ply drops and larger ply areas than designs not constrained for continuity. However, the reduced design freedom associated with the addition of these constraints results in a weight penalty over the total wing cover. Perhaps more interestingly, when considering manual hand layup, the reduced design complexity does not translate into a reduced recurring manufacturing cost. In contrast, heavier wing cover designs appear to take more time to lay up regardless of the laminate design complexity. © 2012 AIAA.
Abstract:
In existing WiFi-based localization methods, smart mobile devices consume a considerable amount of power because the WiFi interface must be used for frequent AP scanning during the localization process. In this work, we design an energy-efficient indoor localization system called ZigBee-assisted indoor localization (ZIL), based on WiFi fingerprints obtained via ZigBee interference signatures. ZIL uses ZigBee interfaces to collect mixed WiFi signals, which include non-periodic WiFi data and periodic beacon signals. However, WiFi APs cannot be identified from these signals by ZigBee interfaces directly. To address this issue, we propose a method for detecting WiFi APs to form WiFi fingerprints from the signals collected by ZigBee interfaces. We also propose a novel fingerprint matching algorithm to align a pair of fingerprints effectively. To improve the localization accuracy, we design the K-nearest neighbor (KNN) method with three different weighted distances and find that the KNN algorithm with the Manhattan distance performs best. Experiments show that ZIL can achieve a localization accuracy of 87%, which is competitive with state-of-the-art WiFi fingerprint-based approaches, and saves 68% of energy on average compared to the approach based on the WiFi interface.
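As an illustration of the matching step, here is a minimal weighted-KNN sketch using the Manhattan distance over RSSI fingerprints (the names and data layout are assumptions of this sketch, not the ZIL implementation):

```python
import numpy as np

def knn_locate(query, fingerprints, positions, k=3):
    """Estimate a position from a WiFi fingerprint (one RSSI value per AP).

    query        -- 1-D array of RSSI values for the detected APs
    fingerprints -- 2-D array, one stored fingerprint per surveyed point
    positions    -- 2-D array of (x, y) coordinates for each surveyed point
    """
    # Manhattan (L1) distance: reported best-performing in the abstract.
    dists = np.sum(np.abs(fingerprints - query), axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weighting of the k nearest surveyed points.
    weights = 1.0 / (dists[nearest] + 1e-9)
    return np.average(positions[nearest], axis=0, weights=weights)
```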
Abstract:
Soil aggregation has received a lot of attention in recent years; however, the focus has mostly been on soil microorganisms or larger soil fauna, especially earthworms. The impact of the large group of microarthropods, e.g. Collembola and Acari, is nearly unknown and hence underrepresented in the literature. Here we propose and discuss potential direct and indirect mechanisms by which microarthropods could influence this process, with a focus on collembolans, which are in general a relatively well studied taxon. Indirect mechanisms are likely to have larger impacts on soil aggregation than direct effects. The variety of indirect mechanisms based on the provision of organic material such as faecal pellets, molts and necromass as a food source for microorganisms is high, and given the available evidence we propose that these mechanisms are the most influential. We highlight the need to overcome the challenges of culturing and handling these animals in order to design small-scale experiments and field studies that would enable us to understand the role of the different functional groups, their interaction with other soil fauna and the impact of land use practices on soil aggregation.
Abstract:
Stephen B. Dobranski, Milton Quarterly 49.3 (October, 2015), 181-4:
'By addressing classical and neo-Latin works with which Milton's poems appear to engage, Haan has pursued something unattempted yet. Her erudite and engaging commentary on the Poemata is the most extensive and impressive that I have encountered in any edition ... Haan's discussion of Milton's Poemata - including the Testimonia, the one Italian and four Latin encomia by the poet's acquaintances published in 1645 and 1673 - is remarkably detailed and well-researched. In these sections, readers learn, for example, how Milton's Epitaphium Damonis borrows from both classical writers (Theocritus, Moschus) and contemporary models (Castiglione, Zanchi) while transcending all of them through a pattern of resurrection motifs. Or, readers can discover affinities between Milton's lament on the death of the Bishop of Ely and a poem by the Italian humanist Hieronymo Aleander, Jr., or learn about the connections between Milton's Elegia Quinta and George Buchanan's Maiae Calendae ... The Shorter Poems is a scholarly achievement of the highest order.'
Noam Reisner, Review of English Studies 65 (2014), 744-5:
‘Haan shines with her Neo-Latinist expertise by offering a vivid separate introduction to the Latin poems, which sets up Milton’s poemata specifically within the Neo-Latin contexts of the seventeenth century, thereby dispelling any remaining view of these poems as juvenilia (a view which results from reading the poems chronologically). … The present volume will instantly establish itself as the definitive resource for any reader interested in Milton’s shorter poems, and it is scarcely imaginable that it will ever be eclipsed or be in need of replacing. Its contribution is important in all areas, especially in providing for the first time in a single volume truly valuable documents which can teach us a lot more about Milton’s poetic development than simply reading the poems in chronological sequence. But perhaps, this edition’s greatest achievement is the way in which it succeeds in giving Milton’s Latin poems the pride of place they have long deserved as fully integral to Milton’s complete poetic imagination. Haan’s specific achievement in this regard is less in updating the translations than in providing a different context through which to look at the Latin poems themselves. Haan’s detailed commentaries set the Latin poems in a completely fresh light which looks beyond the obvious classical references and allusions, noted by Carey and many other editors, to Milton’s complex engagement with the Neo-Latin literary culture of his time. It is this aspect of the volume, more than anything else, which vindicates its essentialness.'
Abstract:
The republican idea of non-domination stresses the importance of certain social relationships for a person’s freedom, showing that freedom is a social-relational state. While the idea of freedom as non-domination receives a lot of attention in the literature, republican theorists say surprisingly little about equality. Therefore, the aim of this paper is to carve out the contours of a republican conception of equality. In so doing, I will argue that republican accounts of equality share a significant normative overlap with the idea of social equality. However, closer analysis of Philip Pettit’s account of ‘expressive egalitarianism’ (which Pettit sees as inherently connected to non-domination) and recent theories of social equality shows that republican non-domination – in contrast to what Pettit seems to claim – is not sufficient for securing (republican) social equality. In order to secure social equality for all, republicans would have to go beyond non-domination.
Abstract:
Complex cellular responses to diverse stimuli can be characterized using emerging chip-based technologies.
The p53 pathway is critical to maintaining the integrity of the genome in multicellular organisms. The p53 gene is activated in response to DNA damage and encodes a transcription factor [1], which in turn activates genes that arrest cell growth and induce apoptosis, thereby preventing the propagation of genetically damaged cells. It is the most important known tumor suppressor gene: perhaps half of all human neoplasms have mutations in p53, and there is a remarkable concordance between oncogenic mutation and the loss of p53 transcriptional activity [2]. There is also compelling experimental evidence that loss of p53 function (by whatever means) is one of the key oncogenic steps in human cells, along with altered telomerase activity and expression of mutant ras [3]. So far, however, relatively few of the genes regulated by p53 have been identified, and it is not even known how many binding sites there are for p53 in the genome, although an estimate based on the incidence of the canonical p53 consensus binding site (four palindromic copies of the sequence 5'-PuPuPuGA/T-3', where Pu is either purine) in a limited region suggests there may be as many as 200 to 300, possibly representing the same number of p53-responsive genes [4]. This makes the p53 response an attractive target for the emerging techniques for global analysis of gene expression, and two recent reports [5,6] illustrate the ways in which these techniques can be used to elucidate the spectrum of genes regulated by this key transcription factor. Vogelstein and colleagues [5] have used serial analysis of gene expression (SAGE) to identify 34 genes that exhibit at least a 10-fold upregulation in response to inducible expression of p53; Tanaka et al. [6] have used differential display to identify p53R2, a homolog of the ribonucleotide reductase small subunit (R2), as a target gene, thereby implicating p53 directly in the repair of DNA damage for the first time.
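To make the site-counting idea concrete, here is a toy scan for the consensus pattern described above (four tandem copies of 5'-PuPuPuGA/T-3', with Pu = A or G); the pattern encoding and window logic are this sketch's assumptions, not the method behind the published estimate:

```python
import re

# One copy of the consensus element: Pu Pu Pu G (A or T), where Pu = A or G.
COPY = r"[AG]{3}G[AT]"
# Four tandem copies, per the canonical site described above; tolerating
# spacers or mismatches (as real scans must) is beyond this toy sketch.
SITE = re.compile(rf"(?=((?:{COPY}){{4}}))")  # lookahead => overlapping hits

def count_p53_like_sites(sequence: str) -> int:
    """Count (possibly overlapping) occurrences of the toy consensus."""
    return sum(1 for _ in SITE.finditer(sequence.upper()))

print(count_p53_like_sites("AAAGT" * 4))  # 1: four tandem copies of AAAGT
```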
Abstract:
Recent studies predict elevated and accelerating rates of species extinctions over the 21st century, due to climate change and habitat loss. Considering that such primary species loss may initiate cascades of secondary extinctions and push systems towards critical tipping points, we urgently need to increase our understanding of whether certain sequences of species extinctions can be expected to be more devastating than others. Most theoretical studies addressing this question have used a topological (non-dynamical) approach to analyse the probability that food webs will collapse below a fixed threshold value in species richness when subjected to different sequences of species loss. Typically, these studies have considered neither the possibility of dynamical responses of species nor the possibility that conclusions may depend on the value of the collapse threshold. Here we analyse how sensitive conclusions on the importance of different species are to the threshold value of food web collapse. Using dynamical simulations, in which we expose model food webs to a range of extinction sequences, we evaluate the reliability of the most frequently used index, R50, as a measure of food web robustness. In general, we find that R50 is a reliable measure and that the identification of destructive deletion sequences is fairly robust within a moderate range of collapse thresholds. At the same time, however, focusing on R50 only hides a lot of interesting information on the disassembly process and can, in some cases, lead to incorrect conclusions on the relative importance of species in food webs.
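For readers unfamiliar with the index, R50 is conventionally defined as the fraction of primary species removals required before the community falls below 50% of its original species richness. A minimal topological sketch follows (the paper itself uses dynamical simulations; the collapse threshold is a parameter here precisely because the abstract questions its influence):

```python
import networkx as nx

def r50(food_web: nx.DiGraph, removal_sequence, basal, threshold=0.5):
    """Topological R50: fraction of primary removals needed to drive species
    richness below `threshold` of its initial value (1.0 if never reached).

    food_web -- directed graph with edges pointing from resource to consumer
    basal    -- set of species that need no prey (primary producers)
    """
    web = food_web.copy()
    n0 = web.number_of_nodes()
    for i, species in enumerate(removal_sequence, start=1):
        if species in web:
            web.remove_node(species)
        # Cascade of secondary extinctions: non-basal species with no prey left.
        doomed = [s for s in web if s not in basal and web.in_degree(s) == 0]
        while doomed:
            web.remove_nodes_from(doomed)
            doomed = [s for s in web if s not in basal and web.in_degree(s) == 0]
        if web.number_of_nodes() < threshold * n0:
            return i / n0
    return 1.0
```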
Abstract:
The impact of buckling containment features on the stability of thin-gauge metallic stiffened fuselage panels has previously been demonstrated. With continuing developments in manufacturing technology, such as welding, extrusion, machining, and additive layer manufacture, understanding the benefits of additional panel design features for heavier applications, such as wing panels, is timely. Compression testing of thick-gauge panels with and without buckling containment features has therefore been undertaken to verify buckling and collapse behaviors and validate sizing methods. The experimental results demonstrated individual panel mass savings on the order of 9%, and wing cover design studies demonstrated mass savings on the order of 4 to 13%, dependent on aircraft size and material choice.
Abstract:
We present a fully-distributed self-healing algorithm, dex, that maintains a constant-degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders, whose expansion properties hold deterministically, that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion, which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only O(log n) rounds and O(log n) messages are needed with high probability (n is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table on top of dex with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes.
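The deterministic guarantee concerns graph expansion, which can be certified spectrally; a small sketch that measures the spectral gap of a d-regular graph follows (illustrative of the property dex maintains, not of the dex algorithm itself):

```python
import networkx as nx
import numpy as np

def spectral_gap(graph: nx.Graph, degree: int) -> float:
    """Return 1 - lambda_2 for a d-regular graph; a gap bounded away from
    zero certifies constant edge expansion (via Cheeger's inequality)."""
    adjacency = nx.to_numpy_array(graph) / degree  # normalized adjacency
    eigenvalues = np.sort(np.abs(np.linalg.eigvalsh(adjacency)))[::-1]
    return 1.0 - eigenvalues[1]

graph = nx.random_regular_graph(d=4, n=100, seed=1)
print(f"spectral gap: {spectral_gap(graph, degree=4):.3f}")
```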
Gopal Pandurangan has been supported in part by Nanyang Technological University Grant M58110000, Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 2 Grant MOE2010-T2-2-082, MOE AcRF Tier 1 Grant MOE2012-T1-001-094, and the United States-Israel Binational Science Foundation (BSF) Grant 2008348. Peter Robinson has been supported by Grant MOE2011-T2-2-042 “Fault-tolerant Communication Complexity in Wireless Networks” from the Singapore MoE AcRF-2. Work done in part while the author was at the Nanyang Technological University and at the National University of Singapore. Amitabh Trehan has been supported by the Israeli Centers of Research Excellence (I-CORE) program (Center No. 4/11). Work done in part while the author was at Hebrew University of Jerusalem and at the Technion and supported by a Technion fellowship.