943 results for set-point
Abstract:
For sugar factories with cogeneration plants, major changes to the process stations have been undertaken to reduce the consumption of exhaust steam from the turbines and maximise the generated power. In many cases the process steam consumption has been reduced from greater than 52% on cane to ~40% on cane. The main changes have been to install additional evaporation area at the front of the set, operate the pan stages on vapour from No 1 or No 2 effects and undertake juice heating using vapour bleed from evaporators as far down the set as the penultimate stage. Operationally, one of the main challenges has been to develop a control system for the evaporators that addresses the objectives of juice processing rate (throughput) and steam economy, while producing syrup consistently at the required brix and providing an adequate and consistent vapour pressure for the pan stage operations. The cyclic demand for vapour by batch pans causes process disturbances through the evaporator set, and these must be regulated effectively to satisfy the above objectives for the evaporator station. The cyclic pan stage vapour demand has been modelled to quantify its impact on juice rate, steam economy, syrup brix and head space pressures in the evaporators. Experiences with the control schemes used at Pioneer and Rocky Point Mills are discussed. For each factory the paper provides information on (a) the control system used, the philosophy behind it and the experiences that led to the current control system; (b) the performance of the control system in handling the disturbances imposed by the pan stage and operating within other constraints of the factory; and (c) deficiencies in the current system and plans for further improvements. Other processing changes to boost the performance of the evaporators are also discussed.
Abstract:
An approach is proposed and applied to five industries to show how phenomenology can be valuable in rethinking consumer markets (Popp & Holt, 2013). The purpose of this essay is to highlight the potential implications that 'phenomenological thinking' brings for competitiveness and innovation (Sanders, 1982), thus helping managers be more innovative in their strategic marketing decisions (i.e. market creation, positioning, branding). Phenomenology is in fact a way of thinking, besides and before being a qualitative research procedure, and a very practical exercise that strategic managers can master and apply as successfully as other scientists have already done in their fields of study (e.g. sociology, psychology, psychiatry, and anthropology). Two fundamental considerations justify this research: a lack of distinctiveness among firms due to high levels of competition, and consumers no longer knowing what they want (i.e. no more needs). The authors show how the classical mental framework generally used by practitioners to study markets appears on the one hand to be established and systematic in the life of a company, while on the other it is no longer adequate to meet the needs of innovation required to survive. To the classic principles of objectivity, generality, and psycho-sociology the authors counterpose the imaginary, eidetic-phenomenological reduction, and an existential perspective. From a theoretical point of view, this paper introduces a set of functioning rules applicable to achieving innovation in any market and useful for identifying cultural practices inherent in the act of consumption.
Abstract:
Numeric sets can be used to store and distribute important information such as currency exchange rates and stock forecasts. It is useful to watermark such data to prove ownership in case of illegal distribution. This paper analyzes the numeric set watermarking model presented by Sion et al. in “On watermarking numeric sets”, identifies its weaknesses, and proposes a novel scheme that overcomes these problems. One weakness of Sion’s watermarking scheme is the requirement that the set be normally distributed, which is not true of many numeric sets such as forecast figures. Experiments indicate that the scheme is also susceptible to subset addition and secondary watermarking attacks. The watermarking model we propose can be used for numeric sets with arbitrary distribution. Theoretical analysis and experimental results show that the scheme is strongly resilient against sorting, subset selection, subset addition, distortion, and secondary watermarking attacks.
Abstract:
Ever since Cox et al. published their paper, “A Secure, Robust Watermark for Multimedia”, in 1996 [6], there has been tremendous progress in multimedia watermarking. The same pattern re-emerged when Agrawal and Kiernan published their work “Watermarking Relational Databases” in 2001 [1]. However, little attention has been given to primitive data collections, with only a handful of research works known to the authors [11, 10]. This is primarily due to the absence of an attribute that differentiates marked items from unmarked items during the insertion and detection processes. This paper presents a distribution-independent watermarking model that is secure against secondary watermarking in addition to conventional attacks such as data addition, deletion and distortion. The low false positives and high capacity provide additional strength to the scheme. These claims are backed by experimental results provided in the paper.
Abstract:
This project explores the issues confronted when authoring a previously authored story, one received from history. Using the defection of the Soviet spies Vladimir and Evdokia Petrov as its focal point, it details how a screenwriter addresses issues arising in the adaptation of both fictional and biographical representations suitable for contemporary cinema. Textual fidelity, concepts of interpretation, aesthetics and audience, the negotiation of factual and fictional imperatives, authorial visibility and invisibility, and moral and ethical conundrums are addressed, and a set of guiding principles emerges from this practice-led investigation.
Abstract:
This article takes as its starting point the observation that neoliberalism is a concept that is ‘oft-invoked but ill-defined’. It provides a taxonomy of uses of the term neoliberalism, including: (1) an all-purpose denunciatory category; (2) ‘the way things are’; (3) an institutional framework characterizing particular forms of national capitalism, most notably the Anglo-American ones; (4) a dominant ideology of global capitalism; (5) a form of governmentality and hegemony; and (6) a variant within the broad framework of liberalism as both theory and policy discourse. It is argued that this sprawling set of definitions is not mutually compatible, and that uses of the term need to be dramatically narrowed from its current association with anything and everything that a particular author may find objectionable. In particular, it is argued that the uses of the term by Michel Foucault in his 1978–9 lectures, found in The Birth of Biopolitics, are not particularly compatible with its more recent status as a variant of dominant ideology or hegemony theories. It instead proposes understanding neoliberalism in terms of historical institutionalism, with Foucault’s account of historical change complementing Max Weber’s work identifying the distinctive economic sociology of national capitalisms.
Abstract:
This study reports on the use of the Manchester Driver Behaviour Questionnaire (DBQ) to examine the self-reported driving behaviours of a large sample of Australian fleet drivers (N = 3414). Surveys were completed by employees before they commenced a one-day safety workshop intervention. Factor analysis techniques identified a three-factor solution similar to previous research, comprising: (a) errors, (b) highway-code violations and (c) aggressive driving violations. Two items traditionally associated with highway-code violations were found to be associated with aggressive driving behaviours in the current sample. Multivariate analyses revealed that exposure to the road, errors and self-reported offences predicted crashes at work in the last 12 months, while gender, highway violations and crashes predicted offences incurred while at work. Importantly, those who received more fines at work were at an increased risk of crashing the work vehicle. Overall, however, the DBQ demonstrated limited efficacy in predicting these two outcomes. This paper outlines the major findings of the study in regard to identifying and predicting aberrant driving behaviours and also highlights implications for the future use of the DBQ within fleet settings.
Abstract:
We investigate whether framing effects on voluntary contributions are significant in a provision point mechanism. Our results show that framing significantly affects individuals of the same type: cooperative individuals appear to be more cooperative in the public bads game than in the public goods game, whereas individualistic subjects appear to be less cooperative in the public bads game than in the public goods game. At the aggregate level, pooling all individuals, the data suggest that framing effects are negligible, which contrasts with the established result.
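For context, a standard provision point public goods payoff can be written as follows (a textbook form with assumed notation, not necessarily the exact parameterisation used in these experiments):

\[
\pi_i =
\begin{cases}
e_i - c_i + v_i, & \text{if } \sum_{j=1}^{n} c_j \ge P,\\
e_i - c_i, & \text{otherwise,}
\end{cases}
\]

where \(e_i\) is player \(i\)'s endowment, \(c_i\) the voluntary contribution, \(v_i\) the value of the provided good, and \(P\) the provision point; the public bads framing presents the same incentives as losses to be avoided rather than gains to be obtained.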
Abstract:
Spatial data are now prevalent in a wide range of fields including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data via discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data with large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. One should generally consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
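As a rough illustration of the discretization step these grid-based approaches share (a minimal sketch, not code from the paper; the grid resolution and unit-square study window are assumptions), points can be binned into cell counts before any hierarchical model is fitted:

```python
# Minimal sketch: discretize a spatial point pattern into grid-cell counts,
# the data format assumed by grid-based hierarchical spatial models.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical point-based data on a unit-square study region.
points = rng.uniform(0.0, 1.0, size=(500, 2))

# Discretize the study region into a 20 x 20 grid and count points per cell.
n_cells = 20
counts, x_edges, y_edges = np.histogram2d(
    points[:, 0], points[:, 1],
    bins=n_cells, range=[[0.0, 1.0], [0.0, 1.0]],
)

# Each cell count would then be modelled, e.g. as Poisson with a latent
# log-intensity surface (GMRF prior or discretized log Gaussian Cox process).
print(counts.shape, int(counts.sum()))
```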
Abstract:
"Every year deliberately lit fires rage across Indonesia. They destroy pristine rainforest, endanger orangutans and contribute to climate change. A young carbon trading entrepreneur goes in search of a solution." "Dorjee Sun, a young Australian Entrepreneur, believes there's money to be made from protecting rainforests in Indonesia, saving the orangutan from extinction and making a real impact on climate change. Armed with a laptop and a backpack, he sets out across the globe to find investors in his carbon trading scheme. It is a battle against time. Achmadi, the palm oil farmer is ready to set fire to his land to plant more palm oil, and Lone's orangutan centre has reached crisis point with over 600 orangutans rescued from the fires. The Burning Season is an eco-thriller about a young man not afraid to confront the biggest challenge of our time."
Abstract:
Quantifying the impact of biochemical compounds on collective cell spreading is an essential element of drug design, with various applications including developing treatments for chronic wounds and cancer. Scratch assays are a technically simple and inexpensive method used to study collective cell spreading; however, most previous interpretations of scratch assays are qualitative and do not provide estimates of the cell diffusivity, D, or the cell proliferation rate, λ. Estimating D and λ is important for investigating the efficacy of a potential treatment and provides insight into the mechanism through which the potential treatment acts. While a few methods for estimating D and λ have been proposed, these previous methods lead to point estimates of D and λ, and provide no insight into the uncertainty in these estimates. Here, we compare various types of information that can be extracted from images of a scratch assay, and quantify D and λ using discrete computational simulations and approximate Bayesian computation. We show that it is possible to robustly recover estimates of D and λ from synthetic data, as well as a new set of experimental data. For the first time, our approach also provides a method to estimate the uncertainty in our estimates of D and λ. We anticipate that our approach can be generalized to deal with more realistic experimental scenarios in which we are interested in estimating D and λ, as well as additional relevant parameters such as the strength of cell-to-cell adhesion or the strength of cell-to-substrate adhesion.
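To make the inference idea concrete, here is a minimal rejection-ABC sketch (not the paper's implementation: the simulator, summary statistics, priors and tolerance below are stand-in assumptions) showing how accepted draws of (D, λ) carry uncertainty rather than a single point estimate:

```python
# Minimal rejection-ABC sketch: sample (D, lam) from priors, simulate summary
# statistics with a stand-in model, and keep draws whose summaries are close
# to the "observed" ones; the accepted draws approximate the posterior.
import numpy as np

rng = np.random.default_rng(0)

def toy_summaries(D, lam, t=24.0):
    # Stand-in for the discrete scratch-assay simulation: summaries are a
    # diffusive spread length scale and a proliferation-driven density ratio.
    return np.array([np.sqrt(4.0 * D * t), np.exp(lam * t)])

observed = toy_summaries(D=500.0, lam=0.05)   # pretend image-derived summaries

accepted = []
for _ in range(20000):
    D = rng.uniform(100.0, 2000.0)            # assumed prior on diffusivity
    lam = rng.uniform(0.0, 0.1)               # assumed prior on proliferation rate
    sim = toy_summaries(D, lam)
    if np.linalg.norm((sim - observed) / observed) < 0.1:   # assumed tolerance
        accepted.append((D, lam))

accepted = np.array(accepted)
# Mean and spread of the accepted draws give estimates of D and lam together
# with a measure of their uncertainty, not just point estimates.
print(accepted.mean(axis=0), accepted.std(axis=0))
```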
Abstract:
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach because matching under dramatic appearance changes is brittle and difficult. Point feature detectors are fixed and rigid procedures which pass over an image examining small, low-level structure such as corners or blobs, and they apply the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior robustness in exchange for reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions.
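The central data structure is a spatially indexed bank of place-specific detectors; the sketch below illustrates that organisation only (the names, types and scoring interface are hypothetical, not the authors' code):

```python
# Sketch of the data-structure idea only: a bank of place-specific "scene
# signature" detectors indexed by map position, so that a coarse position
# prior selects which detectors are run against the live image.
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

Image = object  # placeholder for whatever image representation the pipeline uses

@dataclass
class SceneSignatureDetector:
    place_id: int
    position: Tuple[float, float]          # metric position along the route
    score: Callable[[Image], float]        # hypothetical match-score function

@dataclass
class DetectorBank:
    detectors: List[SceneSignatureDetector] = field(default_factory=list)

    def add(self, detector: SceneSignatureDetector) -> None:
        self.detectors.append(detector)

    def localise(self, prior_xy: Tuple[float, float], radius: float,
                 image: Image) -> Optional[SceneSignatureDetector]:
        # Run only the detectors curated for places near the position prior
        # and return the best-scoring one as the localisation hypothesis.
        nearby = [d for d in self.detectors
                  if (d.position[0] - prior_xy[0]) ** 2 +
                     (d.position[1] - prior_xy[1]) ** 2 <= radius ** 2]
        return max(nearby, key=lambda d: d.score(image), default=None)
```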
Abstract:
Detailed knowledge of the past history of an active volcano is crucial for the prediction of the timing, frequency and style of future eruptions, and for the identification of potentially at-risk areas. Subaerial volcanic stratigraphies are often incomplete, due to a lack of exposure, or to burial and erosion by subsequent eruptions. However, many volcanic eruptions produce widely-dispersed explosive products that are frequently deposited as tephra layers in the sea. Cores of marine sediment therefore have the potential to provide more complete volcanic stratigraphies, at least for explosive eruptions. Nevertheless, problems such as bioturbation and dispersal by currents affect the preservation and subsequent detection of marine tephra deposits. Consequently, cryptotephras, in which tephra grains are not sufficiently concentrated to form layers that are visible to the naked eye, may be the only record of many explosive eruptions. Additionally, thin, reworked deposits of volcanic clasts transported by floods and landslides, or during pyroclastic density currents, may be incorrectly interpreted as tephra fallout layers, leading to the construction of inaccurate records of volcanism. This work uses samples from the volcanic island of Montserrat as a case study to test different techniques for generating volcanic eruption records from marine sediment cores, with particular relevance to cores sampled in relatively proximal settings (i.e. tens of kilometres from the volcanic source) where volcaniclastic material may form a pervasive component of the sedimentary sequence. Visible volcaniclastic deposits identified by sedimentological logging were used to test the effectiveness of potential alternative volcaniclastic-deposit detection techniques, including point counting of grain types (component analysis), glass or mineral chemistry, colour spectrophotometry, grain size measurements, XRF core scanning, magnetic susceptibility and X-radiography. This study demonstrates that a set of time-efficient, non-destructive and high-spatial-resolution analyses (e.g. XRF core scanning and magnetic susceptibility) can be used effectively to detect potential cryptotephra horizons in marine sediment cores. Once these horizons have been sampled, microscope image analysis of volcaniclastic grains can be used to discriminate between tephra fallout deposits and other volcaniclastic deposits, using specific criteria related to clast morphology and sorting. Standard practice should be employed when analysing marine sediment cores to accurately identify both visible tephra and cryptotephra deposits, and to distinguish fallout deposits from other volcaniclastic deposits.
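As a loose illustration of how core-scan signals can flag candidate cryptotephra horizons for sampling (a minimal sketch under assumed thresholds and synthetic data, not the study's workflow), anomalous depths can be picked out with a robust z-score:

```python
# Minimal sketch: flag candidate cryptotephra horizons as depths where a
# core-scan signal (e.g. an XRF element ratio or magnetic susceptibility)
# departs strongly from the down-core background.
import numpy as np

def flag_candidate_horizons(depth_cm, signal, z_threshold=3.0):
    """Return depths whose robust z-score exceeds z_threshold.
    The threshold is an assumed value, not taken from the study."""
    signal = np.asarray(signal, dtype=float)
    median = np.median(signal)
    mad = np.median(np.abs(signal - median)) or 1.0   # robust scale; guard zero
    z = 0.6745 * (signal - median) / mad
    return np.asarray(depth_cm)[z > z_threshold]

# Hypothetical usage: flagged depths would then be sampled and the grains
# examined microscopically to separate fallout tephra from reworked deposits.
depths = np.arange(0, 500, 1.0)               # depth in cm (assumed spacing)
ms = np.random.default_rng(2).normal(10, 1, depths.size)
ms[200:203] += 15                             # synthetic susceptibility spike
print(flag_candidate_horizons(depths, ms))
```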