949 results for Optimized cooling


Relevance:

10.00%

Publisher:

Abstract:

A Split System Approach (SSA) based methodology is presented to assist in making optimal Preventive Maintenance (PM) decisions for serial production lines. The methodology treats a production line as a complex series system with multiple PM actions over multiple intervals. Both risk-related cost and maintenance-related cost are factored into the methodology as either deterministic or random variables. This SSA-based methodology enables Asset Management (AM) decisions to be optimized considering a variety of factors including failure probability, failure cost, maintenance cost, PM performance, and the type of PM strategy. The application of this new methodology and an evaluation of the effects of these factors on PM decisions are demonstrated using an example. The results of this work show that the performance of a PM strategy can be measured by its Total Expected Cost Index (TECI). The optimal PM interval is dependent on TECI, PM performance and the type of PM strategy. These factors are interrelated. Generally, it was found that a trade-off between reliability and the number of PM actions needs to be made so that one can minimize Total Expected Cost (TEC) for asset maintenance.
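
A minimal sketch of the cost trade-off described above, assuming a Weibull failure model and fixed cost figures that are purely illustrative (the thesis' SSA formulation and TECI are not reproduced here): the expected cost rate of a candidate PM interval combines the maintenance-related cost of each PM action with the risk-related cost of failures expected before that action, and the interval with the lowest rate is selected.

```python
import math

# Illustrative assumptions (not taken from the thesis): a Weibull failure
# model and fixed cost figures for one machine in the serial line.
BETA, ETA = 2.5, 1000.0                  # Weibull shape and scale (hours)
COST_PM, COST_FAILURE = 500.0, 8000.0    # maintenance-related vs. risk-related cost

def failure_probability(t_hours: float) -> float:
    """Probability of at least one failure before the next PM action."""
    return 1.0 - math.exp(-((t_hours / ETA) ** BETA))

def expected_cost_rate(interval: float) -> float:
    """Total expected cost per operating hour for a given PM interval."""
    p_fail = failure_probability(interval)
    return (COST_PM + COST_FAILURE * p_fail) / interval

if __name__ == "__main__":
    candidates = range(100, 2001, 100)
    for t in candidates:
        print(f"PM every {t:4d} h -> expected cost {expected_cost_rate(t):7.3f} per hour")
    best = min(candidates, key=expected_cost_rate)
    print(f"Lowest expected cost rate at a {best} h PM interval")
```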

Relevance:

10.00%

Publisher:

Abstract:

Information Retrieval is an important albeit imperfect component of information technologies. A problem of insufficient diversity of retrieved documents is one of the primary issues studied in this research. This study shows that this problem leads to a decrease of precision and recall, traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued. This is done by increasing the diversity of retrieved documents. This study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversification of retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed but is actively used). Retrieval precision of the search session should be optimized with a multistage stochastic programming model to accomplish the aim. However, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models are based on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual control of the topic-based IR system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC collections of documents. The Wikipedia experiment revealed that the dual control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM-25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR. The main reason for this was insufficient quality of the generated clusters in the TREC collection, which violated the underlying assumption.
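
The adaptive dual-control model itself is not reproduced in the abstract, but the contrast it draws with Probability Ranking Principle (PRP) style ranking can be sketched with a simple greedy re-ranking that trades relevance against redundancy, in the spirit of maximal marginal relevance rather than the thesis' method; the relevance scores, document vectors and trade-off weight below are made-up assumptions.

```python
# A minimal sketch of diversity-aware re-ranking, in the spirit of maximal
# marginal relevance (MMR); it is not the adaptive dual-control model of the
# thesis. Relevance scores, vectors and the 0.7 trade-off weight are made up.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def prp_ranking(docs):
    """Probability Ranking Principle: order by relevance probability alone."""
    return sorted(docs, key=lambda d: d["relevance"], reverse=True)

def diversified_ranking(docs, trade_off=0.7):
    """Greedy re-ranking that trades relevance against similarity to results
    already selected, which increases topical diversity (and S-recall)."""
    remaining, selected = list(docs), []
    while remaining:
        def score(d):
            redundancy = max((cosine(d["vec"], s["vec"]) for s in selected), default=0.0)
            return trade_off * d["relevance"] - (1 - trade_off) * redundancy
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
    return selected

docs = [
    {"id": "d1", "relevance": 0.90, "vec": np.array([1.00, 0.10, 0.0])},
    {"id": "d2", "relevance": 0.85, "vec": np.array([0.95, 0.15, 0.0])},  # near-duplicate of d1
    {"id": "d3", "relevance": 0.60, "vec": np.array([0.00, 1.00, 0.2])},  # different topic
]
print([d["id"] for d in prp_ranking(docs)])          # ['d1', 'd2', 'd3']
print([d["id"] for d in diversified_ranking(docs)])  # d3 promoted above the near-duplicate
```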

Relevance:

10.00%

Publisher:

Abstract:

In this study, cell sheets comprising multilayered porcine bone marrow stromal cells (BMSC) were assembled with fully interconnected scaffolds made from medical-grade polycaprolactone–calcium phosphate (mPCL–CaP), for the engineering of structural and functional bone grafts. The BMSC sheets were harvested from culture flasks and wrapped around pre-seeded composite scaffolds. The layered cell sheets integrated well with the scaffold/cell construct and remained viable, with mineralized nodules visible both inside and outside the scaffold for up to 8 weeks of culture. Cells within the constructs underwent classical in vitro osteogenic differentiation with the associated elevation of alkaline phosphatase activity and bone-related protein expression. In vivo, two sets of cell-sheet-scaffold/cell constructs were transplanted under the skin of nude rats. The first set of constructs (554mm3) was assembled with BMSC sheets and cultured for 8 weeks before implantation. The second set of constructs (10104mm3) was implanted immediately after assembly with BMSC sheets, with no further in vitro culture. For both groups, neo-cortical and well-vascularised cancellous bone were formed within the constructs with up to 40% bone volume. Histological and immunohistochemical examination revealed that neo-bone tissue formed from the pool of seeded BMSC and that bone formation followed predominantly an endochondral pathway, with woven bone matrix subsequently maturing into fully mineralized compact bone exhibiting the histological markers of native bone. These findings demonstrate that large bone tissues similar to native bone can be regenerated utilizing BMSC sheet techniques in conjunction with composite scaffolds whose structures are optimized from a mechanical, nutrient transport and vascularization perspective.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is about the derivation of the addition law on an arbitrary elliptic curve and efficiently adding points on this elliptic curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. Mastered by 19th-century mathematicians, the theory of elliptic curves has remained an active area of study ever since. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller’s and Koblitz’s proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting in the late 1990s, the emergence of the ECC market has boosted research into the computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages and their capabilities. Although the group law is unique, its evaluation can be performed using many (in fact infinitely many) formulae. As the second step, this thesis seeks the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands. The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. A list of these forms and their defining equations is given as follows: (a) Short Weierstrass form, y² = x³ + ax + b; (b) Extended Jacobi quartic form, y² = dx⁴ + 2ax² + 1; (c) Twisted Hessian form, ax³ + y³ + 1 = dxy; (d) Twisted Edwards form, ax² + y² = 1 + dx²y²; (e) Twisted Jacobi intersection form, bs² + c² = 1, as² + d² = 1. These forms are the most promising candidates for efficient computation and are thus considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis. - Related results from the literature are brought together and revisited. In most cases, several previously missed formulae, algorithms, and efficient point representations are discovered. - Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs as long as the output is also an affine point in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities.
- Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speed of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms is improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time. - An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments. - All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
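
As a concrete illustration of the group law on the short Weierstrass form y² = x³ + ax + b, the sketch below implements the textbook affine chord-and-tangent addition over a small prime field; it is not one of the optimized or unified formulae contributed by the thesis, and the toy curve parameters are chosen only for demonstration.

```python
# Textbook affine group law on y^2 = x^3 + a*x + b over GF(p). This is the
# classical chord-and-tangent rule, not the optimized formulae of the thesis.
# The small curve below (p = 97, a = 2, b = 3) is an illustrative assumption.
P, A, B = 97, 2, 3
INF = None  # point at infinity, the identity of the group

def add(p1, p2):
    if p1 is INF:
        return p2
    if p2 is INF:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF                                          # P + (-P) = identity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P           # chord slope
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

def scalar_mul(k, pt):
    """Double-and-add; point additions dominate the cost, hence the thesis focus."""
    acc = INF
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (3, 6)   # 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97), so G lies on the curve
print(scalar_mul(5, G))
```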

Relevance:

10.00%

Publisher:

Abstract:

The present study investigated metabolic responses to fat and carbohydrate ingestion in lean male individuals consuming a habitual diet high or low in fat. Twelve high-fat phenotypes (HF) and twelve low-fat phenotypes (LF) participated in the study. Energy intake and macronutrient intake variables were assessed using a food frequency questionnaire. Resting metabolic rate (RMR), postprandial metabolic rate and substrate oxidation (respiratory quotient; RQ) were measured by indirect calorimetry. HF had a significantly higher RMR and higher resting heart rate than LF. These variables remained higher in HF following the macronutrient challenge. In all subjects the carbohydrate load increased metabolic rate and heart rate significantly more than the fat load. Fat oxidation (indicated by a low RQ) was significantly higher in HF than in LF following the fat load; the ability to oxidise a high carbohydrate load did not differ between the groups. In lean male subjects, habitual consumption of a high-fat diet was associated with increased energy expenditure at rest and relatively higher fat oxidation in response to a high-fat load; these observations may be partly responsible for maintaining energy balance on a high-fat (high-energy) diet. In contrast, habitual low fat consumption was associated with relatively lower energy expenditure at rest and lower fat oxidation, which has implications for weight gain if high-fat foods or meals are periodically introduced into the diet.

Relevance:

10.00%

Publisher:

Abstract:

The pore architecture of scaffolds is known to play a critical role in tissue engineering as it provides the vital framework for seeded cells to organize into a functioning tissue. In this report we have investigated the effects of different concentrations of silk fibroin protein on three-dimensional (3D) scaffold pore microstructure. Four pore size ranges of silk fibroin scaffolds were made by the freeze drying technique, with the pore sizes ranging from 50 to 300 µm. The pore sizes of the scaffolds decreased as the concentration of fibroin protein increased. Human bone marrow mesenchymal stromal cells (BMSC) transfected with the BMP7 gene were cultured in these scaffolds. A cell viability colorimetric assay, alkaline phosphatase assay and reverse transcription-polymerase chain reaction were performed to analyze the effect of pore size on cell growth, the secretion of extracellular matrix (ECM) and osteogenic differentiation. Cell migration in 3D scaffolds was confirmed by confocal microscopy. Calvarial defects in SCID mice were used to determine the bone forming ability of the silk fibroin scaffolds incorporating BMSC expressing BMP7. The results showed that BMSC expressing BMP7 preferred a pore size between 100 and 300 µm in silk fibroin protein fabricated scaffolds, with better cell proliferation and ECM production. Furthermore, in vivo transplantation of the silk fibroin scaffolds combined with BMSC expressing BMP7 induced new bone formation. This study has shown that an optimized pore architecture of silk fibroin scaffolds can modulate the bioactivity of BMP7-transfected BMSC in bone formation.

Relevance:

10.00%

Publisher:

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organization due to the interactions between its internal elements such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of the AM processes, it has been proposed that in order to optimize asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined and optimized and are ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research in exploring the possible benefits of their cross-disciplinary applications. This research therefore investigates these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it will investigate appropriate methodologies for applying workflow techniques to AM frameworks. One of the benefits of applying workflow models to AM processes is to adapt and enable both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and cope with changes that occur to the process during the enactment. So far, little literature can be found documenting a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a conscientious workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision making processes. The third stage is to develop an intermediate ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM.
Given that the information in the AM decision making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice. Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision making process (e.g. in the case of a building construction or a power plant maintenance), is able to support the automation of such a process in a more elaborate way.
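
The thesis targets deployment in a full workflow management system, which cannot be reproduced here; the toy engine below only illustrates the underlying idea of capturing an AM decision-making process as an explicit, executable process definition in which each task routes the case to the next task. The task names, the routing condition and the `Workflow` class are illustrative assumptions, not the methodology developed in the research.

```python
# A minimal, illustrative executable process definition for an AM decision
# process. Task names and the routing condition are assumptions; a real
# deployment would use a workflow management system, not this toy engine.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class Workflow:
    tasks: Dict[str, Callable[[dict], Optional[str]]] = field(default_factory=dict)

    def task(self, name):
        def register(fn):
            self.tasks[name] = fn
            return fn
        return register

    def run(self, start: str, case: dict) -> dict:
        current = start
        while current is not None:
            print(f"executing task: {current}")
            current = self.tasks[current](case)   # each task returns the next task
        return case

wf = Workflow()

@wf.task("collect_condition_data")
def collect(case):
    case["condition_index"] = 0.42      # stand-in for inspection data
    return "assess_risk"

@wf.task("assess_risk")
def assess(case):
    case["risk"] = "high" if case["condition_index"] < 0.5 else "low"
    return "plan_renewal" if case["risk"] == "high" else "schedule_routine_pm"

@wf.task("plan_renewal")
def renewal(case):
    case["decision"] = "renew asset"
    return None

@wf.task("schedule_routine_pm")
def routine(case):
    case["decision"] = "routine preventive maintenance"
    return None

print(wf.run("collect_condition_data", {}))
```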

Relevance:

10.00%

Publisher:

Abstract:

There has been a worldwide trend to increase axle loads and train speeds. This means that railway track degradation will be accelerated, and track maintenance costs will be increased significantly. There is a need to investigate the consequences of increasing traffic load. The aim of the research is to develop a model for the analysis of physical degradation of railway tracks in response to changes in traffic parameters, especially increased axle loads and train speeds. This research has developed an integrated track degradation model (ITDM) by integrating several models into a comprehensive framework. Mechanistic relationships for track degradation have been used wherever possible in each of the models contained in ITDM. This overcomes the deficiency of the traditional statistical track models which rely heavily on historical degradation data, which is generally not available in many railway systems. In addition, statistical models lack the flexibility of incorporating future changes in traffic patterns or maintenance practices. The research starts with reviewing railway track related studies both in Australia and overseas to develop a comprehensive understanding of track performance under various traffic conditions. Existing railway related models are then examined for their suitability for track degradation analysis for Australian situations. The ITDM model is subsequently developed by modifying suitable existing models, and developing new models where necessary. The ITDM model contains four interrelated submodels for rails, sleepers, ballast and subgrade, and track modulus. The rail submodel is for rail wear analysis and is developed from a theoretical concept. The sleeper submodel is for timber sleeper damage prediction. The submodel is developed by modifying and extending an existing model developed elsewhere. The submodel has also incorporated an analysis for the likelihood of concrete sleeper cracking. The ballast and subgrade submodel is evolved from a concept developed in the USA. Substantial modifications and improvements have been made. The track modulus submodel is developed from a conceptual method. Corrections for more global track conditions have been made. The integration of these submodels into one comprehensive package has enabled the interaction between individual track components to be taken into account. This is done by calculating wheel load distribution with time and updating track conditions periodically in the process of track degradation simulation. A Windows-based computer program associated with ITDM has also been developed. The program enables the user to carry out analysis of degradation of individual track components and to investigate the interrelationships between these track components and their deterioration. The successful implementation of this research has provided essential information for prediction of increased maintenance as a consequence of railway track degradation. The model, having been presented at various conferences and seminars, has attracted wide interest. It is anticipated that the model will be put into practical use among Australian railways, enabling track maintenance planning to be optimized and potentially saving Australian railway systems millions of dollars in operating costs.
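
The ITDM submodels are mechanistic and are not given in the abstract; the sketch below only illustrates the integration idea of advancing component submodels together, period by period, so that traffic parameters and the deteriorating state of one component influence the others. Every coefficient and update rule in it is an assumption made for illustration.

```python
# Illustrative integration loop in the spirit of ITDM: per-period updates of
# rail wear, ballast settlement and sleeper damage, coupled through a shared
# traffic load. All coefficients and update rules below are assumptions; the
# actual ITDM submodels are mechanistic and far more detailed.
from dataclasses import dataclass

@dataclass
class TrackState:
    rail_wear_mm: float = 0.0
    sleeper_damage: float = 0.0          # 0 = new, 1 = requires replacement
    ballast_settlement_mm: float = 0.0

def simulate(axle_load_t: float, speed_kmh: float, mgt_per_year: float, years: int):
    state = TrackState()
    for year in range(1, years + 1):
        # Dynamic load factor grows with speed (illustrative relationship).
        dynamic_factor = 1.0 + 0.005 * speed_kmh
        effective_load = axle_load_t * dynamic_factor

        # Toy submodel updates, each driven by traffic and the others' state.
        state.rail_wear_mm += 0.002 * mgt_per_year * (effective_load / 25.0)
        state.ballast_settlement_mm += 0.5 * mgt_per_year ** 0.5 * (effective_load / 25.0)
        state.sleeper_damage += 0.01 * (effective_load / 25.0) * (1 + state.ballast_settlement_mm / 50.0)

        print(f"year {year}: wear={state.rail_wear_mm:.2f} mm, "
              f"settlement={state.ballast_settlement_mm:.1f} mm, "
              f"sleeper damage index={state.sleeper_damage:.2f}")
    return state

# Compare a baseline scenario with an increased axle load / speed scenario.
simulate(axle_load_t=25, speed_kmh=80, mgt_per_year=20, years=5)
simulate(axle_load_t=30, speed_kmh=100, mgt_per_year=20, years=5)
```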

Relevance:

10.00%

Publisher:

Abstract:

Shell structures find use in many fields of engineering, notably structural, mechanical, aerospace and nuclear-reactor disciplines. Axisymmetric shell structures are used as dome-type roofs, hyperbolic cooling towers, silos for the storage of grain, oil and industrial chemicals, and water tanks. Despite their thin walls, these structures derive strength from their curvature. The generally high strength-to-weight ratio of the shell form, combined with its inherent stiffness, has formed the basis of this vast application. With the advent of computation technology, the finite element method and optimisation techniques, structural engineers have extremely versatile tools for the optimum design of such structures. Optimisation of shell structures can result not only in improved designs, but also in a large saving of material. The finite element method, being a general numerical procedure that can treat any shell problem to any desired degree of accuracy, requires several runs in order to obtain a complete picture of the effect of one parameter on the shell structure. This redesign / re-analysis cycle has been achieved via structural optimisation in the present research, and MSC/NASTRAN (a commercially available finite element code) has been used in this context for volume optimisation of axisymmetric shell structures under axisymmetric and non-axisymmetric loading conditions. The parametric study of different axisymmetric shell structures has revealed that the hyperbolic shape is the most economical solution for shells of revolution. To establish this, axisymmetric loading (self-weight and hydrostatic pressure) and non-axisymmetric loading (wind pressure and earthquake dynamic forces) have been modelled using a graphical pre- and post-processor (PATRAN), and analysis has been performed with two finite element codes (ABAQUS and NASTRAN). Numerical model verification studies have been performed, and the optimum material volume required in the walls of cylindrical, conical, parabolic and hyperbolic forms of axisymmetric shell structures has been evaluated and reviewed. Free vibration and transient earthquake analysis of hyperbolic shells have been performed once it was established that the hyperbolic shape is the most economical under all possible loading conditions. The effects of important parameters of hyperbolic shell structures (shell wall thickness, height and curvature) have been evaluated, and empirical relationships have been developed to estimate an approximate value of the lowest (first) natural frequency of vibration. The outcome of this thesis has been the generation of new research information on performance characteristics of axisymmetric shell structures that will facilitate improved designs of shells with better choice of shapes and enhanced levels of economy and performance. Keywords: axisymmetric shell structures, finite element analysis, volume optimisation, free vibration, transient response.
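
The volume comparisons in the thesis come from finite element optimisation in NASTRAN; as a much simpler illustration of why wall-material volume depends on the meridian shape, the sketch below numerically integrates the thin-shell wall volume V ≈ t ∫ 2π r(z) √(1 + r′(z)²) dz for cylindrical and hyperbolic profiles of revolution. The dimensions, profiles and uniform thickness are assumptions, and no structural analysis is performed.

```python
# Wall-material volume of a thin shell of revolution with uniform thickness t:
#   V ≈ t * ∫ 2π r(z) sqrt(1 + r'(z)^2) dz   (mid-surface area times thickness)
# Profiles, dimensions and thickness below are illustrative only; the thesis
# obtains its volumes from finite element models, not from this formula.
import math

def shell_volume(radius_fn, z0, z1, thickness, n=2000):
    dz = (z1 - z0) / n
    area = 0.0
    for i in range(n):
        z = z0 + (i + 0.5) * dz
        r = radius_fn(z)
        drdz = (radius_fn(z + 1e-6) - radius_fn(z - 1e-6)) / 2e-6   # numerical slope
        area += 2 * math.pi * r * math.sqrt(1 + drdz ** 2) * dz
    return area * thickness

H, T = 120.0, 0.2                  # assumed height and wall thickness (m)

def cylinder(z):
    return 40.0                    # constant radius (m)

def hyperboloid(z):
    # Hyperboloid of revolution with an (assumed) 30 m throat radius at z = 80 m.
    return 30.0 * math.sqrt(1 + ((z - 80.0) / 60.0) ** 2)

print(f"cylindrical shell: {shell_volume(cylinder, 0, H, T):8.0f} m^3 of wall material")
print(f"hyperbolic shell:  {shell_volume(hyperboloid, 0, H, T):8.0f} m^3 of wall material")
```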

Relevance:

10.00%

Publisher:

Abstract:

Nitrous oxide (N2O) is a potent agricultural greenhouse gas (GHG). More than 50% of the global anthropogenic N2O flux is attributable to emissions from soil, primarily due to large fertilizer nitrogen (N) applications to corn and other non-leguminous crops. Quantification of the trade-offs between N2O emissions, fertilizer N rate, and crop yield is an essential requirement for informing management strategies aiming to reduce the agricultural sector GHG burden, without compromising productivity and producer livelihood. There is currently great interest in developing and implementing agricultural GHG reduction offset projects for inclusion within carbon offset markets. Nitrous oxide, with a global warming potential (GWP) of 298, is a major target for these endeavours due to the high payback associated with its emission prevention. In this paper we use robust quantitative relationships between fertilizer N rate and N2O emissions, along with a recently developed approach for determining economically profitable N rates for optimized crop yield, to propose a simple, transparent, and robust N2O emission reduction protocol (NERP) for generating agricultural GHG emission reduction credits. This NERP has the advantage of providing an economic and environmental incentive for producers and other stakeholders, necessary requirements in the implementation of agricultural offset projects.
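
The protocol itself is defined in the paper; the one piece that can be safely illustrated here is the offset bookkeeping: converting an avoided quantity of N2O into CO2-equivalent credits using the stated GWP of 298, with the standard 44/28 molecular-mass factor for converting N2O-N to N2O. The per-hectare reduction and the farm area in the example are assumptions.

```python
# Conversion of an avoided N2O-N flux into CO2-equivalent offset credits.
# The GWP of 298 is taken from the abstract; the 44/28 molecular-mass factor
# converts N2O-N to N2O. The per-hectare reduction and area are assumptions.
GWP_N2O = 298.0
N2O_N_TO_N2O = 44.0 / 28.0        # molar masses: N2O (44 g/mol) vs. 2 N (28 g/mol)

def co2e_credits(n2o_n_reduction_kg_per_ha: float, hectares: float) -> float:
    """Tonnes of CO2-equivalent generated by an avoided N2O-N emission."""
    n2o_kg = n2o_n_reduction_kg_per_ha * N2O_N_TO_N2O * hectares
    return n2o_kg * GWP_N2O / 1000.0   # kg -> tonnes CO2e

# e.g. an (assumed) 0.6 kg N2O-N/ha/yr reduction over a 500 ha corn operation
print(f"{co2e_credits(0.6, 500):.1f} t CO2e per year")
```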

Relevance:

10.00%

Publisher:

Abstract:

Nitrous oxide (N2O) is a major greenhouse gas (GHG) product of intensive agriculture. Fertilizer nitrogen (N) rate is the best single predictor of N2O emissions in row-crop agriculture in the US Midwest. We use this relationship to propose a transparent, scientifically robust protocol that can be utilized by developers of agricultural offset projects for generating fungible GHG emission reduction credits for the emerging US carbon cap and trade market. By coupling predicted N2O flux with the recently developed maximum return to N (MRTN) approach for determining economically profitable N input rates for optimized crop yield, we provide the basis for incentivizing N2O reductions without affecting yields. The protocol, if widely adopted, could reduce N2O from fertilized row-crop agriculture by more than 50%. Although other management and environmental factors can influence N2O emissions, fertilizer N rate can be viewed as a single unambiguous proxy—a transparent, tangible, and readily manageable commodity. Our protocol addresses baseline establishment, additionality, permanence, variability, and leakage, and provides for producers and other stakeholders the economic and environmental incentives necessary for adoption of agricultural N2O reduction offset projects.
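
Neither the MRTN calculation nor the fertilizer-rate/N2O relationship is reproduced in the abstract, so the sketch below only illustrates the protocol's logic with placeholder response functions: an assumed quadratic yield response from which a profit-maximising (MRTN-style) N rate is found, and an assumed exponential N2O response used to compare emissions at that rate against a higher business-as-usual rate. All coefficients are illustrative, not values from the protocol.

```python
# Illustration of the protocol's logic: find an economically optimal N rate
# (maximum return to N) from an assumed yield response, then compare N2O
# emissions at that rate against a higher business-as-usual rate. The yield
# and flux response functions and every coefficient are placeholder assumptions.
import math

GRAIN_PRICE = 0.18      # $ per kg grain (assumed)
N_PRICE = 1.10          # $ per kg N (assumed)

def yield_kg_ha(n_rate: float) -> float:
    """Assumed quadratic yield response to fertilizer N (kg grain/ha)."""
    return 6000 + 45.0 * n_rate - 0.12 * n_rate ** 2

def n2o_n_kg_ha(n_rate: float) -> float:
    """Assumed exponential N2O-N flux response to fertilizer N (kg N2O-N/ha/yr)."""
    return 0.6 * math.exp(0.008 * n_rate)

def return_to_n(n_rate: float) -> float:
    """Economic return used to locate the MRTN-style, profit-maximising rate."""
    return GRAIN_PRICE * yield_kg_ha(n_rate) - N_PRICE * n_rate

rates = range(0, 251)
mrtn_rate = max(rates, key=return_to_n)   # ~162 kg N/ha with these placeholder numbers
bau_rate = 200                            # assumed business-as-usual rate

avoided = n2o_n_kg_ha(bau_rate) - n2o_n_kg_ha(mrtn_rate)
print(f"profit-maximising (MRTN-style) rate: {mrtn_rate} kg N/ha")
print(f"avoided N2O-N relative to {bau_rate} kg N/ha: {avoided:.2f} kg/ha/yr")
```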

Relevance:

10.00%

Publisher:

Abstract:

Summary of Actions Towards Sustainable Outcomes

Environmental Issues / Principal Impacts
The increased growth of cities is intensifying its impact on people and the environment through:
• increased use of energy for the heating and cooling of more buildings, leading to urban heat islands and more greenhouse gas emissions
• increased amount of hard surfaces contributing to higher temperatures in cities and more stormwater runoff
• degraded air quality and noise impact
• reduced urban biodiversity
• compromised health and general well-being of people

Basic Strategies
In many design situations boundaries and constraints limit the application of cutting EDGe actions. In these circumstances designers should at least consider the following:
• Consider green roofs early in the design process in consultation with all stakeholders to enable maximised integration with building systems and to mitigate building cost (avoid constructing as a retrofit).
• Design the green roof as part of a building’s structural, mechanical and hydraulic systems: this could lead to structural efficiency, the ability to optimise cooling benefits and better integrated water recycling systems.
• Inform the selection of the type of green roof by considering its function, for example designing for social activity, required maintenance/access regime, recycling of water, habitat regeneration or a combination of uses.
• Evaluate existing surroundings to determine possible links to the natural environment and the choice of vegetation for the green roof, with availability of local plant supply and expertise.

Cutting EDGe Strategies
• Create green roofs to contribute positively to the environment through reduced urban heat island effect and building temperatures, improved stormwater quality, increased natural habitats, provision of social spaces and opportunity for increased local food supply.
• Maximise solar panel efficiency by incorporating panels with the design of the green roof.
• Integrate multiple functions for a single green roof such as grey water recycling, food production, more bio-diverse plantings, air quality improvement and provision of delightful spaces for social interaction.

Synergies & references
• BEDP Environment Design Guide DES 53: Roof and Facade Gardens; GEN 4: Positive Development – designing for Net Positive Impacts; TEC 26: Living Walls – a way to green the built environment
• Green Roofs Australia: www.greenroofs.wordpress.com
• International Green Roof Association: www.igra-world.com
• Green Roofs for Healthy Cities (USA): www.greenroofs.org
• Centre for Urban Greenery and Ecology (Singapore): http://research.cuge.com.sg

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a Genetic Algorithms (GA) approach to searching for the optimized path for a class of transportation problems. The formulation of the problems for suitable application of the GA will be discussed. Exchanging genetic information in the sense of neighborhoods will be introduced for generation reproduction. The performance of the GA will be evaluated by computer simulation. The proposed algorithm uses a simple coding with population size 1 and converges to reasonable optimality within several minutes.
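
The paper's own encoding, neighborhood-based exchange of genetic information and population size of 1 are not detailed in the abstract; the sketch below shows a generic permutation-encoded GA for a small routing instance (order crossover plus swap mutation) purely to make the formulation concrete. The distance matrix, operators and parameters are assumptions.

```python
# Generic permutation-encoded GA for a small routing instance. The encoding,
# neighborhood-based reproduction and population size 1 used in the paper are
# not reproduced here; operators, parameters and distances are assumptions.
import random

random.seed(0)
DIST = [   # symmetric distance matrix for 6 illustrative locations
    [0, 4, 7, 3, 8, 6],
    [4, 0, 5, 6, 2, 7],
    [7, 5, 0, 4, 6, 3],
    [3, 6, 4, 0, 5, 8],
    [8, 2, 6, 5, 0, 4],
    [6, 7, 3, 8, 4, 0],
]
N = len(DIST)

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def order_crossover(a, b):
    """Keep a slice of parent a, fill the rest in the order genes appear in b."""
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(tour, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        tour[i], tour[j] = tour[j], tour[i]   # swap two positions
    return tour

population = [random.sample(range(N), N) for _ in range(30)]
for generation in range(200):
    population.sort(key=tour_length)
    parents = population[:10]                          # truncation selection (elitist)
    children = [mutate(order_crossover(*random.sample(parents, 2)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = min(population, key=tour_length)
print(best, tour_length(best))
```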

Relevance:

10.00%

Publisher:

Abstract:

Polymer networks were prepared by photocross-linking fumaric acid monoethyl ester (FAME) functionalized, three-armed poly(D,L-lactide) oligomers using N-vinyl-2-pyrrolidone (NVP) as diluent and comonomer. The use of NVP together with FAME-functionalized oligomers resulted in copolymerization at high rates, and networks with gel contents in excess of 90% were obtained. The hydrophilicity of the poly(D,L-lactide) networks increases with increasing amounts of NVP; networks containing 50 wt% of NVP absorbed 40% water. As the amount of NVP was increased from 30 to 50 wt%, the Young's modulus after equilibration in water decreased from 0.8 to 0.2 GPa, as opposed to an increase from 1.5 to 2.1 GPa in the dry state. Mouse preosteoblasts readily adhered and spread onto all prepared networks. Using stereolithography, porous structures with a well-defined gyroid architecture were prepared from these novel materials. This allows the preparation of tissue engineering scaffolds with optimized pore architecture and tunable material properties.