967 results for optimized


Relevance: 10.00%

Abstract:

A Split System Approach (SSA) based methodology is presented to assist in making optimal Preventive Maintenance (PM) decisions for serial production lines. The methodology treats a production line as a complex series system with multiple PM actions over multiple intervals. Both risk-related cost and maintenance-related cost are factored into the methodology as either deterministic or random variables. This SSA-based methodology enables Asset Management (AM) decisions to be optimized considering a variety of factors, including failure probability, failure cost, maintenance cost, PM performance, and the type of PM strategy. The application of this new methodology and an evaluation of the effects of these factors on PM decisions are demonstrated using an example. The results of this work show that the performance of a PM strategy can be measured by its Total Expected Cost Index (TECI). The optimal PM interval depends on TECI, PM performance and the type of PM strategy; these factors are interrelated. Generally, it was found that a trade-off between reliability and the number of PM actions needs to be made in order to minimize the Total Expected Cost (TEC) of asset maintenance.
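As an illustration of the trade-off described above (a minimal sketch, not the SSA formulation itself: it assumes an additive cost model, a Weibull failure law renewed by each PM, and illustrative parameter values), one can scan candidate PM intervals for the one minimizing total expected cost:

```python
def total_expected_cost(interval, horizon, weibull_beta, weibull_eta,
                        failure_cost, pm_cost):
    """Total expected cost when PM is performed every `interval` hours.

    Assumes a Weibull failure law restored to as-good-as-new by each PM
    (a 'perfect PM' strategy); all parameter values are illustrative.
    """
    n_pm = horizon // interval
    # Expected failures per interval under a renewed Weibull process.
    failures_per_interval = (interval / weibull_eta) ** weibull_beta
    risk_cost = n_pm * failures_per_interval * failure_cost
    maintenance_cost = n_pm * pm_cost
    return risk_cost + maintenance_cost

# Scan candidate intervals: short intervals inflate maintenance cost,
# long intervals inflate risk cost -- the optimum balances the two.
candidates = range(100, 2001, 100)
best = min(candidates, key=lambda T: total_expected_cost(
    T, horizon=10_000, weibull_beta=2.0, weibull_eta=1000.0,
    failure_cost=5000.0, pm_cost=200.0))
```

With these illustrative numbers the scan selects a 200-hour interval: performing PM more often spends more on maintenance than it saves in expected failure cost, which is the reliability-versus-PM-count trade-off the abstract describes.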

Abstract:

Information Retrieval (IR) is an important albeit imperfect component of information technologies. A problem of insufficient diversity of retrieved documents is one of the primary issues studied in this research. This study shows that this problem leads to a decrease of precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued. This is done by increasing the diversity of retrieved documents. This study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the "bedrock" of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversification of retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed but is actively used). Retrieval precision of the search session should be optimized with a multistage stochastic programming model to accomplish this aim. However, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models are based on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual-control topic-based IR system (ADTIR) was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC collections of documents.
The Wikipedia experiment revealed that the dual control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM-25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR; the main reason for this was the insufficient quality of the clusters generated from the TREC collection, which violated an underlying assumption.
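The S-recall measure mentioned above (subtopic recall) can be sketched as the fraction of a topic's subtopics covered by the top-k results; the document labels below are hypothetical, for illustration only:

```python
def s_recall(ranked_doc_subtopics, all_subtopics, k):
    """Subtopic recall at rank k: the fraction of the topic's subtopics
    covered by the top-k retrieved documents. Higher values indicate a
    more diverse ranking."""
    covered = set()
    for subtopics in ranked_doc_subtopics[:k]:
        covered.update(subtopics)
    return len(covered) / len(all_subtopics)

# Hypothetical rankings over documents labelled with the subtopics they cover.
diverse = [{"a"}, {"b"}, {"c"}]          # each result covers a new subtopic
redundant = [{"a"}, {"a"}, {"a", "b"}]   # near-duplicates dominate the top ranks
```

Here `s_recall(diverse, {"a", "b", "c"}, 3)` is 1.0 while the redundant ranking only reaches 2/3, which is the sense in which recall reflects the diversity of the retrieved documents.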

Abstract:

In this study, cell sheets comprising multilayered porcine bone marrow stromal cells (BMSC) were assembled with fully interconnected scaffolds made from medical-grade polycaprolactone–calcium phosphate (mPCL–CaP) for the engineering of structural and functional bone grafts. The BMSC sheets were harvested from culture flasks and wrapped around pre-seeded composite scaffolds. The layered cell sheets integrated well with the scaffold/cell construct and remained viable, with mineralized nodules visible both inside and outside the scaffold for up to 8 weeks of culture. Cells within the constructs underwent classical in vitro osteogenic differentiation with the associated elevation of alkaline phosphatase activity and bone-related protein expression. In vivo, two sets of cell-sheet-scaffold/cell constructs were transplanted under the skin of nude rats. The first set of constructs (554mm3) was assembled with BMSC sheets and cultured for 8 weeks before implantation. The second set of constructs (10104mm3) was implanted immediately after assembly with BMSC sheets, with no further in vitro culture. For both groups, neocortical and well-vascularized cancellous bone formed within the constructs, with up to 40% bone volume. Histological and immunohistochemical examination revealed that the new bone tissue formed from the pool of seeded BMSC and that bone formation followed a predominantly endochondral pathway, with woven bone matrix subsequently maturing into fully mineralized compact bone exhibiting the histological markers of native bone. These findings demonstrate that large bone tissues similar to native bone can be regenerated utilizing BMSC sheet techniques in conjunction with composite scaffolds whose structures are optimized from a mechanical, nutrient transport and vascularization perspective.

Abstract:

This thesis is about the derivation of the addition law on an arbitrary elliptic curve and efficiently adding points on this elliptic curve using the derived addition law. The outcomes of this research guarantee practical speedups in higher-level operations which depend on point additions. In particular, the contributions immediately find applications in cryptology. The theory of elliptic curves, mastered by nineteenth-century mathematicians, has been an active area of study for decades. Elliptic curves over finite fields made their way into public key cryptography in the late 1980s with independent proposals by Miller [Mil86] and Koblitz [Kob87]. Elliptic Curve Cryptography (ECC), following Miller's and Koblitz's proposals, employs the group of rational points on an elliptic curve in building discrete logarithm based public key cryptosystems. Starting from the late 1990s, the emergence of the ECC market has boosted the research in computational aspects of elliptic curves. This thesis falls into this same area of research, where the main aim is to speed up the addition of rational points on an arbitrary elliptic curve (over a field of large characteristic). The outcomes of this work can be used to speed up applications which are based on elliptic curves, including cryptographic applications in ECC. The aforementioned goals of this thesis are achieved in five main steps. As the first step, this thesis brings together several algebraic tools in order to derive the unique group law of an elliptic curve. This step also includes an investigation of recent computer algebra packages relating to their capabilities. Although the group law is unique, its evaluation can be performed using abundant (in fact infinitely many) formulae. As the second step, this thesis proceeds to find the best formulae for efficient addition of points. In the third step, the group law is stated explicitly by handling all possible summands.
The fourth step presents the algorithms to be used for efficient point additions. In the fifth and final step, optimized software implementations of the proposed algorithms are presented in order to show that the theoretical speedups of step four can be obtained in practice. In each of the five steps, this thesis focuses on five forms of elliptic curves over finite fields of large characteristic. These forms and their defining equations are as follows: (a) Short Weierstrass form, y^2 = x^3 + ax + b; (b) Extended Jacobi quartic form, y^2 = dx^4 + 2ax^2 + 1; (c) Twisted Hessian form, ax^3 + y^3 + 1 = dxy; (d) Twisted Edwards form, ax^2 + y^2 = 1 + dx^2y^2; (e) Twisted Jacobi intersection form, bs^2 + c^2 = 1, as^2 + d^2 = 1. These forms are the most promising candidates for efficient computations and are thus considered in this work. Nevertheless, the methods employed in this thesis are capable of handling arbitrary elliptic curves. From a high-level point of view, the following outcomes are achieved in this thesis. - Related literature results are brought together and further revisited. For most of the cases several missed formulae, algorithms, and efficient point representations are discovered. - Analogies are made among all studied forms. For instance, it is shown that two sets of affine addition formulae are sufficient to cover all possible affine inputs, as long as the output is also an affine point, in any of these forms. In the literature, many special cases, especially interactions with points at infinity, were omitted from discussion. This thesis handles all of the possibilities. - Several new point doubling/addition formulae and algorithms are introduced, which are more efficient than the existing alternatives in the literature. Most notably, the speed of the extended Jacobi quartic, twisted Edwards, and Jacobi intersection forms is improved. New unified addition formulae are proposed for the short Weierstrass form. New coordinate systems are studied for the first time.
- An optimized implementation is developed using a combination of generic x86-64 assembly instructions and the plain C language. The practical advantages of the proposed algorithms are supported by computer experiments. - All formulae presented in the body of this thesis are checked for correctness using computer algebra scripts, together with details on register allocations.
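As a point of reference for form (a), the textbook affine chord-and-tangent addition on a short Weierstrass curve over a prime field can be sketched as follows (this is the baseline such a thesis improves upon, not its optimized formulae; the curve y^2 = x^3 + 2x + 3 over F_97 is an illustrative example):

```python
def ec_add(P, Q, a, p):
    """Add points P and Q on y^2 = x^3 + a*x + b over F_p in affine
    coordinates; None denotes the point at infinity. Every special case
    (identity, inverses, doubling) must be handled explicitly."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:                                        # doubling: tangent slope
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                             # addition: chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Example: doubling P = (3, 6) on y^2 = x^3 + 2x + 3 over F_97.
P2 = ec_add((3, 6), (3, 6), a=2, p=97)
```

Each affine addition costs a field inversion (the `pow(..., -1, p)` calls), which is precisely why inversion-free projective and extended coordinate systems are the natural target for the kind of optimization described above.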

Abstract:

The pore architecture of scaffolds is known to play a critical role in tissue engineering, as it provides the vital framework for seeded cells to organize into a functioning tissue. In this report we have investigated the effects of different concentrations of silk fibroin protein on three-dimensional (3D) scaffold pore microstructure. Four pore size ranges of silk fibroin scaffolds were made by the freeze-drying technique, with the pore sizes ranging from 50 to 300 μm. The pore sizes of the scaffolds decreased as the concentration of fibroin protein increased. Human bone marrow mesenchymal stromal cells (BMSC) transfected with the BMP7 gene were cultured in these scaffolds. A cell viability colorimetric assay, alkaline phosphatase assay and reverse transcription-polymerase chain reaction were performed to analyze the effect of pore size on cell growth, the secretion of extracellular matrix (ECM) and osteogenic differentiation. Cell migration in 3D scaffolds was confirmed by confocal microscopy. Calvarial defects in SCID mice were used to determine the bone-forming ability of the silk fibroin scaffolds incorporating BMSC expressing BMP7. The results showed that BMSC expressing BMP7 preferred a pore size between 100 and 300 μm in silk fibroin scaffolds, with better cell proliferation and ECM production. Furthermore, in vivo transplantation of the silk fibroin scaffolds combined with BMSC expressing BMP7 induced new bone formation. This study has shown that an optimized pore architecture of silk fibroin scaffolds can modulate the bioactivity of BMP7-transfected BMSC in bone formation.

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organization due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that, in order to optimize asset management activities, process modelling initiatives should be adopted. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined, optimized and ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically, whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research exploring the possible benefits of their cross-disciplinary application. This research is thus inspired to investigate these two domains to exploit the application of workflow to the modelling and execution of AM processes. Specifically, it will investigate appropriate methodologies for applying workflow techniques to AM frameworks.
One of the benefits of applying workflow models to AM processes is the ability to adapt to both ad hoc and evolutionary changes over time. In addition, workflow can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and to cope with changes that occur to a process during enactment. So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a conscientious workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage is to review a number of contemporary workflow techniques and choose a suitable technique for application to AM decision-making processes. The third stage is to develop an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All these lead to the fourth stage, where a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM. Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice.
Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision-making process (e.g. in the case of building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.

Abstract:

There has been a worldwide trend to increase axle loads and train speeds. This means that railway track degradation will be accelerated, and track maintenance costs will increase significantly. There is a need to investigate the consequences of increasing traffic load. The aim of the research is to develop a model for the analysis of physical degradation of railway tracks in response to changes in traffic parameters, especially increased axle loads and train speeds. This research has developed an integrated track degradation model (ITDM) by integrating several models into a comprehensive framework. Mechanistic relationships for track degradation have been used wherever possible in each of the models contained in ITDM. This overcomes the deficiency of traditional statistical track models, which rely heavily on historical degradation data that is generally not available in many railway systems. In addition, statistical models lack the flexibility to incorporate future changes in traffic patterns or maintenance practices. The research starts with a review of railway track related studies, both in Australia and overseas, to develop a comprehensive understanding of track performance under various traffic conditions. Existing railway related models are then examined for their suitability for track degradation analysis in Australian conditions. The ITDM model is subsequently developed by modifying suitable existing models, and developing new models where necessary. The ITDM model contains four interrelated submodels for rails, sleepers, ballast and subgrade, and track modulus. The rail submodel is for rail wear analysis and is developed from a theoretical concept. The sleeper submodel is for timber sleeper damage prediction. The submodel is developed by modifying and extending an existing model developed elsewhere. The submodel has also incorporated an analysis of the likelihood of concrete sleeper cracking.
The ballast and subgrade submodel is evolved from a concept developed in the USA. Substantial modifications and improvements have been made. The track modulus submodel is developed from a conceptual method. Corrections for more global track conditions have been made. The integration of these submodels into one comprehensive package has enabled the interaction between individual track components to be taken into account. This is done by calculating wheel load distribution over time and updating track conditions periodically in the process of track degradation simulation. A Windows-based computer program associated with ITDM has also been developed. The program enables the user to carry out analysis of the degradation of individual track components and to investigate the interrelationships between these track components and their deterioration. The successful implementation of this research has provided essential information for the prediction of increased maintenance as a consequence of railway track degradation. The model, having been presented at various conferences and seminars, has attracted wide interest. It is anticipated that the model will be put into practical use among Australian railways, enabling track maintenance planning to be optimized and potentially saving Australian railway systems millions of dollars in operating costs.
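Since the submodels themselves are not reproduced in this abstract, the integration idea can only be sketched with hypothetical placeholder relationships; the point is how a framework like ITDM couples its submodels by updating track condition periodically as traffic accumulates:

```python
def simulate(years, mgt_per_year, axle_load_tonnes):
    """Toy integration loop in the spirit of ITDM: every coefficient
    below is a hypothetical placeholder, not a calibrated relationship."""
    modulus = 40.0                        # assumed starting track modulus (MPa)
    rail_wear = sleeper_damage = ballast_settlement = 0.0
    for _ in range(years):
        load = mgt_per_year * axle_load_tonnes / 20.0  # normalized annual loading
        rail_wear += 0.02 * load                       # rail wear submodel (placeholder)
        sleeper_damage += 0.01 * load                  # sleeper submodel (placeholder)
        ballast_settlement += 0.5 * load / modulus     # ballast/subgrade submodel
        modulus *= 0.995          # degrading modulus feeds back into settlement
    return rail_wear, sleeper_damage, ballast_settlement
```

Running the loop with a heavier axle load produces faster degradation in every component, which is the kind of traffic-parameter sensitivity analysis the integrated model is built to support.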

Abstract:

Nitrous oxide (N2O) is a potent agricultural greenhouse gas (GHG). More than 50% of the global anthropogenic N2O flux is attributable to emissions from soil, primarily due to large fertilizer nitrogen (N) applications to corn and other non-leguminous crops. Quantification of the trade-offs between N2O emissions, fertilizer N rate, and crop yield is an essential requirement for informing management strategies aiming to reduce the agricultural sector GHG burden, without compromising productivity and producer livelihood. There is currently great interest in developing and implementing agricultural GHG reduction offset projects for inclusion within carbon offset markets. Nitrous oxide, with a global warming potential (GWP) of 298, is a major target for these endeavours due to the high payback associated with its emission prevention. In this paper we use robust quantitative relationships between fertilizer N rate and N2O emissions, along with a recently developed approach for determining economically profitable N rates for optimized crop yield, to propose a simple, transparent, and robust N2O emission reduction protocol (NERP) for generating agricultural GHG emission reduction credits. This NERP has the advantage of providing an economic and environmental incentive for producers and other stakeholders, necessary requirements in the implementation of agricultural offset projects.

Abstract:

Nitrous oxide (N2O) is a major greenhouse gas (GHG) product of intensive agriculture. Fertilizer nitrogen (N) rate is the best single predictor of N2O emissions in row-crop agriculture in the US Midwest. We use this relationship to propose a transparent, scientifically robust protocol that can be utilized by developers of agricultural offset projects for generating fungible GHG emission reduction credits for the emerging US carbon cap and trade market. By coupling predicted N2O flux with the recently developed maximum return to N (MRTN) approach for determining economically profitable N input rates for optimized crop yield, we provide the basis for incentivizing N2O reductions without affecting yields. The protocol, if widely adopted, could reduce N2O from fertilized row-crop agriculture by more than 50%. Although other management and environmental factors can influence N2O emissions, fertilizer N rate can be viewed as a single unambiguous proxy—a transparent, tangible, and readily manageable commodity. Our protocol addresses baseline establishment, additionality, permanence, variability, and leakage, and provides for producers and other stakeholders the economic and environmental incentives necessary for adoption of agricultural N2O reduction offset projects.
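The accounting logic of such a protocol can be sketched as follows; the exponential flux response and its parameters are illustrative placeholders, not the published US Midwest coefficients:

```python
import math

GWP_N2O = 298  # 100-year CO2-equivalence of N2O, as cited in the text

def n2o_flux(n_rate, base=0.5, growth=0.01):
    """Hypothetical N2O flux (kg N2O-N/ha/yr) as an exponential function
    of fertilizer N rate (kg N/ha); `base` and `growth` are illustrative."""
    return base * math.exp(growth * n_rate)

def credit_co2e(business_as_usual_rate, mrtn_rate):
    """CO2-equivalent credit (kg CO2e/ha/yr) earned by lowering the
    fertilizer N rate from business-as-usual to the MRTN rate."""
    delta_n2o_n = n2o_flux(business_as_usual_rate) - n2o_flux(mrtn_rate)
    delta_n2o = delta_n2o_n * 44.0 / 28.0  # convert N2O-N mass to N2O mass
    return delta_n2o * GWP_N2O
```

Because the assumed response is nonlinear, the same absolute rate cut earns a larger credit when the starting rate is higher, which is consistent with targeting over-fertilized fields first.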

Abstract:

This paper presents a Genetic Algorithm (GA) approach to searching for the optimized path in a class of transportation problems. The formulation of the problems for suitable application of GA will be discussed. Exchanging genetic information in the sense of neighborhoods will be introduced for generation reproduction. The performance of the GA will be evaluated by computer simulation. The proposed algorithm uses a simple coding with population size 1 and converges to reasonable optimality within several minutes.
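The abstract does not give the paper's exact chromosome encoding or neighborhood-exchange operator, so the following is only a generic GA sketch for a small route-optimization instance, with plain swap mutation standing in for the neighborhood operator:

```python
import random

def tour_length(order, dist):
    """Length of the closed tour visiting cities in the given order."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def evolve(dist, pop_size=30, generations=200, seed=0):
    """Elitist GA: keep the best half each generation and refill the
    population with mutated copies (one random city swap per child)."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, dist))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)      # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: tour_length(o, dist))

# Four cities on the corners of a unit square; the optimal tour length is 4.
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in coords]
        for (x1, y1) in coords]
best = evolve(dist)
```

On this tiny instance the GA recovers the perimeter tour; the interesting engineering in the paper is the encoding and the neighborhood-based information exchange, which this sketch does not attempt to reproduce.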

Abstract:

Polymer networks were prepared by photocross-linking fumaric acid monoethyl ester (FAME) functionalized, three-armed poly(D,L-lactide) oligomers using N-vinyl-2-pyrrolidone (NVP) as diluent and comonomer. The use of NVP together with FAME-functionalized oligomers resulted in copolymerization at high rates, and networks with gel contents in excess of 90% were obtained. The hydrophilicity of the poly(D,L-lactide) networks increases with increasing amounts of NVP; networks containing 50 wt% NVP absorbed 40% water. As the amount of NVP was increased from 30 to 50 wt%, the Young's modulus after equilibration in water decreased from 0.8 to 0.2 GPa, as opposed to an increase from 1.5 to 2.1 GPa in the dry state. Mouse preosteoblasts readily adhered to and spread on all prepared networks. Using stereolithography, porous structures with a well-defined gyroid architecture were prepared from these novel materials. This allows the preparation of tissue engineering scaffolds with optimized pore architecture and tunable material properties.

Abstract:

This study reports on the impact of a "drink driving education program" taught to grade ten high school students. The program, which involves twelve lessons, uses strategies based on the Ajzen and Madden theory of planned behavior. Students were trained to use alternatives to drink driving and passenger behaviors. One thousand seven hundred and seventy-four students who had been taught the program in randomly assigned control and intervention schools were followed up three years later. There had been a major reduction in drink driving behaviors in both intervention and control students. In addition to this cohort change, there was a trend toward reduced drink driving in the intervention group and a significant reduction in passenger behavior in this group. Readiness to use alternatives suggested that the major impact of the program was on students who were experimenting with the behavior at the time the program was taught. The program seems to have optimized concurrent social attitude and behavior change.

Abstract:

Ocean processes are dynamic and complex events that occur on multiple spatial and temporal scales. To obtain a synoptic view of such events, ocean scientists focus on the collection of long-term time series data sets. Generally, these time series measurements are continually provided in real or near-real time by fixed sensors, e.g., buoys and moorings. In recent years, increased utilization of mobile sensor platforms, e.g., Autonomous Underwater Vehicles, has enabled the dynamic acquisition of time series data sets. However, these mobile assets are not utilized to their full capabilities, generally only performing repeated transects or user-defined patrolling loops. Here, we provide an extension to repeated patrolling of a designated area. Our algorithms provide the ability to adapt a standard mission to increase information gain in areas of greater scientific interest. By implementing a velocity control optimization along the predefined path, we are able to increase or decrease spatiotemporal sampling resolution to satisfy the sampling requirements necessary to properly resolve an oceanic phenomenon. We present a path planning algorithm that defines a sampling path optimized for repeatability. This is followed by the derivation of a velocity controller that defines how the vehicle traverses the given path. The application of these tools is motivated by an ongoing research effort to understand the oceanic region off the coast of Los Angeles, California. The computed paths and velocities are implemented on autonomous vehicles for data collection during sea trials. Results from this data collection are presented and compared for analysis of the proposed technique.
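The velocity-control idea can be sketched as follows: assign slower speeds where a science-interest map is high (denser spatiotemporal sampling) and faster speeds elsewhere, then rescale toward a fixed traversal time; the interest values and speed limits below are illustrative, not the authors' controller:

```python
def plan_velocities(segment_lengths, interest, v_min, v_max, total_time):
    """Per-segment velocities, inversely related to interest in [0, 1],
    rescaled toward `total_time` and clamped to the vehicle's limits."""
    # High interest -> slow traversal -> finer sampling resolution.
    v = [v_max - w * (v_max - v_min) for w in interest]
    traverse_time = sum(L / vi for L, vi in zip(segment_lengths, v))
    scale = traverse_time / total_time   # >1 means we must speed up overall
    return [min(v_max, max(v_min, vi * scale)) for vi in v]

# Two 100 m segments: the first crosses a high-interest region.
velocities = plan_velocities([100.0, 100.0], [1.0, 0.0],
                             v_min=0.5, v_max=2.0, total_time=250.0)
```

Because of the clamping, the rescaling is only approximate when the speed limits bind; a full controller would redistribute the remaining time across the unclamped segments.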

Abstract:

This study investigated, validated, and applied the optimum conditions for a modified microwave-assisted digestion method for the subsequent ICP-MS determination of mercury, cadmium, and lead in two matrices relevant to water quality: sediment and fish. Three different combinations of power, pressure, and time conditions for microwave-assisted digestion were tested, using two certified reference materials representing the two matrices, to determine the optimum set of conditions. Validation of the optimized method indicated better recovery of the studied metals compared to standard methods. The validated method was applied to sediment and fish samples collected from the Agusan River and one of its tributaries, located in Eastern Mindanao, Philippines. The metal concentrations in sediment ranged from 2.85 to 341.06 mg/kg for Hg, 0.05 to 44.46 mg/kg for Cd and 2.20 to 1256.16 mg/kg for Pb. The results indicate that the concentrations of these metals in the sediments decrease rapidly with distance downstream from sites of contamination. In the selected fish species, the metals were detected but at levels considered safe for human consumption, with concentrations of 2.14 to 6.82 μg/kg for Hg, 0.035 to 0.068 μg/kg for Cd, and 0.019 to 0.529 μg/kg for Pb.

Abstract:

Lithium niobate powders are directly synthesized from the raw powders of Li2CO3 and Nb2O5 by a combustion method with urea fuel. The synthesis parameters (e.g. the calcination temperature, calcination time, and urea-to-(Li2CO3 + Nb2O5) quantity ratio) are studied to reveal the optimized synthesis conditions for preparing high-quality lithium niobate powders. In the present work, it is found that a urea-to-(Li2CO3 + Nb2O5) ratio close to 3, a calcination temperature of 550-600 degrees Celsius and a reaction time of around 2.5 h lead to high-quality lithium niobate powders. The microstructure of the synthesized powders is further studied, and a possible mechanism for the involved reactions is proposed.