929 results for Solution-process


Relevance: 100.00%

Abstract:

In this paper, low-surface-energy separators with undercut structures were fabricated through a full solution process. These low-surface-energy separators are well suited to inkjet-printed passive-matrix displays of polymer light-emitting diodes. First, a patterned PS film was formed on the P4VP/photoresist film by microtransfer printing. A patterned Au-coated Ni film was then formed on the uncovered P4VP/photoresist film by electroless deposition. This metal film was used as a mask to pattern the photoresist layer and form undercut structures within it. The surface energy of the metal film was also decreased dramatically, from 84.6 mJ/m² to 21.1 mJ/m², by modifying the Au surface with a fluorinated-mercaptan self-assembled monolayer. The low-surface-energy separators successfully confined the flow of inkjet-printed PFO solution and improved the patterning resolution of inkjet printing. Separated PFO stripes, complementary to the pattern of the separators, were formed by inkjet printing.

Relevance: 70.00%

Abstract:

Solution-processed polymer films are used in multiple technological applications. The presence of residual solvent in the film, a consequence of the preparation method, affects the material properties, so films are typically subjected to post-deposition thermal annealing treatments aimed at its elimination. Monitoring the amount of solvent eliminated as a function of the annealing parameters is important to design a proper treatment that ensures complete solvent elimination, which is crucial to obtain reproducible and stable material properties and therefore device performance. Here we demonstrate, for the first time to our knowledge, the use of an organic distributed feedback (DFB) laser to monitor with high precision the amount of solvent extracted from a spin-coated polymer film as a function of the thermal annealing time. The polymer film of interest, polystyrene in the present work, is doped with a small amount of a laser dye so as to constitute the active layer of the laser device, and is deposited over a reusable DFB resonator. It is shown that solvent elimination translates into shifts in the DFB laser wavelength, as a consequence of changes in film thickness and refractive index. The proposed method is expected to be applicable to other types of annealing treatments, polymer-solvent combinations and film deposition methods, thus constituting a valuable tool to accurately control the quality and reproducibility of solution-processed polymer thin films.
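The monitoring principle rests on the DFB Bragg condition, which ties the lasing wavelength to the effective refractive index of the film and the grating period. The sketch below shows how a small index change, such as that caused by solvent loss, shifts the wavelength; the values of `n_eff` and the grating period are hypothetical illustrations, not parameters from the paper.

```python
def dfb_wavelength(n_eff, period, m=1):
    """Bragg condition for a DFB laser: m * lambda = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period / m

# Hypothetical values: the effective index and grating period below are
# assumptions for illustration, not taken from the paper.
n_eff, period = 1.60, 180e-9            # grating period in metres
lam0 = dfb_wavelength(n_eff, period)    # lasing wavelength, 576 nm here

# Solvent loss shrinks the film and lowers its refractive index; both
# effects shift the lasing wavelength, which is what the method tracks.
dlam = dfb_wavelength(n_eff - 0.01, period) - lam0
print(lam0 * 1e9, dlam * 1e9)           # wavelength and shift in nm
```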

Relevance: 60.00%

Abstract:

This document provides a review of international and national practices in investment decision-support tools for road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in road asset management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which is more appropriate in supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competing. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, several issues in its application deserve attention. First of all, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating best practices. At different decision-making levels, the detail with which externalities are considered should differ. It is intended to develop a generic framework to coordinate the range of existing practices. A standard framework will also help reduce double counting, which appears in some current practices.
Caution should also be applied to the methods of determining the value of social and environmental externalities. A number of methods, such as market price, resource costs and Willingness to Pay, were found in the review. The use of unreasonable monetisation methods has in some cases discredited Benefit Cost Analysis in the eyes of decision-makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices, owing to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form in a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices have generally omitted these externalities. Equity is an important consideration in road asset management. It may relate to regions, or to social groups defined by income, age, gender, disability, etc. In current practice there is no well-developed quantitative measure for equity issues, and more research is needed on this topic. Although Multiple Criteria Analysis has been used for decades, there is no generally accepted framework for the choice of modelling methods and externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practices, some favour methods that can prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process; others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analyses. However, the processes of assigning weights and scores have been criticised as highly arbitrary and subjective, so it is essential that the process be as transparent as possible.
Obtaining weights and scores by consulting local communities is common practice, but is likely to result in bias towards local interests. Interactive approaches have the advantage of helping decision-makers elaborate their preferences. However, the computational burden may cause decision-makers to lose interest during the solution process of a large-scale problem, such as a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 with a linear transformation, the difference between 3 and 4 represents a far greater increase in discomfort to people than the increase from 0 to 1; it is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analyses. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have received scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
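The weighting-and-scoring step at the heart of Multiple Criteria Analysis can be made concrete with a minimal weighted-sum sketch. The criteria, weights and scores below are hypothetical illustrations, not values from the review.

```python
# A minimal weighted-scoring sketch of Multiple Criteria Analysis.
# All criteria, weights and scores are hypothetical illustrations.
weights = {"cost": 0.4, "safety": 0.3, "noise": 0.2, "equity": 0.1}

# Scores on a common 0-10 scale (higher is better).
alternatives = {
    "upgrade A": {"cost": 6, "safety": 8, "noise": 5, "equity": 7},
    "upgrade B": {"cost": 8, "safety": 5, "noise": 7, "equity": 6},
}

def weighted_score(scores, weights):
    """Linear weighted sum: the simplest MCA aggregation rule."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives,
                 key=lambda a: weighted_score(alternatives[a], weights),
                 reverse=True)
print(ranking)
```

The linear aggregation is exactly what the decibel example above warns about: a one-point difference carries the same weight everywhere on the scale, whether or not the underlying physical effect is linear.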

Relevance: 60.00%

Abstract:

The purpose of this study was to identify the pedagogical knowledge relevant to the successful completion of a pie chart item. This purpose was achieved through the identification of the essential fluencies that 12–13-year-olds required for the successful solution of a pie chart item. Fluency relates to ease of solution and is particularly important in mathematics because it impacts on performance. Although the majority of students were successful on this multiple-choice item, there was considerable divergence in the strategies they employed. Approximately two-thirds of the students employed efficient multiplicative strategies, which recognised and capitalised on the pie chart as a proportional representation. In contrast, the remaining one-third of students used a less efficient additive strategy that failed to capitalise on the representation of the pie chart. The results of our investigation of students’ performance on the pie chart item during individual interviews revealed that five distinct fluencies were involved in the solution process: conceptual (understanding the question), linguistic (keywords), retrieval (strategy selection), perceptual (orientation of a segment of the pie chart) and graphical (recognising the pie chart as a proportional representation). In addition, some students exhibited mild disfluencies corresponding to the five fluencies identified above. Three major outcomes emerged from the study. First, a model of knowledge of content and students for pie charts was developed. This model can be used to inform instruction about the pie chart and guide strategic support for students. Second, perceptual and graphical fluency were identified as two aspects of the curriculum that should receive a greater emphasis in the primary years, due to their importance in interpreting pie charts. Finally, a working definition of fluency in mathematics was derived from students’ responses to the pie chart item.
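The multiplicative strategy described above, treating a segment as a proportion of the whole and scaling by the total, can be sketched in one line of arithmetic; the angle and total used here are hypothetical values, not items from the study.

```python
# Illustration of the multiplicative strategy for reading a pie chart:
# treat a segment as a proportion of the whole and scale by the total.
# The angle and total below are hypothetical, not from the study.
def segment_value(angle_deg, total):
    """Multiplicative strategy: value = (angle / 360) * total."""
    return (angle_deg / 360.0) * total

# A 90-degree segment of a pie chart summarising 60 responses
print(segment_value(90, 60))  # -> 15.0
```

An additive strategy, by contrast, would try to count or accumulate towards the answer without exploiting the proportional structure of the chart.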

Relevance: 60.00%

Abstract:

In this article we explore young children's development of mathematical knowledge and reasoning processes as they worked two modelling problems (the Butter Beans Problem and the Airplane Problem). The problems involve authentic situations that need to be interpreted and described in mathematical ways. Both problems include tables of data, together with background information containing specific criteria to be considered in the solution process. Four classes of third-graders (8 years of age) and their teachers participated in the 6-month program, which included preparatory modelling activities along with professional development for the teachers. In discussing our findings we address: (a) Ways in which the children applied their informal, personal knowledge to the problems; (b) How the children interpreted the tables of data, including difficulties they experienced; (c) How the children operated on the data, including aggregating and comparing data, and looking for trends and patterns; (d) How the children developed important mathematical ideas; and (e) Ways in which the children represented their mathematical understandings.

Relevance: 60.00%

Abstract:

Beavers often come into conflict with human interests by building dams on flowing water (leading to flooding), blocking irrigation canals, felling timber, etc. At the same time they contribute to raised water tables, increased vegetation, etc. Consequently, maintaining an optimal beaver population is beneficial. Because of their diffusion externality (due to their migratory nature), strategies based on lumped-parameter models are often ineffective. Using a distributed-parameter model for the beaver population that accounts for its spatial and temporal behavior, an optimal control (trapping) strategy is presented in this paper that leads to a desired distribution of the animal density in a region in the long run. The optimal control solution presented embeds the solution for a large number of initial conditions (i.e., it has a feedback form), which is otherwise nontrivial to obtain. The solution obtained can be used in real time by a nonexpert in control theory, since it involves only the use of neural networks trained offline. Proper orthogonal decomposition-based basis function design, followed by its use in a Galerkin projection, has been incorporated in the solution process as a model reduction technique. Optimal solutions are obtained through a "single network adaptive critic" (SNAC) neural-network architecture.
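The POD-plus-Galerkin model reduction step can be sketched on a toy problem. The snippet below applies it to a hypothetical 3-state linear system standing in for the distributed population model: snapshots are collected, the dominant POD mode is extracted by power iteration on the snapshot covariance, and a Galerkin projection yields a one-state reduced model. Every matrix and number here is an illustrative assumption, not data from the paper.

```python
# POD-based model reduction with a Galerkin projection, on a toy
# 3-state linear system (one slow mode, two fast ones). Illustrative only.

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Full model: x' = A x. All values are hypothetical.
A = [[-0.1, 0.0, 0.0],
     [0.0, -5.0, 0.0],
     [0.0, 0.0, -5.0]]
x = [1.0, 0.1, 0.1]

# 1. Collect snapshots by integrating the full model (forward Euler).
dt, snaps = 0.01, []
for _ in range(500):
    x = [xi + dt * fi for xi, fi in zip(x, mat_vec(A, x))]
    snaps.append(list(x))

# 2. Dominant POD mode = leading eigenvector of the snapshot
#    covariance, found here by power iteration.
n = 3
C = [[sum(s[i] * s[j] for s in snaps) for j in range(n)] for i in range(n)]
phi = [1.0, 1.0, 1.0]
for _ in range(100):
    phi = mat_vec(C, phi)
    norm = sum(p * p for p in phi) ** 0.5
    phi = [p / norm for p in phi]

# 3. Galerkin projection onto phi: reduced model a' = (phi^T A phi) a.
a_red = sum(p * q for p, q in zip(phi, mat_vec(A, phi)))

# 4. Run the reduced model and reconstruct the full state.
a = sum(p * xi for p, xi in zip(phi, snaps[-1]))  # project last snapshot
for _ in range(100):
    a += dt * a_red * a
x_red = [p * a for p in phi]
print(a_red, x_red)
```

In the paper the same idea is applied to a spatially distributed model, where the payoff is much larger: the Galerkin projection turns a PDE into a small set of ODEs on which the SNAC controller can be trained.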

Relevance: 60.00%

Abstract:

In this paper, several simplification methods are presented for shape control of repetitive structures such as symmetrical, rotationally periodic, linearly periodic, chain and axisymmetrical structures. Some special features of the differential equations governing these repetitive structures are examined by considering the structures as a whole. Based on the special properties of the governing equations, several methods are presented for simplifying their solution process. Finally, the static shape control of a cantilevered symmetrical plate with piezoelectric actuator patches is demonstrated using the present simplification method. The results show that the present methods can be used effectively to find the optimal control voltage for shape control.

Relevance: 60.00%

Abstract:

A theory of two-point boundary value problems analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes is developed. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.

It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
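For a scalar diffusion, the forward equation of the pair takes the standard form below; the drift a, the diffusion b and the notation are illustrative, not the paper's. If the solution process satisfies dx_t = a(x_t, t) dt + b(x_t, t) dW_t, its density p(x, t) obeys

```latex
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[a(x,t)\,p\bigr]
  + \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b^{2}(x,t)\,p\bigr]
```

and, for the boundary value problem, the analogous equation with the parameter running in the opposite direction; the density must satisfy both simultaneously.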

This theory is then applied to the problem of a vibrating string with stochastic density.

Relevance: 60.00%

Abstract:

Single-crystalline Bi2S3 with various morphologies (wires, rods, and flowers) has been successfully prepared via a simple polyol solution process and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM) techniques. The morphologies of the Bi2S3 crystals are highly dependent on the experimental parameters, including the reaction temperature, reactant ratio, sulfur source, and additive. The adjustment of these parameters can lead to an obvious shape evolution of the products, and a growth mechanism has been proposed.

Relevance: 60.00%

Abstract:

Perfectly hydrophobic (PHO) coatings consisting of silicone nanofibers have been obtained via a solution process using methyltrialkoxysilanes as precursors. On the basis of thermal gravimetry and differential thermal analysis (TG-DTA) and Fourier transform infrared spectroscopy (FTIR) results, the formula of the nanofibers was tentatively given and a possible growth mechanism of the nanofibers was proposed. Because of the low affinity between the coatings and small water droplets, when these coatings are used as substrates for collecting water vapor, the harvesting efficiency can be enhanced by more than 50% compared with a bare glass substrate at 25 °C and 60-90% relative humidity. By removing the surface methyl groups through heat treatment or ultraviolet (UV) irradiation, the as-prepared perfectly hydrophobic surface can be converted into a superhydrophilic surface.

Relevance: 60.00%

Abstract:

Polycrystalline nanotubular Bi2Te3 was prepared via a high-temperature solution process using nanoscale tellurium, decomposed from trioctylphosphine oxide-extracted tellurium species (Te-TOPO), as a sacrificial template. The formation of such a tubular structure is believed to result from the outward diffusion of Te during the alloying process. The electrical properties (Seebeck coefficient and electrical conductivity) of the polycrystalline nanotubular Bi2Te3 have been studied, and the experimental results show that the electrical conductivity is approximately three orders of magnitude smaller than that of bulk bismuth telluride, mainly due to the much larger resistance arising from poor contact between the nanotubular structures.

Relevance: 60.00%

Abstract:

Multilevel approaches to computational problems are pervasive across many areas of applied mathematics and scientific computing. The multilevel paradigm uses recursive coarsening to create a hierarchy of approximations to the original problem; an initial solution is then found for the coarsest problem and iteratively refined and improved at each level, from coarsest to finest. The solution process is aided by the global perspective (or `global view') imparted to the optimisation by the coarsening. This paper looks at the application of multilevel approaches to the Vehicle Routing Problem.
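The paradigm can be sketched on a small instance. The snippet below applies a minimal multilevel scheme (nearest-neighbour coarsening, a brute-force coarsest solve, and 2-opt refinement at each level) to a tiny travelling-salesman instance standing in for the Vehicle Routing Problem; it is an illustration of the coarsen/solve/refine loop, not the paper's algorithm.

```python
import math
from itertools import permutations

# Toy multilevel scheme for a tiny TSP instance. Illustrative only.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def coarsen(pts):
    """Merge each point with its nearest unmatched neighbour (centroid)."""
    matched, coarse, children = set(), [], []
    for i in range(len(pts)):
        if i in matched:
            continue
        rest = [j for j in range(len(pts)) if j != i and j not in matched]
        if not rest:                      # leftover singleton
            matched.add(i)
            coarse.append(pts[i])
            children.append([i])
            continue
        j = min(rest, key=lambda j: dist(pts[i], pts[j]))
        matched |= {i, j}
        coarse.append(((pts[i][0] + pts[j][0]) / 2,
                       (pts[i][1] + pts[j][1]) / 2))
        children.append([i, j])
    return coarse, children

def two_opt(tour, pts):
    """Refine a tour by 2-opt moves until no improvement remains."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

def multilevel_tsp(pts):
    # 1. Recursive coarsening builds a hierarchy of smaller instances.
    levels, maps = [pts], []
    while len(levels[-1]) > 4:
        coarse, children = coarsen(levels[-1])
        levels.append(coarse)
        maps.append(children)
    # 2. Solve the coarsest instance exactly (brute force).
    cpts = levels[-1]
    tour = list(min(permutations(range(len(cpts))),
                    key=lambda t: tour_length(list(t), cpts)))
    # 3. Uncoarsen level by level, refining at each level.
    for lvl in range(len(maps) - 1, -1, -1):
        tour = [c for node in tour for c in maps[lvl][node]]
        tour = two_opt(tour, levels[lvl])
    return tour

# Eight points on a circle: the optimal tour is the circular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
tour = multilevel_tsp(pts)
print(tour, round(tour_length(tour, pts), 4))
```

The refinement at each level only has to repair the local damage done by expanding merged nodes, which is what gives multilevel schemes their characteristic speed and global view.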

Relevance: 60.00%

Abstract:

The solution process for diffusion problems usually involves developing the time solution separately from the space solution. A finite difference algorithm in time requires a sequential time development in which all previous values must be determined prior to the current value. The Stehfest Laplace transform algorithm, however, allows time solutions without knowledge of prior values. It is therefore of interest to develop a time-domain decomposition suitable for implementation in a parallel environment. One such possibility is to use the Laplace transform to develop coarse-grained solutions which act as the initial values for a set of fine-grained solutions. The independence of the Laplace transform solutions means that we do indeed have a time-domain decomposition process. Any suitable time solver can be used for the fine-grained solution. To illustrate the technique we use an Euler solver in time together with the dual reciprocity boundary element method for the space solution.
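The Gaver-Stehfest inversion at the heart of this decomposition evaluates f(t) from real-axis samples of the Laplace transform F(s), so each time point is computed independently of earlier ones. A minimal sketch of the standard algorithm:

```python
import math

# Gaver-Stehfest numerical inversion of a Laplace transform:
# f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln 2 / t), with weights V_i
# built from factorials. Each t is independent, which is what makes
# the time-domain decomposition possible.

def stehfest(F, t, N=12):
    """Approximate the inverse Laplace transform of F at time t (N even)."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k ** half * math.factorial(2 * k)) / (
                math.factorial(half - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        total += (-1) ** (half + i) * v * F(i * ln2 / t)
    return ln2 / t * total

# Example: F(s) = 1/(s + 1) has the exact inverse f(t) = exp(-t).
print(stehfest(lambda s: 1.0 / (s + 1.0), 1.0))  # close to exp(-1)
```

In the scheme described above, each coarse-grained value obtained this way seeds an independent fine-grained time integration, so the coarse solutions can be computed in parallel.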

Relevance: 60.00%

Abstract:

This paper presents two multilevel refinement algorithms for the capacitated clustering problem. Multilevel refinement is a collaborative technique capable of significantly aiding the solution process for optimisation problems. The central methodologies of the technique are filtering solutions from the search space and reducing the level of problem detail to be considered at each level of the solution process. The first multilevel algorithm uses a simple tabu search while the other executes a standard local search procedure. Both algorithms demonstrate that the multilevel technique is capable of aiding the solution process for this combinatorial optimisation problem.

Relevance: 60.00%

Abstract:

Composite damage modelling with cohesive elements has initially been limited to the analysis of interface damage or delamination. However, their use is also being extended to the analysis of in-plane tensile failure arising from matrix or fibre fracture. These interface elements are typically placed at locations where failure is likely to occur, which implies a certain a priori knowledge of the crack propagation path(s). In the case of a crack jump, for example, the location of the jump is usually not obvious, and the simulation would require the placement of cohesive elements at all element faces. A better option, presented here, is to determine the potential location of cohesive elements and insert them during the analysis. The aim of this work is to enable the determination of the crack path as part of the solution process. A subroutine has been developed and implemented in the commercial finite element package ABAQUS/Standard [1] in order to automatically insert cohesive elements within a pristine model, on the basis of the analysis of the current stress field. Results for the prediction of delamination are presented in this paper.
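The insertion step can be caricatured outside of any finite element package: scan interface stresses, and wherever the criterion is exceeded, duplicate the shared node and record a cohesive element between the two copies. The 1D connectivity, stress values and threshold below are all hypothetical; the actual implementation is an ABAQUS/Standard subroutine operating on a 3D stress field.

```python
# Schematic sketch of on-the-fly cohesive element insertion on a 1D chain
# of bulk elements. Connectivity, stresses and the threshold are
# hypothetical; this is not the paper's subroutine.

def insert_cohesive(elements, stresses, sigma_c):
    """elements: list of (node_a, node_b) for a chain of bulk elements.
    stresses[i]: stress across the interface between elements i and i+1.
    Where stress exceeds sigma_c, the shared node is duplicated and a
    cohesive element is recorded between the two copies."""
    elements = [list(e) for e in elements]
    cohesive = []
    next_node = max(n for e in elements for n in e) + 1
    for i, s in enumerate(stresses):
        if s > sigma_c:
            shared = elements[i][1]           # node joining elements i, i+1
            elements[i + 1][0] = next_node    # give element i+1 a new copy
            cohesive.append((shared, next_node))
            next_node += 1
    return elements, cohesive

# Three bulk elements in a chain: 0-1, 1-2, 2-3; the second interface
# exceeds the (hypothetical) critical stress and is split.
elems = [(0, 1), (1, 2), (2, 3)]
new_elems, coh = insert_cohesive(elems, stresses=[10.0, 55.0], sigma_c=50.0)
print(new_elems, coh)
```

The key point, as in the paper, is that the mesh starts pristine and the potential crack path emerges from the stress field during the analysis rather than being prescribed in advance.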