982 results for DETERMINES


Relevance: 10.00%

Publisher:

Abstract:

In the present work, solidification of a hyper-eutectic ammonium chloride solution in a bottom-cooled cavity (i.e. with a stable thermal gradient) is studied numerically. A Rayleigh number based criterion is developed which determines the conditions favorable for freckle formation. This criterion, when expressed in terms of physical properties and process parameters, yields the condition for plume formation as a function of concentration, liquid fraction, permeability, growth rate of the mushy layer and thermophysical properties. Subsequently, numerical simulations are performed for cases with initial and boundary conditions favoring freckle formation. The effects of parameters such as cooling rate and initial concentration on the formation and growth of freckles are investigated. A high cooling rate is found to produce larger and better-defined channels, which are retained for longer durations. Similarly, a lower initial solute concentration results in fewer but more pronounced channels. The number and size of the channels are also found to be related to the mushy zone thickness. The predicted trends in the variation of the number of channels with time under different process conditions are in accordance with experimental observations reported in the literature.
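
As an illustration of the kind of criterion involved (a generic mushy-layer Rayleigh number, not necessarily the exact expression derived in the paper), freckling is commonly gauged by comparing the solutal buoyancy driving of the interdendritic liquid with its dissipation:

    Ra_m = g (\Delta\rho/\rho) \bar{K} h / (\alpha \nu)

where \Delta\rho/\rho is the solutal density contrast across the mushy layer, \bar{K} its mean permeability, h its thickness, \alpha the thermal diffusivity and \nu the kinematic viscosity of the melt; plume (freckle) formation is expected once Ra_m exceeds a critical value.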

Relevance: 10.00%

Publisher:

Abstract:

CaO-SiO2-FeOx-P2O5-MgO bearing slags are typical of the basic oxygen steelmaking (BOS) process. The partition ratio of phosphorus between slag and steel is an index of the phosphorus holding capacity of the slag, which determines the phosphorus content achievable in the finished steel. The influences of FeO concentration and basicity on the equilibrium phosphorus partition ratio were experimentally determined at temperatures of 1873 and 1923 K, under conditions of MgO saturation. The partition ratio initially increased with basicity but attained a constant value beyond a basicity of 2.5. An increase in FeO concentration up to approximately 13 to 14 mass pct was beneficial for phosphorus partition.
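
For reference, the partition ratio discussed here is conventionally defined as the ratio of phosphorus mass fractions in the two phases at equilibrium,

    L_P = (%P)_slag / [%P]_metal

so a higher L_P means more phosphorus is held by the slag and a lower phosphorus content is achievable in the finished steel.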

Relevance: 10.00%

Publisher:

Abstract:

Properties of nanoparticles are size dependent, and a model to predict particle size is of importance. Gold nanoparticles are commonly synthesized by reducing tetrachloroauric acid with trisodium citrate, a method pioneered by Turkevich et al (Discuss. Faraday Soc. 1951, 11, 55). Data from several investigators that used this method show that when the ratio of initial concentrations of citrate to gold is varied from 0.4 to similar to 2, the final mean size of the particles formed varies by a factor of 7, while subsequent increases in the ratio hardly have any effect on the size. In this paper, a model is developed to explain this widely varying dependence. The steps that lead to the formation of particles are as follows: reduction of Au3+ in solution, disproportionation of Au+ to gold atoms and their nucleation, growth by disproportionation on particle surface, and coagulation. Oxidation of citrate results in the formation of dicarboxy acetone, which aids nucleation but also decomposes into side products. A detailed kinetic model is developed on the basis of these steps and is combined with population balance to predict particle-size distribution. The model shows that, unlike the usual balance between nucleation and growth that determines the particle size, it is the balance between rate of nucleation and degradation of dicarboxy acetone that determines the particle size in the citrate process. It is this feature that is able to explain the unusual dependence of the mean particle size on the ratio of citrate to gold salt concentration. It is also found that coagulation plays an important role in determining the particle size at high concentrations of citrate.
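
A minimal sketch of the competition described above, assuming hypothetical lumped rate constants and a drastically simplified reaction set (this is not the authors' detailed kinetic or population-balance model): citrate oxidation produces gold atoms and dicarboxy acetone (DCA), DCA both promotes nucleation and decays to side products, and atoms not consumed by nucleation grow the existing particles.

# Toy sketch (hypothetical rate constants, simplified stoichiometry), not the published model:
# the competition between DCA-assisted nucleation and DCA degradation sets the final
# number of particles and hence the mean size.
import numpy as np
from scipy.integrate import solve_ivp

k_r, k_d, k_n, k_g = 1.0, 0.5, 2.0, 50.0   # reduction, DCA decay, nucleation, growth (arbitrary units)
n_crit = 50.0                               # atoms per critical nucleus (assumed)

def rhs(t, y):
    au3, cit, dca, atoms, npart = y
    red  = k_r * au3 * cit            # Au3+ reduction by citrate -> gold atoms + DCA
    nuc  = k_n * dca * atoms          # DCA-assisted nucleation consumes atoms
    grow = k_g * atoms * npart        # growth onto existing particles consumes atoms
    return [-red, -red, red - k_d * dca, red - nuc - grow, nuc / n_crit]

for ratio in (0.5, 1.0, 2.0, 5.0):          # initial citrate : gold ratio
    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, ratio, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    au3, cit, dca, atoms, npart = sol.y[:, -1]
    gold_in_particles = 1.0 - au3 - atoms                  # mass balance on gold
    size = (gold_in_particles / max(npart, 1e-12)) ** (1.0 / 3.0)   # relative diameter
    print(f"citrate/Au = {ratio:4.1f}: particles = {npart:.3e}, relative size = {size:.2f}")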

Relevance: 10.00%

Publisher:

Abstract:

Construction and demolition (C&D) waste has negative impacts on the environment. As a significant proportion of C&D waste is related to the design stage of a project, there is an opportunity for architects to reduce the waste. However, research suggests that many architects do not understand the impact of their design on waste generation. Training and education are proposed by current researchers to improve architects’ knowledge; however, this has not been adequately validated as a viable approach to solving waste issues. This research investigates architects’ perceptions of waste management in the design phase, and determines whether they feel they are adequately skilled in reducing C&D waste. Questionnaire surveys were distributed to architects from 98 architectural firms and 25 completed surveys were returned. The results show that while architects are aware of the relationship between design and waste, ‘extra time’ and ‘lack of knowledge’ are the key barriers to implementing waste reduction strategies. In addition, the majority of respondents acknowledge their lack of skill in reducing waste through design evaluation. Therefore, training programmes can be a viable strategy to enable them to address the pressing issue of C&D waste reduction.

Relevance: 10.00%

Publisher:

Abstract:

The resources of health systems are limited, and decision-making requires information on how the health system performs. This study is about the utilization of administrative registers in the context of health system performance evaluation. To address this issue, a multidisciplinary methodological framework for register-based data analysis is defined. Because the fixed structure of register-based data indirectly places constraints on the theoretical constructs, it is essential to elaborate the whole analytic process with respect to the data. The fundamental methodological concepts and theories are synthesized into a data-sensitive approach, which helps to understand and overcome the problems that are likely to be encountered during a register-based data analysis. Pragmatically useful health system performance monitoring should produce valid information about the volume of the problems, about the use of services and about the effectiveness of the services provided. A conceptual model for hip fracture performance assessment is constructed, and the validity of Finnish registers as a data source for assessing the performance of hip fracture treatment is confirmed. Solutions to several pragmatic problems related to the development of a register-based hip fracture incidence surveillance system are proposed. The monitoring of the effectiveness of treatment is shown to be possible in terms of care episodes. Finally, an example is given of the justification of a more detailed performance indicator to be used in profiling providers. In conclusion, it is possible to produce useful and valid information on health system performance by using Finnish register-based data; however, doing so is far more complicated than is typically assumed. The perspectives given in this study provide a necessary basis for further work and support the routine implementation of a hip fracture monitoring system in Finland.

Relevance: 10.00%

Publisher:

Abstract:

This study takes as its premise the prominent social and cultural role that the couple relationship has acquired in modern society. Marriage as a social institution and romantic love as a cultural script have not lost their significance, but during the last few decades the concept of the relationship has taken prominence in our understanding of the love relationship. This change has taken place in a society governed by the therapeutic ethos. This study uses material ranging from in-depth interviews to various mass media texts to investigate the therapeutic logic that determines our understanding of the couple relationship. The central concept in this study is the therapeutic relationship, which does not refer to any particular type of relationship: in contemporary usage, the relationship is by definition therapeutic. The therapeutic relationship is seen as an endless source of conflict and a highly complex dynamic unit in constant need of attention and treatment. Notwithstanding this emphasis on therapy and relationship work, the therapeutic relationship lacks any morally or socially defined direction. Here lies the cultural power and, according to critics, the dubious aspect of the therapeutic ethos. Within the therapeutic logic, any reason for divorce is possible and plausible. Prosaically speaking, the question is not whether to divorce, but when; in the end, divorce only attests to the complexity of the relationship. The therapeutic understanding of the relationship creates the illusion that relationships, with their tensions and conflicting emotions, can be fully transferred to the sphere of transparency and therapeutic processing. This illusion, created by relationship talk that emphasizes individual control, is called the omnipotence of the individual. The study shows, however, that this omnipotence is inevitably limited and hence cracks appear in it. These cracks show that while the therapeutic relationship, based on the ideal of communication, gives the individual a mode of speaking that stresses autonomy, equality and emotional gratification, it offers little help in expressing our fundamental dependence on other people. The study shows how strong an attraction the therapeutic ethos, with its grasp on the complexities of the relationship, exerts in a society where divorce is common and the risk of divorce is collectively experienced.

Relevance: 10.00%

Publisher:

Abstract:

We report a measurement of the top quark mass, m_t, obtained from ppbar collisions at sqrt(s) = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. We analyze a sample corresponding to an integrated luminosity of 1.9 fb^-1. We select events with an electron or muon, large missing transverse energy, and exactly four high-energy jets in the central region of the detector, at least one of which is tagged as coming from a b quark. We calculate a signal likelihood using a matrix element integration method, with effective propagators to take into account assumptions on event kinematics. Our event likelihood is a function of m_t and a parameter JES that determines in situ the calibration of the jet energies. We use a neural network discriminant to distinguish signal from background events. We also apply a cut on the peak value of each event likelihood curve to reduce the contribution of background and badly reconstructed events. Using the 318 events that pass all selection criteria, we find m_t = 172.7 +/- 1.8 (stat. + JES) +/- 1.2 (syst.) GeV/c^2.
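
A toy sketch of the in situ jet-energy-scale (JES) calibration idea (simple Gaussian stand-ins with arbitrary resolutions and assumed true values, nothing like the matrix-element integration actually used): a W-mass-like observable, whose true value is known, constrains JES while the likelihood is scanned in m_t and JES is profiled away.

# Toy illustration (not the CDF matrix-element likelihood): profiling a jet energy
# scale (JES) together with m_t, so that JES is calibrated "in situ" by the data.
import numpy as np

rng = np.random.default_rng(0)
m_true, jes_true, n_events = 172.5, 1.02, 318    # assumed values; 318 events as in the text
sigma_m, sigma_w, m_w = 14.0, 8.0, 80.4          # toy resolutions (GeV) and known W mass

# Pseudo-data: per-event "top-like" and "W-like" reconstructed masses, both scaled by JES.
x = rng.normal(jes_true * m_true, sigma_m, n_events)
w = rng.normal(jes_true * m_w,   sigma_w, n_events)

def nll(mt, jes):
    """Toy negative log-likelihood: Gaussian terms for both observables."""
    return (np.sum((x - jes * mt) ** 2) / (2 * sigma_m ** 2)
            + np.sum((w - jes * m_w) ** 2) / (2 * sigma_w ** 2))

mt_grid  = np.linspace(165.0, 180.0, 151)
jes_grid = np.linspace(0.95, 1.10, 151)
profile = np.array([min(nll(mt, j) for j in jes_grid) for mt in mt_grid])   # profile out JES
print("profiled m_t estimate:", mt_grid[np.argmin(profile)], "GeV/c^2")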

Relevance: 10.00%

Publisher:

Abstract:

Effects of non-polar, polar and proton-donating solvents on the n → π* transitions of C=O, C=S, NO2 and N=N groups have been investigated. The shifts of the absorption maxima in non-polar and polar solvents have been related to the electrostatic interactions between solute and solvent molecules, by employing the theory of McRae. In solvents which can donate protons the solvent shifts are mainly determined by solute-solvent hydrogen bonding. Isosbestic points have been found in the n → π* bands of ethylenetrithiocarbonate in heptane-alcohol and heptane-chloroform solvent systems, indicating the existence of equilibria between the hydrogen-bonded and the free species of the solute. Among the different proton-donating solvents studied, water produces the largest blue-shifts. The blue-shifts in alcohols decrease in the order 2,2,2-trifluoroethanol, methanol, ethanol, isopropanol and t-butanol, the blue-shift in trifluoroethanol being nearly equal to that in water. This trend is exactly opposite to that for the self-association of alcohols. It is suggested that electron-withdrawing groups not only decrease the extent of self-association of alcohols, but also increase their ability to donate hydrogen bonds. The approximate hydrogen-bond energies for several donor-acceptor systems have been estimated. In the series of aliphatic ketones and nitro compounds studied, the blue-shifts, and consequently the hydrogen-bond energies, decrease with decreasing electron-withdrawing power of the alkyl groups. It is felt that electron-withdrawing groups render the chromophores better proton acceptors, and the alcohols better donors. A linear relationship between the n → π* transition frequency and the infrared frequency of ethylenetrithiocarbonate has been found. It is concluded that stabilization of the electronic ground states of solute molecules by electrostatic and/or hydrogen-bond interactions determines the solvent shifts.
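
As a worked illustration of how such hydrogen-bond energies are commonly estimated from blue-shifts (a standard approximation, not necessarily the exact procedure followed here), the shift of the band maximum expressed in wavenumbers is converted directly into a molar energy,

    E_HB \approx N_A h c \Delta\tilde{\nu}

so a blue-shift of 1000 cm^-1 corresponds to roughly 12 kJ/mol (about 2.9 kcal/mol).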

Relevance: 10.00%

Publisher:

Abstract:

The question at issue in this dissertation is the epistemic role played by ecological generalizations and models. I investigate and analyze such properties of generalizations as lawlikeness, invariance, and stability, and I ask which of these properties are relevant in the context of scientific explanations. I claim that there are generalizable and reliable causal explanations in ecology, provided by generalizations that are invariant and stable. An invariant generalization continues to hold or be valid under a special change, called an intervention, that changes the value of its variables. Whether a generalization remains invariant under such interventions is the criterion that determines whether it is explanatory. A generalization can be invariant and explanatory regardless of its lawlike status. Stability concerns a different kind of generality, namely how widely a generalization continues to hold across possible background conditions: the more stable a generalization, the less dependent it is on background conditions to remain true. Although it is invariance rather than stability that furnishes us with explanatory generalizations, stability has an important function in this context of explanation: it furnishes the extrapolability and reliability of scientific explanations. I also discuss non-empirical investigations of models, which I call robustness and sensitivity analyses. I call sensitivity analyses those investigations in which one model is studied with regard to its stability conditions by making changes and variations to the values of the model's parameters. As a general definition of robustness analyses I propose investigations of variations in the modeling assumptions of different models of the same phenomenon, in which the focus is on whether they produce similar or convergent results. Robustness and sensitivity analyses are powerful tools for studying the conditions and assumptions under which models break down, and they are especially powerful in pointing out the reasons why they do so: they show which conditions or assumptions the results of models depend on. Keywords: ecology, generalizations, invariance, lawlikeness, philosophy of science, robustness, explanation, models, stability
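
A generic sketch of what a sensitivity analysis in the sense defined above can look like in practice (the logistic growth model and the numbers are purely illustrative and not drawn from the dissertation): one parameter at a time is varied and the change in the model's output is recorded.

# Illustrative only: a one-at-a-time parameter sweep of a logistic growth model
# dN/dt = r*N*(1 - N/K), of the kind called a "sensitivity analysis" above.
def logistic(n0, r, k, t_end=50.0, dt=0.01):
    """Integrate logistic growth with forward Euler; return the final population."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * r * n * (1.0 - n / k)
    return n

baseline = dict(n0=5.0, r=0.4, k=100.0)
for param in ("n0", "r", "k"):
    for factor in (0.8, 1.0, 1.2):           # vary one parameter at a time by +/- 20%
        args = dict(baseline, **{param: baseline[param] * factor})
        print(f"{param} x {factor:.1f} -> N(50) = {logistic(**args):7.2f}")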

Relevance: 10.00%

Publisher:

Abstract:

The complex web of interactions between the host immune system and the pathogen determines the outcome of any infection. A computational model of this interaction network, which encodes the complex interplay among host and bacterial components, forms a useful basis for improving the understanding of pathogenesis, for filling knowledge gaps and, consequently, for identifying strategies to counter the disease. We have built an extensive model of the Mycobacterium tuberculosis host-pathogen interactome, consisting of 75 nodes corresponding to host and pathogen molecules, cells, cellular states or processes. Vaccination effects, clearance efficiencies due to drugs and growth rates have also been encoded in the model. The system is modelled as a Boolean network. Virtual deletion experiments, multiple parameter scans and analysis of the system's response to perturbations indicate that disabling processes such as phagocytosis and phagolysosome fusion, or cytokines such as TNF-alpha and IFN-gamma, greatly impairs bacterial clearance, while removing cytokines such as IL-10 together with bacterial defence proteins such as SapM greatly favours clearance. Simulations indicate a high propensity of the pathogen to persist under different conditions.
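
A minimal sketch of a Boolean-network "virtual deletion" experiment in the spirit described above, with a handful of hypothetical nodes and made-up update rules (the published model has 75 nodes and far richer logic):

# Toy Boolean network with synchronous updates; knockouts force a node to False.
RULES = {
    "TNFa":      lambda s: s["infection"],
    "IFNg":      lambda s: s["infection"],
    "IL10":      lambda s: s["infection"],
    "phagosome": lambda s: s["TNFa"] and s["IFNg"] and not s["IL10"],
    "clearance": lambda s: s["phagosome"],
    "infection": lambda s: s["infection"] and not s["clearance"],
}

def simulate(knockouts=(), steps=20):
    state = {n: False for n in RULES}
    state["infection"] = True                      # start from an infected state
    for _ in range(steps):
        state = {n: (False if n in knockouts else f(state)) for n, f in RULES.items()}
    return state

for ko in [(), ("TNFa",), ("IFNg",), ("IL10",)]:
    final = simulate(knockouts=ko)
    print(f"knockout {ko or 'none'}: infection={final['infection']}, clearance={final['clearance']}")

With these made-up rules the untreated network persists in the infected state, knocking out TNFa or IFNg keeps clearance off, and knocking out IL10 lets the phagosome node activate and clear the infection, mirroring the qualitative behaviour summarized in the abstract.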

Relevance: 10.00%

Publisher:

Abstract:

The liquidity crisis that swept through the financial markets in 2007 triggered multi-billion losses and forced buyouts of some large banks. The resulting credit crunch is sometimes compared to the Great Depression of the early twentieth century. But the crisis also serves as a reminder of the significance of the interbank market and of proper central bank policy in this market. This thesis deals with the implementation of monetary policy in the interbank market and examines how central bank tools affect commercial banks' decisions. I answer the following questions:

• What is the relationship between the policy setup and interbank interest rate volatility? (an averaging reserve requirement reduces the volatility)
• What can explain a weak relationship between market liquidity and the interest rate? (a high reserve requirement buffer)
• What determines banks' decisions on when to satisfy the reserve requirement? (market frictions)
• How did the liquidity crisis that began in 2007 affect interbank market behaviour? (it resulted in higher credit risk and trading frictions, as well as an expected liquidity shortage)

Relevance: 10.00%

Publisher:

Abstract:

Tanner graph representation of linear block codes is widely used by iterative decoding algorithms for recovering data transmitted across a noisy communication channel from the errors and erasures introduced by the channel. The stopping distance of a Tanner graph T for a binary linear block code C determines the number of erasures correctable using iterative decoding on the Tanner graph T when data is transmitted across a binary erasure channel using the code C. We show that the problem of finding the stopping distance of a Tanner graph is hard to approximate within any positive constant approximation ratio in polynomial time unless P = NP. It is also shown, as a consequence, that there can be no approximation algorithm for the problem achieving an approximation ratio of 2^((log n)^(1-epsilon)) for any epsilon > 0 unless NP ⊆ DTIME(n^(poly(log n))).
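
A minimal brute-force sketch of the quantity in question (feasible only for very small codes, consistent with the hardness result above): a stopping set is a non-empty set S of variable nodes such that no check node has exactly one neighbour in S, and the stopping distance is the size of the smallest such S. The parity-check matrix below is an arbitrary small example, not one from the paper.

# Brute-force stopping distance of the Tanner graph given by parity-check matrix H.
from itertools import combinations
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # arbitrary small example
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def is_stopping_set(cols):
    counts = H[:, cols].sum(axis=1)     # number of neighbours of S per check node
    return not np.any(counts == 1)      # no check node touches S exactly once

n = H.shape[1]
for size in range(1, n + 1):
    hits = [s for s in combinations(range(n), size) if is_stopping_set(list(s))]
    if hits:
        print("stopping distance =", size, "e.g. stopping set", hits[0])
        break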

Relevance: 10.00%

Publisher:

Abstract:

Most structural elements, such as beams and cables, are flexible and should be modeled as distributed parameter systems (DPS) to represent reality better. For large structures, the usual approach of 'modal representation' is not accurate. Moreover, for excessive vibrations (possibly due to strong wind, earthquakes etc.), an external power source (controller) is needed to suppress them, as the natural damping of these structures is usually small. In this paper, we propose to use a recently developed optimal dynamic inversion technique to design a set of discrete controllers for this purpose. We assume that the control force is applied to the structure through a finite number of actuators, which are located at predefined locations in the spatial domain. The method used in this paper determines the control forces directly from the partial differential equation (PDE) model of the system. The formulation has better practical significance, both because it leads to a closed-form solution for the controller (hence avoiding computational issues) and because a set of discrete actuators along the spatial domain can be implemented with relative ease (as compared to a continuous actuator).
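
As a generic illustration of the setting (an Euler-Bernoulli beam with point actuators, not necessarily the specific DPS treated in the paper), the control force entering through N discrete actuators at fixed locations x_i appears in the PDE as a sum of point loads:

    \rho A \frac{\partial^2 w}{\partial t^2} + EI \frac{\partial^4 w}{\partial x^4} = \sum_{i=1}^{N} f_i(t)\, \delta(x - x_i)

where w(x, t) is the transverse displacement; the design problem is then to choose the discrete forces f_i(t) directly from this PDE model rather than from a truncated modal approximation.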

Relevance: 10.00%

Publisher:

Abstract:

The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems have much similarity to engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called a native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space. It is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size that is subjected to certain boundary conditions. In the inverse-folding problem, one has to design a sequence with some objectives (having a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.), obtaining a sequence that would fold to the desired conformation while satisfying the criteria of folding. This requires a search in the sequence space. It is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues and present some initial results.
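
A toy illustration of the conformation-space search involved in the folding problem (a standard 2-D HP lattice model with a made-up sequence; this is not the topology-optimization formulation developed in the paper): all self-avoiding conformations of a short chain are enumerated and the one minimizing the number of non-bonded H-H contacts is taken as the native state.

# Toy 2-D HP lattice folding: exhaustive search over self-avoiding walks,
# energy = -(number of non-bonded H-H contacts). Illustrative only.
SEQ = "HPHPPHHPH"                      # hypothetical short sequence
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def energy(path):
    occupied = {p: SEQ[i] for i, p in enumerate(path)}
    contacts = 0
    for i, (x, y) in enumerate(path):
        if SEQ[i] != "H":
            continue
        for dx, dy in MOVES:
            q = (x + dx, y + dy)
            # count H neighbours that are not chain neighbours; each contact is seen twice
            if occupied.get(q) == "H" and abs(path.index(q) - i) > 1:
                contacts += 1
    return -(contacts // 2)

best = [0, None]                        # [lowest energy found, corresponding conformation]

def grow(path):
    if len(path) == len(SEQ):
        e = energy(path)
        if e < best[0]:
            best[0], best[1] = e, list(path)
        return
    x, y = path[-1]
    for dx, dy in MOVES:
        nxt = (x + dx, y + dy)
        if nxt not in path:             # self-avoidance
            path.append(nxt)
            grow(path)
            path.pop()

grow([(0, 0), (1, 0)])                  # fix the first bond to remove one symmetry
print("minimum energy:", best[0], "conformation:", best[1])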

Relevance: 10.00%

Publisher:

Abstract:

The Upwind-Least Squares Finite Difference (LSFD-U) scheme has been applied successfully to inviscid flow computations. In the present work, we extend the procedure to the computation of viscous flows. Different ways of discretizing the viscous fluxes are analysed for positivity, which determines the robustness of the solution procedure. The discretization that is found to be more positive is employed for the viscous flux computation. Numerical results validating the procedure are presented.
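
A minimal sketch of the least-squares finite-difference idea that such schemes build on (generic, assuming an arbitrary scattered stencil; it does not reproduce the paper's upwind or viscous-flux discretization): derivatives at a node are recovered by a least-squares fit of the function differences to the neighbouring points.

# Generic least-squares finite-difference gradient at a point from scattered neighbours:
# solve min over grad of sum_i (f_i - f_0 - grad . dx_i)^2.
import numpy as np

def lsfd_gradient(x0, f0, neighbours, f_neighbours):
    dX = np.asarray(neighbours) - np.asarray(x0)      # offsets dx_i (n x 2)
    dF = np.asarray(f_neighbours) - f0                # differences f_i - f_0
    grad, *_ = np.linalg.lstsq(dX, dF, rcond=None)    # least-squares solution
    return grad

# Test on f(x, y) = 3x + 2y with an irregular stencil: the exact gradient is (3, 2).
f = lambda p: 3.0 * p[0] + 2.0 * p[1]
x0 = (0.0, 0.0)
nbrs = [(0.1, 0.0), (-0.05, 0.08), (0.02, -0.1), (0.07, 0.06)]
print(lsfd_gradient(x0, f(x0), nbrs, [f(p) for p in nbrs]))   # ~[3. 2.]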