913 results for Averaging Principle
Abstract:
Ammonium perchlorate (AP) has been coated with polystyrene (PS), cellulose acetate (CA), Novolak resin and polymethylmethacrylate (PMMA) by a solvent/nonsolvent method which makes use of the coacervation principle. The effect of polymer coating on AP decomposition has been studied using thermogravimetry (TG) and differential thermal analysis (DTA). Polymer coating results in the desensitization of AP decomposition. The observed effect has been attributed to the thermophysical and thermochemical properties of the polymer used for coating. The effect of polystyrene coating on thermal decomposition of aluminium perchlorate trihydrazinate and ammonium nitrate as well as on the combustion of AP-CTPB composite propellants has been studied.
Abstract:
Driving can be a lonely activity. While there has been a lot of research and technical invention concerning car-to-car communication and passenger entertainment, there is still little work on connecting drivers. Whereas tourism is very much a social activity, drive tourists have few options to communicate with fellow travellers. The proposed project sits at the intersection of tourism and driving and aims to enhance the trip experience during driving through social interaction. This thesis explores how a mobile application that allows instant messaging between travellers sharing a similar context can add to road trip experiences. To inform the design of such an application, the project adopted the principles of the user-centred design process. User needs were assessed by running an ideation workshop and a field trip. Findings of both studies showed that tourists have different preferences and diverse attitudes towards contacting new people. Yet all participants stressed the value of social recommendations. Based on those results and a later expert review, three prototype versions of the system were created. A prototyping session with potential end users highlighted the most important features, including the possibility to view user profiles, choose between text and audio input, and receive up-to-date information. An implemented version of the prototype was evaluated in an exploratory study to identify usability-related problems in an actual use-case scenario as well as to find implementation bugs. The outcomes of this research are relevant for the design of future mobile tourist guides that leverage the benefits of social recommendations.
Abstract:
The utility of near infrared spectroscopy as a non-invasive technique for the assessment of internal eating quality parameters of mandarin fruit (Citrus reticulata cv. Imperial) was assessed. The calibration procedure for the attributes of TSS (total soluble solids) and DM (dry matter) was optimised with respect to a reference sampling technique, scan averaging, spectral window, data pre-treatment (in terms of derivative treatment and scatter correction routine) and regression procedure. The recommended procedure involved sampling of an equatorial position on the fruit with 1 scan per spectrum, and modified partial least squares model development on a 720–950-nm window, pre-treated as first derivative absorbance data (gap size of 4 data points) with standard normal variate (SNV) and detrend scatter correction. Calibration model performance for the attributes of TSS and DM content was encouraging (typical Rc2 of >0.75 and 0.90, respectively; typical root mean squared standard error of calibration of <0.4 and 0.6%, respectively), whereas that for juiciness and total acidity was unacceptable. The robustness of the TSS and DM calibrations across new populations of fruit is documented in a companion study.
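The pre-treatment pipeline described above (first-derivative absorbance with a 4-point gap, followed by a scatter correction) can be sketched in Python. This is a minimal illustration: the toy spectra, the simple gap-difference derivative, and applying SNV after the derivative are assumptions, not the study's actual data or software.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def gap_derivative(spectra, gap=4):
    """First derivative with a gap of `gap` data points (simple difference)."""
    return spectra[:, gap:] - spectra[:, :-gap]

# toy data: 10 "spectra" of 231 points (e.g. 720-950 nm at 1-nm steps)
rng = np.random.default_rng(0)
spectra = rng.random((10, 231))
pre = snv(gap_derivative(spectra, gap=4))
print(pre.shape)  # (10, 227): the gap derivative shortens each spectrum by 4 points
```

The pre-treated matrix `pre` would then feed a (modified) partial least squares regression against the TSS or DM reference values.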
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that all consistent formal systems containing arithmetic include sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantical theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantical truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective.
Against this, I argue that all truly deflationist philosophical theories lead to arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a reference of the pre-formal mathematics, and the expansion with Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to change the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. From the alternative approaches, in this work I focus mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
Abstract:
The attempt to refer meaningful reality as a whole to a unifying ultimate principle - the quest for the unity of Being - was one of the basic tendencies of Western philosophy from its beginnings in ancient Greece up to Hegel's absolute idealism. However, the different trends of contemporary philosophy tend to regard such a speculative metaphysical quest for unity as obsolete. This study addresses this contemporary situation on the basis of the work of Martin Heidegger (1889-1976). Its methodological framework is Heidegger's phenomenological and hermeneutical approach to the history of philosophy. It seeks to understand, in terms of the metaphysical quest for unity, Heidegger's contrast between the first (Greek) beginning or "onset" (Anfang) of philosophy and another onset of thinking. This other onset is a possibility inherent in the contemporary situation in which, according to Heidegger, the metaphysical tradition has developed to its utmost limits and thereby come to an end. Part I is a detailed interpretation of the surviving fragments of the Poem of Parmenides of Elea (fl. c. 500 BC), an outstanding representative of the first philosophical beginning in Heidegger's sense. It is argued that the Poem is not a simple denial of apparent plurality and difference ("mortal acceptances," doxai) in favor of an extreme monism. Parmenides' point is rather to show in what sense the different instances of Being can be reduced to an absolute level of truth or evidence (aletheia), which is the unity of Being as such (to eon). What in prephilosophical human experience is accepted as being is referred to the source of its acceptability: intelligibility as such, the simple and undifferentiated presence to thinking that ultimately excludes unpresence and otherness. Part II interprets selected key texts from different stages in Heidegger's thinking in terms of the unity of Being. 
It argues that one aspect of Heidegger's sustained and gradually deepening philosophical quest was to think the unity of Being as singularity, as the instantaneous, context-specific, and differential unity of a temporally meaningful situation. In Being and Time (1927) Heidegger articulates the temporal situatedness of the human awareness of meaningful presence. His later work moves on to study the situational correlation between presence and the human awareness. Heidegger's "postmetaphysical" articulation seeks to show how presence becomes meaningful precisely as situated, in an event of differentiation from a multidimensional context of unpresence. In resigning itself to this irreducibly complicated and singular character of meaningful presence, philosophy also faces its own historically situated finitude. This resignation is an essential feature of Heidegger's "other onset" of thinking.
Abstract:
As part of the 2014 amendments to the Youth Justice Act 1992 (Qld) the previous Queensland government introduced a new breach of bail offence and a reverse onus provision in relation to the new offence. Also included in the raft of amendments was a provision removing the internationally accepted principle that, in relation to young offenders, detention should be used as ‘a last resort’. This article argues that these changes are likely to increase the entrenchment of young people within the criminal justice system.
Abstract:
It is now well known that in the extreme quantum limit, dominated by elastic impurity scattering and the concomitant quantum interference, the zero-temperature d.c. resistance of a strictly one-dimensional disordered system is non-additive and non-self-averaging. While these statistical fluctuations may persist in the case of a physically thin wire, they are implicitly and questionably ignored in higher dimensions. In this work, we have re-examined this question. Following an invariant imbedding formulation, we first derive a stochastic differential equation for the complex amplitude reflection coefficient and hence obtain a Fokker-Planck equation for the full probability distribution of resistance for a one-dimensional continuum with a Gaussian white-noise random potential. We then employ the Migdal-Kadanoff type bond-moving procedure and derive the d-dimensional generalization of the above probability distribution, or rather the associated cumulant function, 'the free energy'. For d=3, our analysis shows that the dispersion dominates the mobility-edge phenomena in that (i) a one-parameter β-function depending on the mean conductance only does not exist, and (ii) an approximate treatment gives a diffusion correction involving the second cumulant. It is, however, not clear whether the fluctuations can render the transition at the mobility edge 'first-order'. We also report some analytical results for the one-dimensional system in the presence of a finite electric field. We find a cross-over from exponential to power-law length dependence of resistance as the field increases from zero. Also, the distribution of resistance saturates asymptotically to a Poissonian form. Most of our analytical results are supported by the recent numerical simulation work reported by some authors.
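The non-additive, non-self-averaging character of the 1D resistance can be illustrated with a small Monte Carlo sketch. As a stand-in for the invariant-imbedding treatment described above, this uses the standard random-phase composition law for the dimensionless resistances of two scatterers in series; the scatterer strength and counts are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

def compose(r1, r2, theta):
    """Series composition of two dimensionless resistances with a random
    interference phase theta (the non-additive cross term)."""
    return r1 + r2 + 2*r1*r2 + 2*np.cos(theta)*np.sqrt(r1*r2*(1+r1)*(1+r2))

n_samples, n_scatterers, r0 = 2000, 200, 0.05
rho = np.zeros(n_samples)
for _ in range(n_scatterers):
    theta = rng.uniform(0.0, 2*np.pi, n_samples)
    rho = compose(rho, r0, theta)

# The distribution is so broad that the ensemble mean far exceeds the
# typical (median) resistance: the resistance does not self-average.
print(rho.mean(), np.median(rho))
```

The exponential growth of the mean with the number of scatterers, alongside a much smaller typical value, is the fingerprint of the log-normal-like resistance distribution discussed in the abstract.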
Abstract:
Drop formation at conical tips, which is of relevance to metallurgists, is investigated based on the principle of minimization of free energy using the variational approach. The dimensionless governing equations for drop profiles are solved numerically using the fourth-order Runge-Kutta method. For different cone angles, the theoretical plots of XT and ZT vs their ratio are statistically analyzed, where XT and ZT are the dimensionless x and z coordinates of the drop profile at a plane at the conical tip, perpendicular to the axis of symmetry. Based on the mathematical description of these curves, an absolute method has been proposed for the determination of surface tension of liquids, which is shown to be preferable in comparison with the earlier pendent-drop profile methods.
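A fourth-order Runge-Kutta integration of an axisymmetric drop profile can be sketched as follows. This is not the paper's exact system: it uses the generic dimensionless Young-Laplace equations in arc length (a standard Bashforth-Adams-type form), with an assumed Bond-number-like parameter `beta` and an apex starting condition rather than a conical-tip boundary.

```python
import numpy as np

def rk4_step(f, s, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s, y)
    k2 = f(s + h/2, y + h/2 * k1)
    k3 = f(s + h/2, y + h/2 * k2)
    k4 = f(s + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def young_laplace(s, y, beta=0.5):
    """Dimensionless axisymmetric Young-Laplace profile in arc length s.
    y = (x, z, phi): radial coordinate, height, tangent angle."""
    x, z, phi = y
    sin_term = np.sin(phi) / x if x > 1e-9 else 1.0  # apex limit of sin(phi)/x
    return np.array([np.cos(phi), np.sin(phi), 2.0 + beta * z - sin_term])

s, h = 1e-6, 1e-3
y = np.array([s, 0.5 * s**2, s])  # series expansion of the profile near the apex
profile = [y]
while s < 1.2:
    y = rk4_step(young_laplace, s, y, h)
    s += h
    profile.append(y)
profile = np.array(profile)
print(profile[-1])  # (x, z, phi) at arc length ~1.2
```

Pairs (XT, ZT) analogous to those in the abstract would be read off where the integrated profile crosses the plane of the conical tip.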
Abstract:
In this paper, we present results on water flow past randomly textured hydrophobic surfaces with relatively large surface features of the order of 50 µm. Direct shear stress measurements are made on these surfaces in a channel configuration. The measurements indicate that the flow rates required to maintain a shear stress value vary substantially with water immersion time. At small times after filling the channel with water, the flow rates are up to 30% higher compared with the reference hydrophilic surface. With time, the flow rate gradually decreases and in a few hours reaches a value that is nearly the same as the hydrophilic case. Calculations of the effective slip length indicate that it varies from about 50 µm at small times to nearly zero or “no slip” after a few hours. Large effective slip lengths on such hydrophobic surfaces are known to be caused by trapped air pockets in the crevices of the surface. In order to understand the time-dependent effective slip length, direct visualization of trapped air pockets is made in stationary water using the principle of total internal reflection of light at the water-air interface of the air pockets. These visualizations indicate that the number of bright spots corresponding to the air pockets decreases with time. This type of gradual disappearance of the trapped air pockets is possibly the reason for the decrease in effective slip length with time in the flow experiments. From the practical standpoint of using such surfaces to reduce pressure drop, say in microchannels, this time scale of the order of 1 h for the reduction in slip length is crucial. It would ultimately decide the time over which the surface can usefully provide pressure drop reductions. ©2009 American Institute of Physics
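How a flow-rate enhancement maps to an effective slip length can be sketched with a simplified model: plane Poiseuille flow at fixed pressure drop with equal Navier slip on both walls (the experiment above actually fixes shear stress, and the channel height here is an assumed value, so this is only an order-of-magnitude illustration).

```python
def slip_length_from_flow(Q_ratio, h):
    """Effective slip length b from the flow-rate enhancement Q/Q0 at fixed
    pressure drop, for plane Poiseuille flow with equal slip b on both walls:
        Q/Q0 = 1 + 6*b/h   =>   b = h*(Q/Q0 - 1)/6
    """
    return h * (Q_ratio - 1.0) / 6.0

h = 1.0e-3                            # assumed channel height: 1 mm
b = slip_length_from_flow(1.30, h)    # 30% flow enhancement, as in the abstract
print(f"effective slip length ~ {b*1e6:.0f} um")
```

With these assumed numbers a 30% enhancement corresponds to a slip length of about 50 µm, consistent with the scale reported in the abstract.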
Abstract:
Compared to grain sorghums, sweet sorghums typically have lower grain yield and thick, tall stalks which accumulate high levels of sugar (sucrose, fructose and glucose). Unlike commercial grain sorghum (S. bicolor ssp. bicolor) cultivars, which are usually F1 hybrids, commercial sweet sorghums were selected as wild accessions or have undergone limited plant breeding. Although all sweet sorghums are classified within S. bicolor ssp. bicolor, their genetic relationship with grain sorghums is yet to be investigated. Ninety-five genotypes, including 31 sweet sorghums and 64 grain sorghums, representing all five races within the subspecies bicolor, were screened with 277 polymorphic amplified fragment length polymorphism (AFLP) markers. Cluster analysis separated older sweet sorghum accessions (collected in the mid 1800s) from those developed and released during the early to mid 1900s. These groups were emphasised in a principal component analysis of the results such that sweet sorghum lines were largely distinguished from the others, particularly by a group of markers located on sorghum chromosomes SBI-08 and SBI-10. Other studies have shown that QTL and ESTs for sugar-related traits, as well as for height and anthesis, map to SBI-10. Although the clusters obtained did not group clearly on the basis of racial classification, the sweet sorghum lines often cluster with grain sorghums of similar racial origin, thus suggesting that sweet sorghum is of polyphyletic origin within S. bicolor ssp. bicolor.
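Principal component analysis of a presence/absence marker matrix of the shape used above can be sketched via the singular value decomposition; the random 0/1 matrix here is a hypothetical placeholder for the real 95-genotype x 277-marker AFLP scores.

```python
import numpy as np

rng = np.random.default_rng(2)
# toy 0/1 AFLP marker matrix: 95 genotypes x 277 markers (values hypothetical)
X = rng.integers(0, 2, size=(95, 277)).astype(float)

# principal component analysis via SVD of the column-centred matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                       # genotype coordinates on the PCs
explained = S**2 / (S**2).sum()      # variance fraction per component
# rows of Vt are marker loadings: large-|loading| markers on the leading PCs
# play the role of the discriminating SBI-08/SBI-10 markers in the study
print(scores.shape, explained[:2])
```

Plotting the first two columns of `scores` would give the genotype scatter in which the sweet and grain sorghum groups were distinguished.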
Abstract:
Electromagnetically induced transparency (EIT) experiments in Lambda-type systems benefit from the use of hot vapor where the thermal averaging results in reducing the width of the EIT resonance well below the natural linewidth. Here, we demonstrate a technique for further reducing the EIT width in room-temperature vapor by the application of a small longitudinal magnetic field. The Zeeman shift of the energy levels results in the formation of several shifted subsystems; the net effect is to create multiple EIT dips each of which is significantly narrower than the original resonance. We observe a reduction by a factor of 3 in the D2 line of 87Rb with a field of 3.2 G.
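The spacing of the shifted subsystems can be estimated from the linear Zeeman formula. The Lande factor and Bohr magneton value below are standard textbook numbers for the 87Rb 5S1/2, F=2 ground state, not figures from the abstract; this just sizes the sublevel shifts at the quoted 3.2 G field.

```python
# Linear Zeeman shift of the ground-state magnetic sublevels:
#   delta_nu = g_F * mu_B * B * m_F
MU_B = 1.399624e6        # Bohr magneton in frequency units, Hz per gauss
G_F = 0.5                # Lande factor, 87Rb 5S1/2, F=2 (textbook value)
B = 3.2                  # longitudinal field from the abstract, gauss

shifts = {m_F: G_F * MU_B * B * m_F for m_F in range(-2, 3)}
for m_F, df in shifts.items():
    print(f"m_F={m_F:+d}: {df/1e6:+.2f} MHz")
```

The MHz-scale splitting between adjacent sublevels separates the original resonance into the multiple, individually narrower EIT dips described above.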
Abstract:
This project was a step forward in improving the voltage profile of traditional low voltage distribution networks with high photovoltaic generation or high peak demand. As a practical and economical solution, the developed methods use a Dynamic Voltage Restorer or DVR, which is a series voltage compensator, for continuous and communication-less power quality enhancement. The placement of DVR in the network is optimised in order to minimise its power rating and cost. In addition, new approaches were developed for grid synchronisation and control of DVR which are integrated with the voltage quality improvement algorithm for stable operation.
Abstract:
A rich source of Japanese jurisprudence on sexual equality underlies Japan's emerging law against sexual harassment. With no law specifically outlawing sexual harassment, academics and the courts have invoked the principle of sexual equality to support their conclusion that Japanese law carries an implicit prohibition against acts of sexual harassment. In developing a legal case against sexual harassment, Japanese courts and academic commentators have introduced novel constructions of equality. The key innovations include relational equality, inherent equality and quantifiable equality. In presenting some of these Japanese contributions to equality jurisprudence, the hope is that feminist discourse on equality can take place in a broader context-a context that does not ignore the Eastern cultural experience.
Abstract:
Three drafts of Bos indicus cross steers (initially 178-216 kg) grazed Leucaena-grass pasture [Leucaena leucocephala subspecies glabrata cv. Cunningham with green panic (Panicum maximum cv. trichoglume)] from late winter through to autumn during three consecutive years in the Burnett region of south-east Queensland. Measured daily weight gain (DWGActual) of the steers was generally 0.7-1.1 kg/day during the summer months. Estimated intakes of metabolisable energy and dry matter (DM) were calculated from feeding standards as the intakes required by the steers to grow at the DWGActual. Diet attributes were predicted from near infrared reflectance spectroscopy spectra of faeces (F.NIRS) using established calibration equations appropriate for northern Australian forages. Inclusion of some additional reference samples from cattle consuming Leucaena diets into F.NIRS calibrations based on grass and herbaceous legume-grass pastures improved prediction of the proportion of Leucaena in the diet. Mahalanobis distance values supported the hypothesis that the F.NIRS predictions of diet crude protein concentration and DM digestibility (DMD) were acceptable. F.NIRS indicated that the percentage of Leucaena in the diet varied widely (10-99%). Diet crude protein concentration and DMD were usually high, averaging 12.4 and 62%, respectively, and were related asymptotically to the percentage of Leucaena in the diet (R2 = 0.48 and 0.33, respectively). F.NIRS calibrations for DWG were not satisfactory for predicting this variable from an individual faecal sample, since the standard errors of prediction were 0.33-0.40 kg/day. Cumulative steer liveweight (LW) predicted from F.NIRS DWG calibrations, which had been previously developed with tropical grass and grass-herbaceous legume pastures, greatly overestimated the measured steer LW; therefore, these calibrations were not useful.
Cumulative steer LW predicted from a modified F.NIRS DWG calibration, which included data from the present study, was strongly correlated (R2 = 0.95) with steer LW but overestimated LW by 19-31 kg after 8 months. Additional reference data are needed to develop robust F.NIRS calibrations to encompass the diversity of Leucaena pastures of northern Australia. In conclusion, the experiment demonstrated that F.NIRS could improve understanding of diet quality and nutrient intake of cattle grazing Leucaena-grass pasture, and the relationships between nutrient supply and cattle growth.
Abstract:
Systems level modelling and simulations of biological processes are proving to be invaluable in obtaining a quantitative and dynamic perspective of various aspects of cellular function. In particular, constraint-based analyses of metabolic networks have gained considerable popularity for simulating cellular metabolism, of which flux balance analysis (FBA) is most widely used. Unlike mechanistic simulations that depend on accurate kinetic data, which are scarcely available, FBA is based on the principle of conservation of mass in a network, which utilizes the stoichiometric matrix and a biologically relevant objective function to identify optimal reaction flux distributions. FBA has been used to analyse genome-scale reconstructions of several organisms; it has also been used to analyse the effect of perturbations, such as gene deletions or drug inhibitions, in silico. This article reviews the usefulness of FBA as a tool for gaining biological insights, advances in methodology enabling integration of regulatory information and thermodynamic constraints, and finally addresses the challenges that lie ahead. Various use scenarios and biological insights obtained from FBA, and applications in fields such as metabolic engineering and drug target identification, are also discussed. Genome-scale constraint-based models have an immense potential for building and testing hypotheses, as well as to guide experimentation.
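The FBA formulation described above (maximise a linear objective over fluxes v subject to the steady-state constraint S v = 0 and flux bounds) is a linear program. A minimal sketch on a hypothetical three-reaction toy network, using SciPy's general-purpose LP solver rather than a dedicated FBA package:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy pathway:  (uptake) -> A -> B -> (biomass/export)
#   v0: uptake producing A,  v1: A -> B,  v2: B consumed as "biomass"
# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions v0..v2)
S = np.array([[1.0, -1.0,  0.0],    # metabolite A
              [0.0,  1.0, -1.0]])   # metabolite B
b = np.zeros(2)                     # steady state: S v = 0
c = np.array([0.0, 0.0, -1.0])      # maximise v2 (linprog minimises, so negate)
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=b, bounds=bounds)
print(res.x)  # optimal flux distribution: all flux funnelled through the chain
```

A gene deletion is simulated by forcing the corresponding reaction's bounds to (0, 0) and re-solving, which is how in silico perturbation analyses of the kind reviewed here are typically carried out.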