36 results for FREE-VIBRATION ANALYSIS
Abstract:
The overall operation and internal complexity of a piece of production machinery can be depicted in terms of clusters of multidimensional points that describe the process states, with the value in each dimension representing a measured variable from the machinery. The paper describes a new cluster analysis technique for use with manufacturing processes, illustrating how machine behaviour can be categorised and how regions of good and poor machine behaviour can be identified. The cluster algorithm presented is the novel mean-tracking algorithm, capable of locating N-dimensional clusters in a large data space in which a considerable amount of noise is present. Implementation of the algorithm on a real-world high-speed machinery application is described, with clusters being formed from machinery data to indicate machinery error regions and error-free regions. This analysis provides a promising step forward in the field of multivariable control of manufacturing systems.
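The abstract does not specify the mean-tracking algorithm itself, so the sketch below illustrates the closely related mean-shift idea: a window mean is tracked iteratively toward dense regions of N-dimensional machine-state data, with background noise left outside any cluster. The window radius and the synthetic data are assumptions made purely for the example.

```python
# Minimal mean-shift-style cluster search over N-dimensional machine-state points.
# This is an illustrative sketch, not the paper's mean-tracking algorithm:
# the window radius and the synthetic noisy data are assumptions.
import numpy as np

def track_mode(points, start, radius=1.0, tol=1e-4, max_iter=100):
    """Iteratively move a window mean toward a local density peak."""
    centre = start.copy()
    for _ in range(max_iter):
        members = points[np.linalg.norm(points - centre, axis=1) < radius]
        if len(members) == 0:
            break
        new_centre = members.mean(axis=0)
        if np.linalg.norm(new_centre - centre) < tol:
            break
        centre = new_centre
    return centre

rng = np.random.default_rng(0)
# Two dense clusters of "machine states" plus uniform background noise.
data = np.vstack([
    rng.normal([0, 0, 0], 0.3, size=(200, 3)),   # e.g. error-free operating region
    rng.normal([4, 4, 4], 0.3, size=(200, 3)),   # e.g. error-prone operating region
    rng.uniform(-2, 6, size=(100, 3)),           # measurement noise
])

# Start a tracker from several random seeds and collect the modes it converges to.
modes = np.array([track_mode(data, data[i]) for i in rng.choice(len(data), 20)])
print(np.round(modes, 1))
```

Seeds near a dense region converge on its centre (e.g. an error-free or an error-prone operating regime), while seeds in sparse noise barely move, so repeated runs separate genuine clusters from background.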
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case, it is capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS as well as characterising its time complexity. The main emphasis of the work, however, is on the resource allocation aspect of Stochastic Diffusion Search operations. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest Urn Model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. This model is further generalised in order to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. An approximate solution in the case of two alternative solutions is also proposed and compared with the predictions of the extended Ehrenfest Urn Model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS that would strike a different balance between these two modes of search space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, ‘context-free’ and ‘context-sensitive’ SDS, and their properties were analysed with respect to resource allocation. It appeared that they shared some of the desired features of their predecessor but also possessed some properties not present in the classic SDS. The theory developed in the thesis was illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain enabling careful control of search conditions.
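As a concrete illustration of the best-fit string search used as the running example in the thesis, here is a minimal sketch of standard SDS; the population size, iteration count and example strings are assumptions made for the illustration.

```python
# Minimal sketch of standard Stochastic Diffusion Search (SDS) for best-fit
# substring matching. Population size, iteration count and the example
# strings are illustrative assumptions.
import random

def sds_best_fit(model, search_space, n_agents=100, n_iterations=200, seed=1):
    random.seed(seed)
    max_pos = len(search_space) - len(model)
    hypotheses = [random.randint(0, max_pos) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iterations):
        # Test phase: each agent checks one randomly chosen component of the model.
        for i, h in enumerate(hypotheses):
            j = random.randrange(len(model))
            active[i] = (search_space[h + j] == model[j])
        # Diffusion phase: inactive agents poll a random agent and either copy
        # its hypothesis (if active) or re-seed with a random hypothesis.
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hypotheses[i] = hypotheses[k] if active[k] else random.randint(0, max_pos)

    # The largest cluster of agents indicates the best-fit position.
    return max(set(hypotheses), key=hypotheses.count)

text = "xxxyyzzhelXoworldzzxxyhellowxrldyy"
print(sds_best_fit("helloworld", text))  # converges on one of the two near-matches (indices 7 and 22)
```

With two equally good near-matches in the search space, as here, the agent population eventually concentrates on one of them, which is the situation analysed by the two-solutions extension of the Ehrenfest Urn model.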
Abstract:
Recent laboratory observations and advances in theoretical quantum chemistry allow a reappraisal of the fundamental mechanisms that determine the water vapour self-continuum absorption throughout the infrared and millimetre wave spectral regions. Starting from a framework that partitions bimolecular interactions between water molecules into free-pair states, true bound dimers and quasi-bound dimers, we present a critical review of recent observations, continuum models and theoretical predictions. In the near-infrared bands of the water monomer, we propose that spectral features in recent laboratory-derived self-continuum spectra can be well explained as being due to a combination of true bound and quasi-bound dimers, when the spectrum of quasi-bound dimers is approximated as double the broadened spectrum of the water monomer. Such a representation can explain both the wavenumber variation and the temperature dependence. Recent observations of the self-continuum absorption in the windows between these near-infrared bands indicate that widely used continuum models can underestimate the true strength by around an order of magnitude. An existing far-wing model does not appear able to explain the discrepancy, and although a dimer explanation is possible, currently available observations do not allow a compelling case to be made. In the 8–12 micron window, recent observations indicate that modern continuum models do not properly represent the temperature dependence, the wavelength variation, or both. The temperature dependence is suggestive of a transition from the dominance of true bound dimers at lower temperatures to quasi-bound dimers at higher temperatures. In the mid- and far-infrared spectral region, recent theoretical calculations indicate that true bound dimers may explain at least 20–40% of the observed self-continuum. The possibility that quasi-bound dimers could contribute a further amount of the same size is discussed. Most recent theoretical considerations agree that water dimers are likely to be the dominant contributor to the self-continuum in the mm-wave spectral range.
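The working approximation for the near-infrared bands described above can be summarised compactly; the symbols below (C for the continuum contributions, k for the broadened monomer spectrum) are our notation for the purposes of illustration, not necessarily the paper's.

```latex
% Partition of the self-continuum into bimolecular contributions, with the
% near-infrared working approximation for quasi-bound dimers described above.
\[
  C_{\mathrm{self}}(\nu, T) \;=\; C_{\mathrm{bound}}(\nu, T)
    \;+\; C_{\mathrm{quasi}}(\nu, T) \;+\; C_{\mathrm{free}}(\nu, T),
  \qquad
  C_{\mathrm{quasi}}(\nu, T) \;\approx\; 2\, k^{\mathrm{broadened}}_{\mathrm{mono}}(\nu, T).
\]
```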
Abstract:
Root-knot nematodes (Meloidogyne spp.) are the most significant plant-parasitic nematodes that damage many crops all over the world. The free-living second-stage juvenile (J2) is the infective stage that enters plants. The J2s move in soil water films to reach the root zone. The bacterium Pasteuria penetrans is an obligate parasite of root-knot nematodes; it is cosmopolitan, frequently encountered across many climates and environmental conditions, and is considered promising for the control of Meloidogyne spp. The infection potential of P. penetrans against nematodes is well studied, but the effects of spore attachment on the movement of root-knot nematode juveniles are not. Image analysis techniques were therefore used to characterize the movement of individual juveniles with or without P. penetrans spores attached to their cuticles. The methods include the study of nematode locomotion based on (a) the centroid body point, (b) shape analysis and (c) image stack analysis. All methods showed that individual J2s without attached P. penetrans spores have a sinusoidal forward movement, in contrast to those encumbered with spores. From these separate analytical studies of encumbered and unencumbered nematodes, it was possible to demonstrate how the presence of P. penetrans spores on a nematode body disrupted the normal movement of the nematode.
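The centroid-based part of such an analysis can be sketched as follows; the synthetic binary image stack and the simple path-straightness measure are assumptions made for illustration, not the paper's exact data or metrics.

```python
# Sketch of centroid-point tracking over a stack of segmented (binary) frames,
# plus a simple path-straightness measure. The synthetic frames and the
# straightness metric are illustrative assumptions.
import numpy as np

def centroid(frame):
    """Centroid (row, col) of the foreground pixels in a binary frame."""
    rows, cols = np.nonzero(frame)
    return np.array([rows.mean(), cols.mean()])

def straightness(path):
    """Net displacement divided by total path length (1.0 = straight line)."""
    steps = np.diff(path, axis=0)
    total = np.linalg.norm(steps, axis=1).sum()
    net = np.linalg.norm(path[-1] - path[0])
    return net / total if total > 0 else 0.0

# Synthetic stack: a small blob advancing along a sinusoidal track,
# standing in for a segmented juvenile moving in a water film.
frames = []
for t in range(30):
    frame = np.zeros((64, 64), dtype=np.uint8)
    r, c = 32 + int(6 * np.sin(t / 3.0)), 5 + t
    frame[r - 1:r + 2, c - 1:c + 2] = 1
    frames.append(frame)

path = np.array([centroid(f) for f in frames])
print("straightness index:", round(straightness(path), 3))
```

A spore-encumbered individual whose sinusoidal progression is disrupted would show a shorter net displacement for the same path length, i.e. a lower straightness index.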
Abstract:
In basic network transactions, a datagram travelling from source to destination is routed through numerous routers and paths, depending on which free and uncongested paths are available; the resulting transmission route can become too long, incurring greater delay, jitter and congestion and reduced throughput. One of the major problems of packet-switched networks is cell delay variation, or jitter. This cell delay variation is due to queuing delay, which depends on the applied loading conditions. The accumulation of delay and jitter over the nodes along a transmission route, together with dropped packets, adds further complexity for multimedia traffic, because there is no guarantee that each traffic stream will be delivered according to its own jitter constraints; there is therefore a need to analyze the effects of jitter. IP routing forwards all packets along a single path. Multi-Protocol Label Switching (MPLS), on the other hand, separates packet forwarding from routing, enabling packets to use appropriate routes and allowing the behavior of transmission paths to be optimized and controlled, thus correcting some of the shortfalls associated with IP routing. MPLS has therefore been utilized in the analysis of effective transmission through the various networks. This paper analyzes the effect of delay, congestion, interference, jitter and packet loss on the transmission of signals from source to destination. The impact of link failures and repair paths in the various physical topologies, namely bus, star, mesh and hybrid, is analyzed under standard network conditions.
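One concrete way to quantify the jitter being analyzed is the running interarrival-jitter estimator used for RTP (RFC 3550); the sketch below applies it to a list of send/receive timestamps, which are assumed example values rather than measurements from the paper.

```python
# Interarrival jitter estimated with the RFC 3550 running filter:
# J += (|D(i-1, i)| - J) / 16, where D is the difference in one-way transit
# times of consecutive packets. The timestamps below are assumed example values.

def interarrival_jitter(send_times, recv_times):
    transit = [r - s for s, r in zip(send_times, recv_times)]
    jitter = 0.0
    for prev, curr in zip(transit, transit[1:]):
        d = abs(curr - prev)
        jitter += (d - jitter) / 16.0
    return jitter

send = [0.00, 0.02, 0.04, 0.06, 0.08, 0.10]          # seconds
recv = [0.031, 0.052, 0.079, 0.088, 0.121, 0.139]    # seconds, variable queuing delay
print(f"mean delay  : {sum(r - s for s, r in zip(send, recv)) / len(send):.4f} s")
print(f"jitter (est): {interarrival_jitter(send, recv):.4f} s")
```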
Abstract:
The interactions between shear-free turbulence in two regions (denoted as + and − on either side of a nearly flat horizontal interface) are shown here to be controlled by several mechanisms, which depend on the magnitudes of the ratios of the densities, ρ+/ρ−, and kinematic viscosities of the fluids, ν+/ν−, and the root mean square (r.m.s.) velocities of the turbulence, u0+/u0−, above and below the interface. This study focuses on gas–liquid interfaces, so that ρ+/ρ− ≪ 1, and on cases where turbulence is generated either above or below the interface, so that u0+/u0− is either very large or very small. It is assumed that vertical buoyancy forces across the interface are much larger than inertial forces, so that the interface is nearly flat, and coupling between turbulence on either side of the interface is determined by viscous stresses. A formal linearized rapid-distortion analysis with viscous effects is developed by extending the previous study by Hunt & Graham (J. Fluid Mech., vol. 84, 1978, pp. 209–235) of shear-free turbulence near rigid plane boundaries. The physical processes accounted for in our model include both the blocking effect of the interface on normal components of the turbulence and the viscous coupling of the horizontal field across thin interfacial viscous boundary layers. The horizontal divergence in the perturbation velocity field in the viscous layer drives weak inviscid irrotational velocity fluctuations outside the viscous boundary layers, in a mechanism analogous to Ekman pumping. The analysis shows the following. (i) The blocking effects are similar to those near rigid boundaries on each side of the interface, but through the action of the thin viscous layers above and below the interface, the horizontal and vertical velocity components differ from those near a rigid surface and are correlated or anti-correlated, respectively. (ii) Because of the growth of the viscous layers on either side of the interface, the ratio uI/u0, where uI is the r.m.s. of the interfacial velocity fluctuations and u0 the r.m.s. of the homogeneous turbulence far from the interface, does not vary with time. If the turbulence is driven in the lower layer with ρ+/ρ− ≪ 1 and u0+/u0− ≪ 1, then uI/u0− ~ 1 when Re (= u0−L−/ν−) ≫ 1 and R = (ρ−/ρ+)(ν−/ν+)^(1/2) ≫ 1. If the turbulence is driven in the upper layer with ρ+/ρ− ≪ 1 and u0+/u0− ≫ 1, then uI/u0+ ~ 1/(1 + R). (iii) Nonlinear effects become significant over periods greater than Lagrangian time scales. When turbulence is generated in the lower layer, and the Reynolds number is high enough, motions in the upper viscous layer are turbulent. The horizontal vorticity tends to decrease, and the vertical vorticity of the eddies dominates their asymptotic structure. When turbulence is generated in the upper layer, and the Reynolds number is less than about 10^6–10^7, the fluctuations in the viscous layer do not become turbulent. Nonlinear processes at the interface increase the ratio uI/u0+ for sheared or shear-free turbulence in the gas above its linear value of uI/u0+ ~ 1/(1 + R) to (ρ+/ρ−)^(1/2) ~ 1/30 for air–water interfaces. This estimate agrees with the direct numerical simulation results of Lombardi, De Angelis & Banerjee (Phys. Fluids, vol. 8, no. 6, 1996, pp. 1643–1665). Because the linear viscous–inertial coupling mechanism is still significant, the eddy motions on either side of the interface have a similar horizontal structure, although their vertical structure differs.
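For reference, the interfacial-velocity scalings quoted in (ii) above can be collected in one place; the notation follows the abstract.

```latex
% Interfacial r.m.s. velocity scalings quoted in the abstract.
\[
  \frac{u_I}{u_{0-}} \sim 1
  \quad \text{(turbulence driven in the lower, denser fluid; } \mathrm{Re} \gg 1,\ R \gg 1),
  \qquad
  \frac{u_I}{u_{0+}} \sim \frac{1}{1+R}
  \quad \text{(turbulence driven in the upper fluid),}
\]
where $R = (\rho_-/\rho_+)\,(\nu_-/\nu_+)^{1/2}$ and $\mathrm{Re} = u_{0-}L_-/\nu_-$.
```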
Abstract:
By eliminating the short-range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong-coupling regime. Predicted electrostatic free energies agree to within 5% with simulation data for typical Coulomb interactions up to 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross-correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
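The core idea of removing the short-range negative region of the linearized Debye–Hückel pair distribution while keeping the exponential screening can be illustrated numerically. The sketch below is only a caricature for the simplest one-component point-ion case (the paper's theory is self-consistent and multicomponent); the reduced units, grid and coupling values are assumptions made for the example.

```python
# Numerical sketch for a one-component point-ion plasma in reduced units
# (lengths in Wigner-Seitz radii, energies in kT, coupling Gamma).
# The linearized Debye-Hueckel g(r) goes negative at short range; clipping it
# at zero while keeping the exponential screening removes the unphysical part.
# Grid, coupling values and the naive clipping itself are illustrative assumptions.
import numpy as np

def excess_energy(gamma, clip, x_max=60.0, n=300_000):
    """Excess electrostatic energy per particle (units of kT) from a pair model."""
    x = np.linspace(1e-6, x_max, n)                  # r / a, a = Wigner-Seitz radius
    dx = x[1] - x[0]
    kappa = np.sqrt(3.0 * gamma)                     # screening wavenumber in units of 1/a
    g = 1.0 - (gamma / x) * np.exp(-kappa * x)       # linearized Debye-Hueckel g(r)
    if clip:
        g = np.maximum(g, 0.0)                       # remove the negative short-range part
    h = g - 1.0
    # u / (N kT) = (3 * Gamma / 2) * integral_0^inf x h(x) dx
    return 1.5 * gamma * np.sum(x * h) * dx

for gamma in (1.0, 10.0):
    print(f"Gamma = {gamma:4.1f}   pure DH: {excess_energy(gamma, clip=False):8.3f}"
          f"   clipped DH: {excess_energy(gamma, clip=True):8.3f}")
# Pure DH reproduces u/(NkT) = -(sqrt(3)/2) * Gamma**1.5, which badly overestimates
# the magnitude at strong coupling; clipping keeps the screening but tempers the
# short-range contribution.
```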
Abstract:
Bran is hygroscopic and competes actively for water with other key components in baked cereal products, such as starch and gluten. Thermogravimetric analysis (TGA) of flour–water mixtures enriched with bran at different incorporation levels was performed to characterise the release of compartmentalised water. The TGA investigations showed that the presence of bran increased compartmentalised water, with total water loss increasing from 58.30 ± 1.93% for flour-only systems to 71.80 ± 0.37% in formulations comprising 25% w/w bran. Deconvolution of the TGA profiles showed an alteration of the distribution of free and bound water, and of its interaction with starch and gluten, within the formulations. The TGA profiles also showed that water release from bran-enriched flour is prolonged relative to the release from non-enriched flour, suggesting that bran may interrupt the normal characteristic processes of texture formation that occur in non-enriched products.
Abstract:
Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential-family problem before applying them to a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
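The evidence-estimation idea can be illustrated with the simplest likelihood-free scheme, rejection ABC, where each model's evidence is approximated by its prior-predictive acceptance rate. The toy models, priors, summary statistic and tolerance below are assumptions, and the paper's actual algorithms build on MCMC/SMC rather than plain rejection sampling.

```python
# Rejection-ABC sketch of likelihood-free model comparison: approximate each
# model's evidence by the fraction of prior-predictive simulations whose summary
# statistic falls within a tolerance of the observed one. Toy models, priors and
# tolerance are illustrative assumptions; the paper builds on MCMC/SMC samplers.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.exponential(scale=2.0, size=50)   # stand-in for real data
s_obs = observed.mean()                          # summary statistic

def evidence_m1(n_sims=100_000, eps=0.05):
    """Model 1: exponential likelihood, Exp(1) prior on the scale."""
    hits = 0
    for _ in range(n_sims):
        scale = rng.exponential(1.0)                       # draw from the prior
        s_sim = rng.exponential(scale, size=50).mean()     # simulate a dataset
        hits += abs(s_sim - s_obs) < eps
    return hits / n_sims

def evidence_m2(n_sims=100_000, eps=0.05):
    """Model 2: normal likelihood with unit variance, N(0, 3^2) prior on the mean."""
    hits = 0
    for _ in range(n_sims):
        mu = rng.normal(0.0, 3.0)
        s_sim = rng.normal(mu, 1.0, size=50).mean()
        hits += abs(s_sim - s_obs) < eps
    return hits / n_sims

z1, z2 = evidence_m1(), evidence_m2()
print(f"evidence M1 ~ {z1:.4f}, evidence M2 ~ {z2:.4f}, Bayes factor ~ {z1 / z2:.2f}")
```

Because each evidence is estimated independently, no sampler needs to jump between models; the ratio of the two acceptance rates gives an approximate Bayes factor for the same summary and tolerance.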
Abstract:
The prevalence of Escherichia coli O157:H7 infection in birds is low, but several deliberate inoculation studies show that poultry are readily and persistently infected by this organism, indicating a possible threat to public health. The mechanisms of colonisation of poultry are not understood, and the aim was to establish models to study the interaction of E. coli O157:H7 with poultry at the cellular and whole-animal levels. A non-toxigenic E. coli O157:H7 strain (NCTC 12900) was used in adherence assays with an avian epithelial cell line (Div-1) and used to inoculate 1-day-old SPF chicks. In vitro, NCTC 12900 induced micro-colonies associated with cytoskeletal rearrangements and pedestal formation with intimate bacterial attachment. In the 1-day-old SPF chick, a dose of 1 × 10^5 cfu resulted in rapid and extensive colonisation of the gastrointestinal tract and transient colonisation of the liver and spleen. The number of E. coli O157:H7 organisms reached approximately 10^8 cfu/ml of caecal homogenate 24 h after inoculation, and approximately 10^7 cfu/ml of caecal homogenate was still present at day 92. Faecal shedding persisted for 169 days, ceasing 9 days after the birds came into lay, and 6% of eggs were contaminated on the eggshell. Histological analysis of tissue samples from birds dosed with 1 × 10^7 cfu gave evidence for E. coli O157:H7 NCTC 12900-induced micro-colonies on the caecal mucosa, although evidence for attaching and effacing lesions was equivocal. These models may be suitable for studying the factors of E. coli O157:H7 that mediate persistent colonisation in avian species.
Abstract:
Shiga toxin (Stx)-positive Escherichia coli O157:H7 readily colonize and persist in specific-pathogen-free (SPF) chicks, and we have shown that an Stx-negative E. coli O157:H7 isolate (NCTC12900) readily colonizes SPF chicks for up to 169 days after oral inoculation at 1 day of age. However, the role of intimin in the persistent colonization of poultry remains unclear. Thus, to investigate the roles of intimin and flagella, the latter being a known factor in the persistence of non-O157 E. coli in poultry, isogenic single (intimin or aflagellar) and double (intimin-aflagellar) mutants were constructed in E. coli O157:H7 isolate NCTC12900. These mutants were used to inoculate (10^5 CFU) 1-day-old SPF chicks. In general, significant attenuation of the aflagellate and intimin-aflagellate mutants, but not the intimin mutant, was noted at similar time points between 22 and 92 days after inoculation. The intimin-deficient mutant was still being shed at the end of the experiment, 211 days after inoculation, 84 days longer than the wild type. Shedding of the aflagellar and intimin-aflagellar mutants ceased 99 and 113 days after inoculation, respectively. Histological analysis of gastrointestinal tissues from inoculated birds gave no evidence for true microcolony formation on epithelial cells by NCTC12900 or the intimin and aflagellar mutants. However, NCTC12900 mutant derivatives associated with the mucosa were observed as individual cells and/or as large aggregates. Association with luminal contents was also noted. These data suggest that O157 organisms do not require intimin for the persistent colonization of chickens, whereas flagella do play a role in this process.
Abstract:
Acrylamide forms during cooking and processing, predominantly from the reaction of free asparagine and reducing sugars in the Maillard reaction. The identification of crop varieties that are low in free asparagine and reducing sugars is therefore an important target. In this study, nine varieties of potato (French fry varieties Maris Piper (from two suppliers), Pentland Dell, King Edward, Daisy, and Markies; and chipping varieties Lady Claire, Lady Rosetta, Saturna, and Hermes) grown in the United Kingdom in 2009 were analyzed at monthly intervals through storage from November 2009 to July 2010. Acrylamide formation was measured in heated flour and in chips fried in oil. Analysis of variance revealed significant interactions between varieties nested within type (French fry and chipping) and storage time for most free amino acids, glucose, fructose, and acrylamide formation. Acrylamide formed in chips correlated significantly with acrylamide formed in flour and with chip color. There were significant correlations between glucose or total reducing sugar concentration and acrylamide formation in both variety types, but with fructose the correlation was much stronger for chipping than for French fry varieties. Conversely, there were significant correlations with acrylamide formation for both total free amino acid and free asparagine concentration in the French fry but not the chipping varieties. The study showed the potential of variety selection for preventing unacceptable levels of acrylamide formation in potato products and the variety-dependent effect of long-term storage on acrylamide risk. It also highlighted the complex relationship between precursor concentration and acrylamide risk in potatoes.
Abstract:
Objective: To describe the training undertaken by pharmacists employed in a pharmacist-led, information technology-based intervention study to reduce medication errors in primary care (PINCER Trial), to evaluate the pharmacists’ assessment of the training, and to assess the time implications of undertaking the training. Methods: Six pharmacists received training, which included training on root cause analysis and educational outreach, to enable them to deliver the PINCER Trial intervention. This was evaluated using self-report questionnaires at the end of each training session. The time taken to complete each session was recorded. Data from the evaluation forms were entered onto a Microsoft Excel spreadsheet, independently checked, and the summary of results further verified. Frequencies were calculated for responses to the three-point Likert scale questions. Free-text comments from the evaluation forms and pharmacists’ diaries were analysed thematically. Key findings: All six pharmacists received 22 hours of training over five sessions. In four out of the five sessions, the pharmacists who completed an evaluation form (27 out of 30 were completed) stated they were satisfied or very satisfied with the various elements of the training package. Analysis of the free-text comments and the pharmacists’ diaries showed that the principles of root cause analysis and educational outreach were viewed as useful tools to help pharmacists conduct pharmaceutical interventions, both in the study and in other pharmacy roles that they undertook. The opportunity to undertake role play was a valuable part of the training received. Conclusions: The findings presented in this paper suggest that providing the PINCER pharmacists with training in root cause analysis and educational outreach contributed to the successful delivery of the PINCER interventions and could potentially be utilised by other pharmacists based in general practice to deliver pharmaceutical interventions to improve patient safety.
Abstract:
We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects’ average risk taking and their sensitivity towards variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency towards risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over- (under-)weighting of small (large) probabilities predicted by prospect theory (PT); and gender differences, i.e. males being consistently less risk averse than females but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort, which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635). Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity towards variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects levels. We find that in every treatment over 70% of choices show some degree of risk aversion and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking, that is, in all treatments females prefer safer lotteries compared to males. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature for stake-size effects (e.g., Binswanger, 1980; Antoni Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999;
B. J. Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For the small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is less clear. At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk-return is affected more by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Being faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even future, more complex descriptions of human attitudes towards risk.
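As an illustration of how two dimensions of risk attitude might be extracted from panel choices, here is a minimal principal-component sketch on synthetic data; the data-generating assumptions (number of subjects, panels and the coding of choices) and the interpretation of the components are ours, not the SGG task's actual design.

```python
# Minimal PCA sketch: extract two candidate dimensions of risk attitude
# (overall risk taking and sensitivity to the risk-return gradient) from a
# subjects-by-panels matrix of chosen-lottery indices. The synthetic data
# generation and the reading of the components are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_panels = 300, 4
risk_premium = np.linspace(0.0, 1.5, n_panels)       # higher = better reward for risk

# Each subject has a baseline risk appetite and a sensitivity to the premium;
# the chosen option index (0 = safest ... 5 = riskiest) reflects both, plus noise.
baseline = rng.normal(2.0, 1.0, size=n_subjects)
sensitivity = rng.normal(1.0, 0.5, size=n_subjects)
choices = (baseline[:, None] + sensitivity[:, None] * risk_premium[None, :]
           + rng.normal(0.0, 0.5, size=(n_subjects, n_panels)))
choices = np.clip(np.round(choices), 0, 5)

# PCA via SVD on the centred choice matrix.
centred = choices - choices.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by first two components:", np.round(explained[:2], 3))
print("component 1 loadings (roughly uniform -> average risk taking):", np.round(vt[0], 2))
print("component 2 loadings (trend across panels -> premium sensitivity):", np.round(vt[1], 2))
```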
Abstract:
Choices not only reflect our preferences, but they also affect our behavior. The phenomenon of choice-induced preference change has been of interest to cognitive dissonance researchers in social psychology, and more recently it has attracted the attention of researchers in economics and neuroscience. Preference modulation after the mere act of making a choice has been repeatedly demonstrated over the last 50 years by an experimental paradigm called the “free-choice paradigm.” However, Chen and Risen (2010) pointed out a serious methodological flaw in this paradigm, arguing that evidence for choice-induced preference change is still insufficient. Despite the flaw, studies using the traditional free-choice paradigm continue to be published without addressing the criticism. Here, aiming to draw more attention to this issue, we briefly explain the methodological problem and then describe simple simulation studies that illustrate how the free-choice paradigm produces a systematic pattern of preference change consistent with cognitive dissonance, even without any change in true preference. Our simulations also show how a different level of noise in each phase of the free-choice paradigm independently contributes to the magnitude of the artificial preference change. Furthermore, we review ways of addressing the critique and provide a meta-analysis showing the effect size of choice-induced preference change after the critique has been addressed. Finally, based on the results of the simulation studies, we review and discuss how the criticism affects our interpretation of past findings generated from the free-choice paradigm. We conclude that the use of the conventional free-choice paradigm should be avoided in future research and that the validity of past findings from studies using this paradigm should be empirically re-established.
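The artifact described above can be reproduced with a few lines of simulation: even when true preferences never change, pairing items that received similar noisy phase-1 ratings and then letting the choice follow true preference yields an apparent "spread" in the phase-3 ratings. The rating scale, noise levels, pairing threshold and number of pairs below are assumptions, and this is one simple spread measure among several.

```python
# Sketch of the selection artifact in the rate-choose-rate free-choice paradigm:
# with completely stable true preferences, noisy ratings plus choices driven by
# true preference still produce an apparent "spread" (chosen items re-rated
# higher, rejected items lower). Scale, noise levels and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, rating_noise = 200_000, 1.0

# True preferences are fixed: they never change between phases.
true_a = rng.normal(5.0, 1.0, n_pairs)
true_b = rng.normal(5.0, 1.0, n_pairs)

# Phase 1: noisy ratings.
rate1_a = true_a + rng.normal(0.0, rating_noise, n_pairs)
rate1_b = true_b + rng.normal(0.0, rating_noise, n_pairs)

# Phase 2: as in the classic paradigm, choices are offered between items that
# were rated similarly in phase 1; the choice itself follows true preference.
close = np.abs(rate1_a - rate1_b) < 0.25
chose_a = true_a > true_b

# Phase 3: items are re-rated with fresh noise; true preferences are unchanged.
rate2_a = true_a + rng.normal(0.0, rating_noise, n_pairs)
rate2_b = true_b + rng.normal(0.0, rating_noise, n_pairs)

# "Spread" = (rating change of chosen item) - (rating change of rejected item).
delta_a, delta_b = rate2_a - rate1_a, rate2_b - rate1_b
spread = np.where(chose_a, delta_a - delta_b, delta_b - delta_a)[close]
print(f"mean spread with zero true preference change: {spread.mean():.2f}")
```

The positive spread arises purely from regression to the mean among the selected close-rated pairs, which is the statistical point raised by Chen and Risen (2010); increasing or decreasing the rating noise in each phase changes the size of this artificial effect.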