975 results for Coarse-graining
Abstract:
Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP synthesize fuzzy logic and ART networks by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. Complement coding preserves individual feature amplitudes while normalizing the input vector, and prevents a potential category proliferation problem. Adaptive weights start equal to one and can only decrease in time. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. A matching criterion controls search, determining how close an input and a learned representation must be for a category to accept the input as a new exemplar. A vigilance parameter (ρ) sets the matching criterion and determines how finely or coarsely an ART system will partition inputs. High vigilance creates fine categories, represented by small boxes. Learning stops when boxes cover the input space. With fast learning, fixed vigilance, and an arbitrary input set, learning stabilizes after just one presentation of each input. A fast-commit slow-recode option allows rapid learning of rare events yet buffers memories against recoding by noisy inputs. Fuzzy ARTMAP unites two fuzzy ART networks to solve supervised learning and prediction problems. A Minimax Learning Rule controls ARTMAP category structure, conjointly minimizing predictive error and maximizing code compression. Low vigilance maximizes compression but may therefore cause very different inputs to make the same prediction. When this coarse grouping strategy causes a predictive error, an internal match-tracking control process increases vigilance just enough to correct the error. ARTMAP automatically constructs a minimal number of recognition categories, or "hidden units," to meet accuracy criteria. An ARTMAP voting strategy improves prediction by training the system several times using different orderings of the input set. Voting assigns confidence estimates to competing predictions given small, noisy, or incomplete training sets. ARPA benchmark simulations illustrate fuzzy ARTMAP dynamics. The chapter also compares fuzzy ARTMAP to Salzberg's Nested Generalized Exemplar (NGE) and to Simpson's Fuzzy Min-Max Classifier (FMMC), and concludes with a summary of ART and ARTMAP applications.
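As a rough illustration of the mechanics summarized above, the following is a minimal sketch of complement coding, the fuzzy (component-wise minimum) match criterion, and the weight update; the choice parameter alpha, the learning rate beta, and the function names are illustrative assumptions rather than values taken from the chapter.

```python
import numpy as np

def complement_code(a):
    """Complement coding: concatenate a with (1 - a), preserving feature
    amplitudes while normalizing the city-block norm of the input."""
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

def fuzzy_and(x, w):
    """Fuzzy intersection: component-wise minimum."""
    return np.minimum(x, w)

def match_and_choice(x, w, alpha=0.001):
    """Match value |x ^ w| / |x| (compared to vigilance rho) and
    choice value |x ^ w| / (alpha + |w|)."""
    xw = fuzzy_and(x, w).sum()
    return xw / x.sum(), xw / (alpha + w.sum())

# Example: one input and one category whose weights start at 1 and only shrink.
x = complement_code([0.2, 0.8])
w = np.ones_like(x)                 # uncommitted category
match, choice = match_and_choice(x, w)
rho = 0.7                           # vigilance: higher -> finer categories
if match >= rho:
    beta = 1.0                      # fast learning
    w = (1 - beta) * w + beta * fuzzy_and(x, w)   # weights can only decrease
print(match, choice, w)
```

With fast learning (beta = 1) the weights jump straight to the fuzzy intersection, which can only shrink them, consistent with the geometric picture of category boxes that grow as weights decrease.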
Abstract:
An important component of this Ph.D. thesis was to determine European consumers' views on processed meats and bioactive compounds. A survey therefore gathered information from over 500 respondents and explored their perceptions of the healthiness and purchasability of both traditional and functional processed meats. This study found that consumers were distrustful of processed meat, especially its high salt and fat content. Consumers were found to be very positive towards bioactive compounds in yogurt-style products, but unsure how they felt about them in meat-based products, most likely owing to a lack of familiarity with such products. The work in this thesis also centred on the acceptable reduction of salt and fat as judged by consumer sensory analysis. The products chosen ranged in degree of comminution, from a coarse beef patty to finer emulsion-style breakfast sausages and frankfurters. A full factorial design was implemented, producing twenty beef patty formulations with varying concentrations of fat (30%, 40%, 50%, 60% w/w) and salt (0.5%, 0.75%, 1.0%, 1.25%, 1.5% w/w). Twenty-eight sausage formulations were also produced with varying concentrations of fat (22.5%, 27.5%, 32.5%, 37.5% w/w) and salt (0.8%, 1%, 1.2%, 1.4%, 1.6%, 2%, 2.4% w/w). Finally, twenty different frankfurter formulations were produced with varying concentrations of fat (10%, 15%, 20%, 25% w/w) and salt (1%, 1.5%, 2%, 2.5%, 3% w/w). From these products it was found that the most acceptable beef patty to consumers was that containing 40% fat and 1% salt. This represents a 20% decrease in fat and a 50% decrease in salt compared with commercial patties available in Ireland and the UK. For sausages, salt-reduced products were rated by consumers as paler in colour, more tender and with greater meat flavour than products containing more salt. The sausages containing 1.4% and 1.0% salt were found to be significantly (P<0.01) more acceptable to consumers than other salt levels. Frankfurter salt levels below 1.5% were shown to have a negative effect on consumer acceptability, with the 2.5% salt concentration being the most accepted (P<0.001) by consumers. Samples containing less fat and salt were found to be tougher, less juicy and to have greater cooking losses. Salt perception is therefore very important for consumer acceptability, but fat levels can potentially be reduced without significantly affecting overall acceptability. Overall, the consumer acceptability of salt- and fat-reduced processed meats depends very much on the product, and generalisations cannot be assumed. The study of bioactives in processed meat products found that the reduced salt/fat patties fortified with CoQ10 were rated as more acceptable than commercially available beef patties. The reduced-fat and -salt, CoQ10-fortified sausages compared quite well with their commercial counterparts for overall acceptability, whereas commercial frankfurters were favoured over the reduced-fat, CoQ10-fortified frankfurters.
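For reference, the three full factorial designs described above can be reproduced by crossing the stated fat and salt levels; a minimal sketch follows (it only enumerates formulations and says nothing about the sensory protocol).

```python
from itertools import product

# Fat (% w/w) and salt (% w/w) levels taken from the abstract above.
designs = {
    "beef patty":  ([30, 40, 50, 60],         [0.5, 0.75, 1.0, 1.25, 1.5]),
    "sausage":     ([22.5, 27.5, 32.5, 37.5], [0.8, 1.0, 1.2, 1.4, 1.6, 2.0, 2.4]),
    "frankfurter": ([10, 15, 20, 25],         [1.0, 1.5, 2.0, 2.5, 3.0]),
}

for name, (fat_levels, salt_levels) in designs.items():
    formulations = list(product(fat_levels, salt_levels))   # full factorial cross
    print(f"{name}: {len(formulations)} formulations")      # 20, 28, 20
```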
Abstract:
The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is sufficiently unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automatic neurological event detection performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art, automated, epileptiform activity detection systems and (iii) false detections in state-of-the-art, automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscope recordings, are used to detect head-movements, bringing additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from the support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner; blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefacts from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
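A minimal sketch of the kind of decision fusion described for the epileptiform activity detector above: a detection is kept only when an artefact-specific classifier does not confidently claim the same epoch. The score combination, thresholds, and names are illustrative assumptions, not the thesis's actual fusion rule.

```python
import numpy as np

def fuse_detections(event_scores, artefact_scores,
                    event_thresh=0.5, artefact_thresh=0.8):
    """Keep an epileptiform detection only if the artefact classifier does not
    confidently attribute the same epoch to an artefact."""
    event_scores = np.asarray(event_scores)
    artefact_scores = np.asarray(artefact_scores)
    detected = event_scores >= event_thresh      # raw epileptiform detections
    vetoed = artefact_scores >= artefact_thresh  # confident artefact epochs
    return detected & ~vetoed

# Per-epoch probabilities from the two classifiers (e.g. SVM outputs).
events    = [0.9, 0.7, 0.2, 0.8]
artefacts = [0.1, 0.9, 0.3, 0.2]
print(fuse_detections(events, artefacts))        # [ True False False  True]
```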
Abstract:
Three bacterial isolates, SB13 (Acinetobacter sp.), SB14 (Arthrobacter sp.) and SB15 (Bacillus sp.), were previously isolated from the rhizosphere of sugar beet (Beta vulgaris ssp. vulgaris) plants and shown to increase hatch of potato cyst nematodes in vitro. In this study, the three isolates were assayed for rhizosphere competence. Each isolate was applied to seeds at each of four concentrations (10⁵–10⁸ CFU ml⁻¹) and the inoculated seeds were planted in plastic microcosms containing coarse sand. All three isolates were shown to colonise the rhizosphere, although to differing degrees, with the higher inoculation densities providing significantly better colonisation. The isolates increased sugar beet root and shoot dry weight. Isolates SB14 and SB15 were analysed for their ability to induce in vivo hatch of Globodera pallida in non-sterile soil planted with sugar beet. After 4 and 6 weeks, both isolates had induced significantly greater percentage hatch compared to controls.
Abstract:
The objective of spatial downscaling strategies is to increase the information content of coarse datasets at smaller scales. In the case of quantitative precipitation estimation (QPE) for hydrological applications, the goal is to close the scale gap between the spatial resolution of coarse datasets (e.g., gridded satellite precipitation products at resolution L × L) and the high resolution (l × l; L ≫ l) necessary to capture the spatial features that determine the spatial variability of water flows and water stores in the landscape. In essence, the downscaling process consists of weaving subgrid-scale heterogeneity over a desired range of wavelengths into the original field. The defining question is, which properties, statistical and otherwise, of the target field (the known observable at the desired spatial resolution) should be matched, with the caveat that downscaling methods should be as general as possible and therefore ideally free of case-specific constraints and/or calibration requirements? Here, attention is focused on two simple fractal downscaling methods, using iterated function systems (IFS) and fractal Brownian surfaces (FBS), that meet this requirement. The two methods were applied to spatially disaggregate 27 summertime convective storms in the central United States during 2007 at three consecutive times (1800, 2100, and 0000 UTC, thus 81 fields overall) from the Tropical Rainfall Measuring Mission (TRMM) version 6 (V6) 3B42 precipitation product (~25-km grid spacing) to the same resolution as the NCEP stage IV products (~4-km grid spacing). Results from bilinear interpolation are used as the control. A fundamental distinction between IFS and FBS is that the latter implies a distribution of downscaled fields and thus an ensemble solution, whereas the former provides a single solution. The downscaling effectiveness is assessed using fractal measures (the spectral exponent β, fractal dimension D, Hurst coefficient H, and roughness amplitude R) and traditional operational statistics scores [false alarm rate (FR), probability of detection (PD), threat score (TS), and Heidke skill score (HSS)], as well as bias and the root-mean-square error (RMSE). The results show that both IFS and FBS fractal interpolation perform well with regard to operational skill scores, and they meet the additional requirement of generating structurally consistent fields. Furthermore, confidence intervals can be directly generated from the FBS ensemble. The results were used to diagnose errors relevant for hydrometeorological applications, in particular a spatial displacement with characteristic length of at least 50 km (2500 km²) in the location of peak rainfall intensities for the cases studied. © 2010 American Meteorological Society.
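The operational scores listed above are standard functions of the 2 × 2 rain/no-rain contingency table (hits, false alarms, misses, correct negatives). A minimal sketch of how they might be computed for a downscaled field against a reference field follows; the rain/no-rain threshold and the stand-in random fields are illustrative assumptions, not the study's data.

```python
import numpy as np

def skill_scores(forecast, observed, threshold=1.0):
    """POD, FAR, threat score and Heidke skill score from the 2x2 contingency
    table built with a simple rain/no-rain threshold (mm)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    a = np.sum(f & o)        # hits
    b = np.sum(f & ~o)       # false alarms
    c = np.sum(~f & o)       # misses
    d = np.sum(~f & ~o)      # correct negatives
    pod = a / (a + c)
    far = b / (a + b)
    ts  = a / (a + b + c)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return pod, far, ts, hss

# Stand-in fields; in practice the downscaled product and the stage IV reference.
downscaled = np.random.gamma(2.0, 2.0, size=(100, 100))
reference  = np.random.gamma(2.0, 2.0, size=(100, 100))
print(skill_scores(downscaled, reference))
```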
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting (a minimal flow-time computation is sketched after this list). In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated rounding technique for offline flow-time optimization, and gives the first framework to analyze non-clairvoyant algorithms for unrelated machines.
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP problem generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP problem and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
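As noted in item 1, flow-time (or delay) is simply a job's completion time minus its release time. A minimal sketch for a single machine under a FIFO policy follows; the policy is purely illustrative and is not one of the algorithms developed in the thesis.

```python
def fifo_flow_times(jobs):
    """jobs: list of (release_time, processing_time), processed in FIFO order.
    Returns the flow-time (completion time minus release time) of each job."""
    t, flows = 0.0, []
    for release, proc in sorted(jobs):
        t = max(t, release) + proc        # wait for release, then process
        flows.append(t - release)
    return flows

flows = fifo_flow_times([(0, 3), (1, 2), (2, 4)])
print(sum(flows) / len(flows), max(flows))   # average and maximum flow-time
```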
Abstract:
Quasi-Newton methods are applied to solve interface problems which arise from domain decomposition methods. These interface problems are usually sparse systems of linear or nonlinear equations. We are interested in applying these methods to systems of linear equations where we are unable or unwilling to calculate the Jacobian matrices, as well as to systems of nonlinear equations resulting from nonlinear elliptic problems in the context of domain decomposition. The suitability of these algorithms for parallel implementation on coarse-grained parallel computers is discussed.
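A minimal sketch of a Jacobian-free quasi-Newton iteration of the kind discussed above, using Broyden's rank-one secant update on a small nonlinear system standing in for an interface problem; the test function, starting point, and tolerances are illustrative assumptions.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method: solve F(x) = 0 without forming the Jacobian.
    The approximate Jacobian B is updated from secant information only."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                          # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)             # quasi-Newton step
        x_new = x + s
        Fx_new = F(x_new)
        if np.linalg.norm(Fx_new) < tol:
            return x_new
        y = Fx_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, Fx = x_new, Fx_new
    return x

# Small nonlinear test system standing in for an interface problem.
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
print(broyden(F, [0.5, 0.5]))                   # converges near (1, 1)
```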
Abstract:
Temperature distributions involved in some metal-cutting or surface-milling processes may be obtained by solving a non-linear inverse problem. A two-level concept of parallelism is introduced to compute such temperature distributions. The primary level is based on a problem-partitioning concept driven by the nature and properties of the non-linear inverse problem. Such partitioning results in a coarse-grained parallel algorithm. A simplified 2-D metal-cutting process is used as an example to illustrate the concept. A secondary-level exploitation of further parallel properties, based on the concept of domain-data parallelism, is explained and implemented using MPI. Some experiments were performed on a network of loosely coupled machines consisting of SUN Sparc Classic workstations and on a network of tightly coupled processors, namely the Origin 2000.
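A minimal mpi4py sketch of the secondary, domain-data level of parallelism described above: the root process partitions the data, each rank works on its own block, and partial results are gathered. The block contents and the local computation are placeholders, not the paper's inverse-problem solver.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Root partitions the domain data into one block per process.
blocks = np.array_split(np.arange(1000.0), size) if rank == 0 else None
local = comm.scatter(blocks, root=0)

local_result = local.sum()                     # placeholder for the local solve
results = comm.gather(local_result, root=0)

if rank == 0:
    # Run with e.g.: mpirun -n 4 python script.py
    print("combined result:", sum(results))
```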
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a non-spreading fire (i.e. single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations are produced, one involving a relatively coarse mesh and the other a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were found capable of reproducing the overall qualitative results. Both simulations tended to overpredict the measured temperatures. However, the finer mesh simulation was better able to reproduce the qualitative features of the experimental data. The maximum recorded experimental temperature (1214°C after 39 min) was over-predicted in the fine mesh simulation by 12%. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
This paper describes the architecture of the knowledge based system (KBS) component of Smartfire, a fire field modelling tool for use by members of the fire safety engineering community who are not expert in modelling techniques. The KBS captures the qualitative reasoning of an experienced modeller in the assessment of room geometries, so as to set up the important initial parameters of the problem. Fire modelling expertise is an example of geometric and spatial reasoning, which raises representational problems. The approach taken in this project is a qualitative representation of geometric room information based on Forbus’ concept of a metric diagram. This takes the form of a coarse grid, partitioning the domain in each of the three spatial dimensions. Inference over the representation is performed using a case-based reasoning (CBR) component. The CBR component stores example partitions with key set-up parameters; this paper concentrates on the key parameter of grid cell distribution.
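A minimal sketch of the kind of coarse partitioning described above, dividing a room's extents into a small number of cells along each of the three spatial dimensions; the cell counts and room dimensions are illustrative assumptions, not the rules encoded in the KBS.

```python
def coarse_partition(extents, cells=(4, 4, 3)):
    """Split a room of given (x, y, z) extents (m) into a coarse grid and
    return the planes that bound the cells along each axis."""
    return [
        [i * length / n for i in range(n + 1)]
        for length, n in zip(extents, cells)
    ]

# A 6 m x 4 m x 2.4 m room partitioned into a 4 x 4 x 3 coarse grid.
for axis, planes in zip("xyz", coarse_partition((6.0, 4.0, 2.4))):
    print(axis, [round(p, 2) for p in planes])
```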
Abstract:
We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is repeated recursively to create a hierarchy of increasingly coarse graphs, G0, G1, …, GL. The coarsest graph, GL, is then given an initial layout and the layout is refined and extended to all the graphs, starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for Gl is taken from its coarser and smaller child graph, Gl+1, and refined using force-directed placement. In this way the multilevel framework both accelerates the computation and appears to give a more global quality to the drawing. The algorithm can compute both 2- and 3-dimensional layouts, and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast, computing a 2D layout of a sparse graph in around 12 seconds for a 10,000-vertex graph, rising to around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
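A minimal sketch of the vertex-matching step used to build the coarser graphs G1, …, GL described above: adjacent vertices are paired by a greedy maximal matching and each pair is merged into a single vertex at the next level. The greedy order is an illustrative simplification of the paper's matching heuristic.

```python
def coarsen(vertices, edges):
    """One level of coarsening: greedily match adjacent vertex pairs and merge
    each pair, returning the coarser vertex set and edge set."""
    matched, merge_into = set(), {}
    for u, v in edges:                       # greedy maximal matching
        if u not in matched and v not in matched:
            matched |= {u, v}
            merge_into[v] = u                # v collapses into u
    parent = lambda x: merge_into.get(x, x)
    coarse_vertices = {parent(v) for v in vertices}
    coarse_edges = {(parent(u), parent(v)) for u, v in edges
                    if parent(u) != parent(v)}
    return coarse_vertices, coarse_edges

V = {0, 1, 2, 3, 4, 5}
E = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)}
print(coarsen(V, E))    # roughly half as many vertices at the next level
```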
Abstract:
Vacuum Arc Remelting (VAR) is the accepted method for producing the homogeneous, fine, inclusion-free microstructures required for rotating-grade applications. However, as ingot sizes increase, INCONEL 718 becomes increasingly susceptible to defects such as freckles, tree rings, and white spots, particularly in large-diameter billets. Predictive models of these defects are therefore required to allow optimization of process parameters. In this paper, a multiscale and multi-physics model is presented to predict the development of microstructures in the VAR ingot during solidification. At the microscale, a combined stochastic nucleation approach and finite difference solution of the solute diffusion is applied in the semi-solid zone of the VAR ingot. The micromodel is coupled with a solution of the macroscale heat transfer, fluid flow and electromagnetism in the VAR process through the temperature, pressure and fluid flow fields. The main objective of this study is to achieve a better understanding of the formation of defects in VAR by quantifying the influence of VAR processing parameters on grain nucleation and dendrite growth. In particular, the effect of different ingot growth velocities on microstructure formation was investigated. It was found that reducing the velocity produces significantly coarser grains.
Abstract:
In this paper, thermal cycling reliability along with ANSYS analysis of the residual stress generated in heavy-gauge Al bond wires at different bonding temperatures is reported. 99.999% pure Al wires, 375 μm in diameter, were ultrasonically bonded to silicon dies coated with a 5 μm thick Al metallisation at 25°C (room temperature), 100°C and 200°C, respectively (with the same bonding parameters). The wire-bonded samples were then subjected to thermal cycling in air from -60°C to +150°C. The degradation rate of the wire bonds was assessed by means of bond shear tests and via microstructural characterisation. Prior to thermal cycling, the shear strength of all of the wire bonds was approximately equal to the shear strength of pure aluminum and independent of bonding temperature. During thermal cycling, however, the shear strength of room temperature bonded samples was observed to decrease more rapidly (as compared to bonds formed at 100°C and 200°C) as a result of a high crack propagation rate across the bonding area. In addition, modification of the grain structure at the bonding interface was also observed with bonding temperature, leading to changes in the mechanical properties of the wire. The heat and pressure induced by high temperature bonding is believed to promote grain recovery and recrystallisation, softening the wires through removal of dislocations and plastic strain energy. Coarse grains formed at the bonding interface after bonding at elevated temperatures may also contribute to greater resistance to crack propagation, thus lowering the wire bond degradation rate.
Abstract:
The solution process for diffusion problems usually involves the time development separately from the space solution. A finite difference algorithm in time requires a sequential time development in which all previous values must be determined prior to the current value. The Stehfest Laplace transform algorithm, however, allows time solutions without the knowledge of prior values. It is of interest to be able to develop a time-domain decomposition suitable for implementation in a parallel environment. One such possibility is to use the Laplace transform to develop coarse-grained solutions which act as the initial values for a set of fine-grained solutions. The independence of the Laplace transform solutions means that we do indeed have a time-domain decomposition process. Any suitable time solver can be used for the fine-grained solution. To illustrate the technique we shall use an Euler solver in time together with the dual reciprocity boundary element method for the space solution.
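A minimal sketch of the Gaver-Stehfest inversion referred to above: f(t) is recovered from samples of the Laplace transform F(s) at s = k ln 2 / t, so each time value can be computed independently of earlier ones, which is what makes the time-domain decomposition possible. The number of terms N and the test transform are illustrative choices.

```python
from math import exp, factorial, log

def stehfest_weights(N=12):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from the Laplace transform F(s); no prior time values
    are needed, so each t can be evaluated independently (in parallel)."""
    ln2 = log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(Vk * F(k * ln2 / t) for k, Vk in enumerate(V, start=1))

# Test: F(s) = 1 / (s + 1) is the transform of f(t) = exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(stehfest_invert(F, 1.0), exp(-1.0))   # the two values should be close
```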
Abstract:
Various models for predicting discharge rates have been developed over the last four decades by many research workers (notably Beverloo [1], Johanson [2], Brown [3], Carleton [4], Crewdson [5], Nedderman [6], Gu [7]). In many cases these models offer comparable approaches to the prediction of discharge rates of bulk particulates from storage equipment when gravity alone acts to initiate flow (since they invariably consider the use of mass-flow design equipment). The models that have been developed consider a wide range of bulk particulates (coarse, incompressible, fine, cohesive) and most contemporary works have incorporated validation against test programmes. Research currently underway at The Wolfson Centre for Bulk Solids Handling Technology, University of Greenwich, has considered the relative performance of these models with respect to a range of bulk properties, with particular focus on the flexibility of the models to cater for different geometrical factors of vessels.
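For orientation, the Beverloo correlation [1] cited above predicts the mass discharge rate of coarse, free-flowing particulates through a circular orifice as W = C ρ_b √g (D − kd)^2.5. A minimal sketch follows, using commonly quoted constants (C ≈ 0.58, k ≈ 1.4) that would normally be fitted to the material in question; the example material properties are illustrative.

```python
from math import sqrt

def beverloo_rate(bulk_density, orifice_d, particle_d, C=0.58, k=1.4, g=9.81):
    """Beverloo correlation W = C * rho_b * sqrt(g) * (D - k*d)**2.5 for
    coarse, free-flowing solids (SI units: kg/m^3 and m; result in kg/s)."""
    return C * bulk_density * sqrt(g) * (orifice_d - k * particle_d) ** 2.5

# Example: 1500 kg/m^3 bulk density, 0.10 m orifice, 2 mm particles.
print(beverloo_rate(1500.0, 0.10, 0.002))   # approximate discharge rate in kg/s
```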