917 results for Numerical Algorithms and Problems
Abstract:
The first phase in the design, development and implementation of a comprehensive computational model of a copper stockpile leach process is presented. The model accounts for transport phenomena through the stockpile, reaction kinetics for the important mineral species, oxygen and bacterial effects on the leach reactions, plus heat, energy and acid balances for the overall leach process. The paper describes the formulation of the leach process model and its implementation in PHYSICA+, a computational fluid dynamics (CFD) software environment. The model draws on a number of phenomena to represent the competing physical and chemical features active in the process. The phenomena are essentially represented by a three-phase (solid, liquid, gas) multi-component transport system; novel algorithms and procedures are required to solve the model equations, including a methodology for dealing with multiple chemical species with different reaction rates in ore represented by multiple particle size fractions. Some initial validation results and application simulations are shown to illustrate the potential of the model.
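To make the multi-species, multi-size-fraction kinetics concrete, the following is a minimal hypothetical sketch (not the PHYSICA+ implementation, and with placeholder rate constants and radii): each mineral species carries its own rate constant, and each ore size fraction leaches at a rate scaled by a shrinking-core-style term.

```python
# Hypothetical sketch, not the PHYSICA+ implementation: explicit update of the
# leached fraction for several mineral species, each resolved over several ore
# particle size fractions with its own reaction rate.
import numpy as np

def leach_step(alpha, k, radius, dt):
    """Advance the leached fraction alpha[species, size] by one time step.

    alpha  : array (n_species, n_sizes), fraction of each species already leached
    k      : array (n_species,), assumed intrinsic rate constant per species (1/s)
    radius : array (n_sizes,), representative particle radius per size fraction (m)
    dt     : time step (s)
    A shrinking-core style rate is assumed: leaching slows for larger particles
    and as the remaining unreacted core shrinks.
    """
    unreacted = 1.0 - alpha
    rate = k[:, None] * unreacted ** (2.0 / 3.0) / radius[None, :]
    return np.clip(alpha + dt * rate, 0.0, 1.0)

# usage with made-up values: two species over three size fractions
alpha = np.zeros((2, 3))
k = np.array([1.0e-3, 2.0e-4])
radius = np.array([0.005, 0.02, 0.05])
for _ in range(1000):
    alpha = leach_step(alpha, k, radius, dt=10.0)
```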
Abstract:
Soldering technologies continue to evolve to meet the demands of the continuous miniaturisation of electronic products, particularly in the area of solder paste formulations used in the reflow soldering of surface mount devices. Stencil printing continues to be a leading process used for the deposition of solder paste onto printed circuit boards (PCBs) in the volume production of electronic assemblies, despite problems in achieving a consistent print quality at an ultra-fine pitch. In order to eliminate such print defects, a good understanding of the processes involved in printing is important. Computational simulations may complement experimental print trials and paste characterisation studies, and provide an extra dimension to the understanding of the process. The characteristics and flow properties of solder pastes depend primarily on their chemical and physical composition, and good material property data are essential for meaningful results to be obtained by computational simulation. This paper describes paste characterisation and computational simulation studies that have been undertaken through the collaboration of the School of Aeronautical, Mechanical and Manufacturing Engineering at Salford University and the Centre for Numerical Modelling and Process Analysis at the University of Greenwich. The rheological profiles of two different paste formulations (lead and lead-free) for sub-100-micron flip-chip devices are tested and applied to computational simulations of their flow behaviour during the printing process.
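As an illustration of the kind of rheological input such simulations need, here is a minimal sketch of a Cross-type shear-thinning viscosity model, a common choice for solder pastes; the parameter values are placeholders and not the measured properties of the pastes studied in the paper.

```python
# Illustrative sketch only: Cross-type shear-thinning viscosity of the kind fitted
# to solder paste rheometry data and supplied to a CFD solver. Parameter values
# below are placeholders, not measured paste properties.
def cross_viscosity(shear_rate, eta_0=1.0e3, eta_inf=1.0, K=10.0, m=0.8):
    """Apparent viscosity (Pa.s) as a function of shear rate (1/s)."""
    return eta_inf + (eta_0 - eta_inf) / (1.0 + (K * shear_rate) ** m)

# usage: viscosity drops by orders of magnitude as the squeegee shears the paste
for rate in (0.01, 1.0, 100.0):
    print(rate, cross_viscosity(rate))
```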
Abstract:
The fabrication, assembly and testing of electronic packaging can involve complex interactions between physical phenomena such as temperature, fluid flow, electromagnetics, and stress. Numerical modelling and optimisation tools are key computer-aided-engineering technologies that aid design engineers. This paper discusses these technologies and their future developments.
Abstract:
A Concise Introduction to Image Processing using C++ presents state-of-the-art image processing methodology, including current industrial practices for image compression, image de-noising methods based on partial differential equations, and new image compression methods such as fractal image compression and wavelet compression. It includes elementary concepts of image processing and related fundamental tools with coding examples as well as exercises. With a particular emphasis on illustrating fractal and wavelet compression algorithms, the text covers image segmentation, object recognition, and morphology. An accompanying CD-ROM contains code for all algorithms.
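As a flavour of the wavelet compression methods the text emphasises, the sketch below shows one level of the 1D Haar wavelet transform and its inverse; it is an illustrative example, not code taken from the book or its CD-ROM.

```python
# Illustrative sketch (not from the book's CD-ROM): one level of the 1D Haar
# wavelet transform, the basic building block of wavelet image compression.
# Compression follows from quantising or discarding small detail coefficients.
import numpy as np

def haar_level(x):
    """Return (approximation, detail) coefficients for an even-length signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the signal from one level of coefficients."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# usage: a 2D image transform applies the same step along rows and then columns
a, d = haar_level([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 0.0, 2.0])
print(np.allclose(haar_inverse(a, d), [4, 6, 10, 12, 8, 8, 0, 2]))  # True
```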
Abstract:
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms in general perform better than semi-analytical models. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in the Rrs data, or simply the fact that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long term, such an approach will further aid algorithm development for ocean-colour studies.
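The sketch below illustrates the general shape of such a bootstrap ranking; it is a hedged reconstruction, not the authors' scoring scheme, and the RMS log-error metric and winner-counting rule are assumptions for illustration only.

```python
# Hedged sketch of a bootstrap model ranking (not the paper's exact methodology):
# each model is scored on log10 retrieval error against in situ data, and the
# uncertainty in the ranking is estimated by resampling the match-ups.
import numpy as np

def rank_models(predictions, observed, n_boot=1000, seed=0):
    """predictions: dict name -> array of retrieved values (e.g. chlorophyll);
    observed: array of co-located in situ values.
    Returns {name: fraction of bootstrap samples in which that model ranked best}."""
    rng = np.random.default_rng(seed)
    names = list(predictions)
    err = np.stack([np.log10(predictions[n]) - np.log10(observed) for n in names])
    wins = np.zeros(len(names))
    for _ in range(n_boot):
        idx = rng.integers(0, observed.size, observed.size)  # resample match-ups
        rmse = np.sqrt(np.mean(err[:, idx] ** 2, axis=1))    # per-model RMS log error
        wins[np.argmin(rmse)] += 1
    return dict(zip(names, wins / n_boot))
```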
Abstract:
Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring and atmospheric studies. Single flight lines of airborne hyperspectral data are often in the region of tens of gigabytes in size. This means that a single aircraft can collect terabytes of remotely sensed hyperspectral data during a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude and then geocorrected. To enable efficient processing of these large datasets, the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject and resample airborne data. Each stage of the toolbox outputs data in the common Band Interleaved by Line (BIL) format, which allows its integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable for use on a workstation PC as well as within the facility's automated processing chain; to this end APL can be used under both Windows and Linux environments, on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms and approach. We present example results from using APL with an AISA Eagle sensor and we assess its spatial accuracy using data from multiple flight lines collected during a campaign in 2008, together with in situ surveyed ground control points.
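As a simple illustration of the radiometric calibration stage such a toolchain performs, the sketch below converts raw digital numbers to at-sensor radiance with per-band gains and offsets; it is a generic example, not APL code or its command-line interface.

```python
# Generic illustration, not APL code: radiometric calibration of a hyperspectral
# cube from raw digital numbers (DN) to at-sensor radiance using per-band gain
# and offset coefficients from a sensor calibration file.
import numpy as np

def calibrate(dn, gain, offset):
    """dn: array (lines, samples, bands) of raw counts;
    gain, offset: arrays (bands,). Returns radiance in the gain's units."""
    return dn * gain[None, None, :] + offset[None, None, :]

# usage with made-up numbers: 2 scan lines, 3 samples, 4 spectral bands
dn = np.random.default_rng(1).integers(0, 4096, (2, 3, 4)).astype(float)
radiance = calibrate(dn, gain=np.full(4, 0.01), offset=np.zeros(4))
```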
Abstract:
The Red Sea is a semi-enclosed tropical marine ecosystem that stretches from the Gulf of Suez and Gulf of Aqaba in the north, to the Gulf of Aden in the south. Despite its ecological and economic importance, its biological environment is relatively unexplored. Satellite ocean-colour estimates of chlorophyll concentration (an index of phytoplankton biomass) offer an observational platform to monitor the health of the Red Sea. However, little is known about the optical properties of the region. In this paper, we investigate the optical properties of the Red Sea in the context of satellite ocean-colour estimates of chlorophyll concentration. Making use of a new merged ocean-colour product from the European Space Agency (ESA) Climate Change Initiative, and of in situ data in the region, we test the performance of a series of ocean-colour chlorophyll algorithms. We find that standard algorithms systematically overestimate chlorophyll when compared with the in situ data. To investigate this bias we develop an ocean-colour model for the Red Sea, parameterised to data collected during the Tara Oceans expedition, that estimates remote-sensing reflectance as a function of chlorophyll concentration. We then use the Red Sea model to re-tune the standard chlorophyll algorithms, which corrects the overestimation originally observed. Results suggest that the overestimation was likely due to an excess of CDOM absorption per unit chlorophyll in the Red Sea when compared with average global conditions. However, we recognise that additional information is required to test the influence of other potential sources of the overestimation, such as aeolian dust, and we discuss uncertainties in the datasets used. We present a series of regional chlorophyll algorithms for the Red Sea, designed for a suite of ocean-colour sensors, that may be used for further testing.
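For context, the empirical algorithms being re-tuned typically take the OCx form: a polynomial in the log of a blue-to-green reflectance band ratio. The sketch below shows that form with placeholder coefficients; these are not the regional Red Sea coefficients derived in the paper.

```python
# Sketch of an OCx-style empirical chlorophyll algorithm: a polynomial in the log
# of a blue/green remote-sensing reflectance band ratio. Coefficients below are
# placeholders, not the Red Sea values presented in the paper.
import numpy as np

def ocx_chlorophyll(rrs_blue, rrs_green, coeffs=(0.3, -2.8, 1.5, 0.5, -1.0)):
    """rrs_blue: maximum of the blue Rrs bands; rrs_green: green Rrs band (sr^-1)."""
    r = np.log10(rrs_blue / rrs_green)
    log_chl = sum(a * r ** i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl  # chlorophyll concentration, mg m^-3

# Regional tuning amounts to re-fitting `coeffs` against (band ratio, in situ
# chlorophyll) pairs, e.g. with a least-squares polynomial fit in log space.
```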
Abstract:
The use of in situ measurements is essential in the validation and evaluation of the algorithms that provide coastal water quality data products from ocean colour satellite remote sensing. Over the past decade, various types of ocean colour algorithms have been developed to deal with the optical complexity of coastal waters. Yet a comprehensive intercomparison has been lacking, owing to the limited availability of quality-checked in situ databases. The CoastColour Round Robin (CCRR) project, funded by the European Space Agency (ESA), was designed to bring together three reference data sets and to use them to test algorithms and assess their accuracy for retrieving water quality parameters. This paper provides a detailed description of these reference data sets, which include the Medium Resolution Imaging Spectrometer (MERIS) level 2 match-ups, in situ reflectance measurements, and synthetic data generated by a radiative transfer model (HydroLight). These data sets, representing mainly coastal waters, are available from doi:10.1594/PANGAEA.841950. The data sets mainly consist of 6484 marine reflectance spectra (either multispectral or hyperspectral) associated with various geometrical conditions (sensor viewing and solar angles), sky conditions, and water constituents: total suspended matter (TSM) and chlorophyll a (CHL) concentrations, and the absorption of coloured dissolved organic matter (CDOM). Inherent optical properties are also provided in the simulated data sets (5000 simulations) and from 3054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three data sets are compared. Match-up and in situ sites where deviations occur are identified. The distributions of the three reflectance data sets are also compared to the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing that the CCRR data clearly extend the earlier sets towards more turbid waters.
Abstract:
In this paper we concentrate on direct semi-blind spatial equalizer design for MIMO systems with Rayleigh fading channels. Our aim is to develop an algorithm which can outperform the classical training-based method with the same training information used, while avoiding the problems of low convergence speed and local minima associated with purely blind methods. A general semi-blind cost function is first constructed which incorporates both the training information from the known data and some form of higher-order statistics (HOS) from the unknown sequence. Then, based on the developed cost function, we propose two semi-blind iterative and adaptive algorithms to find the desired spatial equalizer. To further improve the performance and convergence speed of the proposed adaptive method, we propose a technique to find the optimal choice of step size. Simulation results demonstrate the performance of the proposed algorithms relative to comparable schemes.
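The sketch below gives one plausible instance of such a semi-blind criterion: a least-squares term on the known training symbols combined with a constant-modulus (HOS) term on the unknown sequence, minimised by gradient descent. It is an assumed illustration, not the authors' exact cost function or algorithms; the weighting lam and step size mu are hypothetical parameters.

```python
# Hedged sketch of a semi-blind spatial equalizer: training-based least-squares
# term on known pilots plus a constant-modulus (higher-order-statistics) term on
# the unknown payload, weighted by lam and minimised by gradient descent.
import numpy as np

def semi_blind_equalizer(X_train, s_train, X_blind, lam=0.5, mu=1e-3, iters=500):
    """X_train: (n_rx, N_t) received pilot block; s_train: (N_t,) known symbols;
    X_blind: (n_rx, N_b) received payload block. Returns equalizer weights w."""
    w = np.zeros(X_train.shape[0], dtype=complex)
    w[0] = 1.0                  # simple initialisation
    R = 1.0                     # constant-modulus radius (unit-power symbols)
    for _ in range(iters):
        y_t = w.conj() @ X_train                 # equalized pilots
        y_b = w.conj() @ X_blind                 # equalized payload
        grad_train = X_train @ (y_t - s_train).conj() / s_train.size
        grad_cm = X_blind @ ((np.abs(y_b) ** 2 - R) * y_b).conj() / y_b.size
        w -= mu * ((1 - lam) * grad_train + lam * grad_cm)
    return w
```

In a usage test, X_train and X_blind would be formed from the received pilot and payload blocks of a simulated fading channel, and the equalized output w.conj() @ X compared against the transmitted symbols.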
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used, owing to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts used to achieve good model generalisation in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
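As an illustration of the model-selection loop the review surveys, the sketch below performs greedy forward selection of regressors for a linear-in-the-parameters model, scored by leave-one-out cross-validation; it is a generic textbook-style example, not an algorithm reproduced from the article.

```python
# Illustrative sketch (not reproduced from the review): greedy forward selection of
# regressors for a linear-in-the-parameters model, scored by leave-one-out (LOO)
# cross-validation via the hat-matrix shortcut.
import numpy as np

def loo_mse(P, y):
    """LOO mean squared error of the least-squares fit y ~ P @ theta."""
    theta, *_ = np.linalg.lstsq(P, y, rcond=None)
    H = P @ np.linalg.pinv(P)                  # hat matrix
    resid = y - P @ theta
    return np.mean((resid / (1.0 - np.diag(H))) ** 2)

def forward_select(candidates, y, max_terms=10):
    """candidates: (N, M) matrix of candidate regressors. Returns selected columns."""
    selected, best_score = [], np.inf
    while len(selected) < max_terms:
        scores = {j: loo_mse(candidates[:, selected + [j]], y)
                  for j in range(candidates.shape[1]) if j not in selected}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_score:       # stop when LOO error stops improving
            break
        selected.append(j_best)
        best_score = scores[j_best]
    return selected
```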
Abstract:
The role of rhodopsin as a structural prototype for the study of the whole superfamily of G protein-coupled receptors (GPCRs) is reviewed from a historical perspective. Discovered at the end of the nineteenth century, fully sequenced since the early 1980s, and with direct three-dimensional information available since the 1990s, rhodopsin has served as a platform to gather indirect information on the structure of the other superfamily members. Recent breakthroughs have led to the solution of the structures of additional receptors, namely the beta 1- and beta 2-adrenergic receptors and the A(2A) adenosine receptor, now providing an opportunity to gauge the accuracy of homology modeling and molecular docking techniques and to perfect the computational protocol. Notably, in coordination with the solution of the structure of the A(2A) adenosine receptor, the first "critical assessment of GPCR structural modeling and docking" has been organized, the results of which highlighted that the construction of accurate models, although challenging, is certainly achievable. The docking of the ligands and the scoring of the poses clearly emerged as the most difficult components. A further goal in the field is certainly to derive the structure of receptors in their signaling state, possibly in complex with agonists. These advances, coupled with the introduction of more sophisticated modeling algorithms and the increase in computer power, raise the expectation of a substantial boost in the robustness and accuracy of computer-aided drug discovery techniques in the coming years.