820 results for Lanczos, Linear systems, Generalized cross validation


Relevance:

100.00%

Publisher:

Abstract:

Background: Published birthweight references in Australia do not fully take into account constitutional factors that influence birthweight and therefore may not provide an accurate reference to identify infants with abnormal growth. Furthermore, studies in other regions that have derived adjusted (customised) birthweight references have applied untested assumptions in the statistical modelling. Aims: To validate the customised birthweight model and to produce a reference set of coefficients for estimating a customised birthweight that may be useful for maternity care in Australia and for future research. Methods: De-identified data were extracted from the clinical database for all births at the Mater Mother's Hospital, Brisbane, Australia, between January 1997 and June 2005. Births with missing data for the variables under study were excluded. In addition, the following were excluded: multiple pregnancies, births at less than 37 completed weeks' gestation, stillbirths, and major congenital abnormalities. Multivariate analysis was undertaken. A double cross-validation procedure was used to validate the model. Results: The study of 42 206 births demonstrated that, for statistical purposes, birthweight is normally distributed. Coefficients for the derivation of customised birthweight in an Australian population were developed and the statistical model is demonstrably robust. Conclusions: This study provides empirical data as to the robustness of the model to determine customised birthweight. Further research is required to define where normal physiology ends and pathology begins, and which segments of the population should be included in the construction of a customised birthweight standard.
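
For illustration only, a minimal sketch of a split-sample ("double") cross-validation of a linear birthweight model: the data are halved, a model is fitted on each half and evaluated on the other. The predictors, coefficients and data below are synthetic assumptions, not the study's model.

```python
# Split-sample ("double") cross-validation of a linear birthweight model.
# Predictors and data are illustrative, not those of the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
# Hypothetical predictors: gestational age (days), maternal height (cm),
# maternal weight (kg), infant sex (0/1), parity.
X = np.column_stack([
    rng.normal(280, 8, n),
    rng.normal(165, 6, n),
    rng.normal(65, 10, n),
    rng.integers(0, 2, n),
    rng.integers(0, 4, n),
])
y = 3400 + 20 * (X[:, 0] - 280) + 5 * (X[:, 1] - 165) + rng.normal(0, 350, n)

half = n // 2
idx = rng.permutation(n)
a, b = idx[:half], idx[half:]

# Fit on each half, predict the other; similar errors suggest a stable model.
for train, test in [(a, b), (b, a)]:
    model = LinearRegression().fit(X[train], y[train])
    err = mean_absolute_error(y[test], model.predict(X[test]))
    print(f"mean absolute error on held-out half: {err:.0f} g")
```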

Relevance:

100.00%

Publisher:

Abstract:

Fifteen Miscanthus genotypes grown in five locations across Europe were analysed to investigate the influence of genetic and environmental factors on cell wall composition. Chemometric techniques combining near infrared reflectance spectroscopy (NIRS) and conventional chemical analyses were used to construct calibration models for determination of acid detergent lignin (ADL), acid detergent fibre (ADF), and neutral detergent fibre (NDF) from sample spectra. The results generated were subsequently converted to lignin, cellulose and hemicellulose content and used to assess the genetic and environmental variation in cell wall composition of Miscanthus and to identify genotypes which display quality traits suitable for exploitation in a range of energy conversion systems. The NIRS calibration models developed were found to predict concentrations with a good degree of accuracy based on the coefficient of determination (R²), standard error of calibration (SEC), and standard error of cross-validation (SECV) values. Across all sites, mean lignin, cellulose and hemicellulose values in the winter harvest ranged from 76–115 g kg⁻¹, 412–529 g kg⁻¹, and 235–338 g kg⁻¹ respectively. Overall, of the 15 genotypes, Miscanthus x giganteus and Miscanthus sacchariflorus contained the higher lignin and cellulose concentrations in the winter harvest. The degree of observed genotypic variation in cell wall composition indicates good potential for plant breeding and for matching feedstocks to different energy conversion processes.
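
As a hedged sketch of how such spectra-to-composition calibrations are typically built and scored, the example below fits a PLS model and reports SEC and SECV on synthetic data, using the simple RMSE form of these statistics (published definitions adjust the denominator for model terms). It is not the Miscanthus dataset or calibration.

```python
# Illustrative PLS calibration of a constituent (e.g., ADL) from NIR spectra,
# with standard error of calibration (SEC) and of cross-validation (SECV).
# Synthetic data; not the Miscanthus dataset.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))            # spectra
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, n_samples)   # wet-chemistry values

pls = PLSRegression(n_components=5)
pls.fit(X, y)
sec = np.sqrt(np.mean((y - pls.predict(X).ravel()) ** 2))

# Leave-one-out cross-validated predictions for SECV.
preds = np.empty(n_samples)
for train, test in LeaveOneOut().split(X):
    preds[test] = PLSRegression(n_components=5).fit(X[train], y[train]).predict(X[test]).ravel()
secv = np.sqrt(np.mean((y - preds) ** 2))

print(f"R2 = {pls.score(X, y):.3f}, SEC = {sec:.3f}, SECV = {secv:.3f}")
```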

Relevance:

100.00%

Publisher:

Abstract:

The paper has been presented at the International Conference Pioneers of Bulgarian Mathematics, dedicated to Nikola Obreshkoff and Lubomir Tschakaloff, Sofia, July 2006.

Relevance:

100.00%

Publisher:

Abstract:

* This paper is partially supported by the National Science Fund of Bulgarian Ministry of Education and Science under contract № I–1401\2004 "Interactive Algorithms and Software Systems Supporting Multicriteria Decision Making".

Relevance:

100.00%

Publisher:

Abstract:

Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; (3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multi-linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out Cross-Validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and (4) according to the uncertainty analysis using the Taylor method, the uncertainty of the QSAR models is contributed mostly by NCl for all DBP classes.
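
A small illustration of the validation point made above: leave-one-out gives a single Q² figure, while repeated K-fold gives a distribution of errors. The descriptors and response below are synthetic, not the thesis data or model.

```python
# Contrast leave-one-out (LOO) with repeated K-fold cross-validation for a
# multi-linear regression QSAR-style model. Descriptors (NCl, NC, EHOMO) and
# the response are synthetic; this is not the thesis dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import (LeaveOneOut, RepeatedKFold,
                                     cross_val_predict, cross_val_score)

rng = np.random.default_rng(42)
n = 40
n_cl = rng.integers(1, 6, n)         # number of chlorine atoms
n_c = rng.integers(1, 8, n)          # number of carbon atoms
e_homo = rng.normal(-9.0, 0.5, n)    # eV
X = np.column_stack([n_cl, n_c, e_homo])
# Hypothetical response: ELUMO dominated by the chlorine count, plus noise.
y = -0.4 * n_cl - 0.05 * n_c + 0.1 * e_homo + rng.normal(0, 0.1, n)

model = LinearRegression()

# LOO Q^2 from cross-validated predictions.
pred_loo = cross_val_predict(model, X, y, cv=LeaveOneOut())
q2_loo = 1 - np.sum((y - pred_loo) ** 2) / np.sum((y - y.mean()) ** 2)

# Repeated 5-fold cross-validation gives a distribution of errors.
rkf = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)
rmse = -cross_val_score(model, X, y, cv=rkf,
                        scoring="neg_root_mean_squared_error")
print(f"LOO Q2 = {q2_loo:.3f}; repeated 5-fold RMSE = {rmse.mean():.3f} ± {rmse.std():.3f}")
```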

Relevance:

100.00%

Publisher:

Abstract:

The composition and abundance of algal pigments provide information on phytoplankton community characteristics such as photoacclimation, overall biomass and taxonomic composition. In particular, pigments play a major role in photoprotection and in the light-driven part of photosynthesis. Most phytoplankton pigments can be measured by high-performance liquid chromatography (HPLC) techniques applied to filtered water samples. This method, as well as other laboratory analyses, is time-consuming and therefore limits the number of samples that can be processed in a given time. To obtain information on phytoplankton pigment composition at higher temporal and spatial resolution, we have developed a method to assess pigment concentrations from continuous optical measurements. The method applies an empirical orthogonal function (EOF) analysis to remote-sensing reflectance data derived from ship-based hyperspectral underwater radiometry and from multispectral satellite data (using the Medium Resolution Imaging Spectrometer - MERIS - Polymer product developed by Steinmetz et al., 2011, doi:10.1364/OE.19.009783) measured in the Atlantic Ocean. Subsequently, we developed multiple linear regression models with measured (collocated) pigment concentrations as the response variable and EOF loadings as predictor variables. The model results show that surface concentrations of a suite of pigments and pigment groups can be predicted well from the ship-based reflectance measurements, even when only a multispectral resolution is chosen (i.e., eight bands, similar to those used by MERIS). Based on the MERIS reflectance data, concentrations of total and monovinyl chlorophyll a and of the groups of photoprotective and photosynthetic carotenoids can be predicted with high quality. As a demonstration of the utility of the approach, the fitted model based on satellite reflectance data as input was applied to one month of MERIS Polymer data to predict the concentration of those pigment groups for the whole eastern tropical Atlantic area. Bootstrapping explorations of the cross-validation error indicate that the method can produce reliable predictions with relatively small data sets (e.g., < 50 collocated values of reflectance and pigment concentration). The method allows for the derivation of time series of various pigment groups from continuous reflectance data in various regions, which can be used to study variability and change in phytoplankton composition and photophysiology.
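
A toy version of the EOF-plus-regression pipeline described above: decompose (log-transformed) reflectance spectra with PCA and regress a pigment concentration on the leading component scores. The spectra, bands and pigment values are synthetic, not the ship-based or MERIS data.

```python
# Toy EOF-plus-regression pipeline: PCA of log-transformed reflectance spectra,
# then multiple linear regression of a pigment concentration on the component
# scores. Synthetic spectra and pigment values; not the cruise or MERIS data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_stations, n_bands = 120, 8                     # eight bands, MERIS-like
spectra = rng.lognormal(mean=-4, sigma=0.3, size=(n_stations, n_bands))
# Hypothetical pigment concentration tied to a band ratio, plus noise.
chl = 0.5 + 30 * spectra[:, 4] / spectra[:, 2] + rng.normal(0, 0.2, n_stations)

# The EOF basis is refitted inside each fold to avoid information leakage.
model = make_pipeline(PCA(n_components=4), LinearRegression())
cv_r2 = cross_val_score(model, np.log10(spectra), np.log10(chl),
                        cv=KFold(5, shuffle=True, random_state=0)).mean()
print(f"cross-validated R2 for log10(pigment): {cv_r2:.2f}")
```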

Relevance:

100.00%

Publisher:

Abstract:

Acknowledgements: We gratefully acknowledge the support of BMBF, CoNDyNet, FK. 03SF0472A, and of the EIT Climate-KIC project SWIPO, and we thank Nora Molkenthin for illustrating the concept of survivability using penguins. We thank Martin Rohden for providing us with the UK high-voltage transmission grid topology and Yang Tang for very useful discussions. The publication of this article was funded by the Open Access Fund of the Leibniz Association.

Relevance:

100.00%

Publisher:

Abstract:

Dynamic positron emission tomography (PET) imaging can be used to track the distribution of injected radio-labelled molecules over time in vivo. This is a powerful technique, which provides researchers and clinicians the opportunity to study the status of healthy and pathological tissue by examining how it processes substances of interest. Widely used tracers include 18F-fluorodeoxyglucose, an analog of glucose, which is used as the radiotracer in over ninety percent of PET scans. This radiotracer provides a way of quantifying the distribution of glucose utilisation in vivo. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue function. As the residue represents the amount of tracer remaining in the tissue, it can be thought of as a survival function; such functions have been examined in great detail by the statistics community. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux and volume of distribution. This thesis presents a Markov chain formulation of blood-tissue exchange and explores how this relates to established compartmental forms. A nonparametric approach to the estimation of the residue is examined, and the improvement of this model relative to the compartmental model is evaluated using simulations and cross-validation techniques. The reference distribution of the test statistics generated in comparing the models is also studied. We explore these models further with simulated studies and an FDG-PET dataset from subjects with gliomas, which has previously been analysed with compartmental modelling. We also consider the performance of a recently proposed mixture modelling technique in this study.
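
For orientation, a sketch of the standard residue formulation referred to above (general kinetic identities, not the thesis's specific Markov-chain derivation): the tissue time-course is

C_T(t) = \int_0^t R(t - s)\, C_a(s)\, ds,

where C_a is the arterial input and R is the tissue residue (a survival-type function). Under the usual conventions, delivery (flow) corresponds to K_1 = R(0^+), the net flux of an irreversibly trapped tracer such as FDG is K_i = \lim_{t \to \infty} R(t), and for a reversible tracer the volume of distribution is V_D = \int_0^\infty R(t)\, dt.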

Relevance:

100.00%

Publisher:

Abstract:

Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: 1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; 2) to validate the models by using internal and external cross-validation techniques; 3) to quantify the model uncertainties through Taylor and Monte Carlo simulation. One of the most important ways to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: 1) Linear or Multi-linear Regression (MLR); 2) Partial Least Squares (PLS); and 3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: 1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; 2) the Leave-One-Out Cross-Validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; 3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and 4) according to the uncertainty analysis using the Taylor method, the uncertainty of the QSAR models is contributed mostly by NCl for all DBP classes.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to provide an efficient control design technique for discrete-time positive periodic systems. In particular, stability, positivity and periodic invariance of such systems are studied. Moreover, the concept of periodic invariance with respect to a collection of boxes is introduced and investigated in connection with stability. It is shown how this concept can be used to derive a stabilizing state-feedback control that maintains the positivity of the closed-loop system and respects state and control signal constraints. In addition, all the proposed results can be solved efficiently in terms of linear programming.
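
As a rough illustration of how such constraints reduce to a linear program, the sketch below finds a state-feedback gain that keeps a toy time-invariant positive system elementwise nonnegative and contractive on a box. The matrices, the box and the contraction factor are made up, and the paper's periodic structure is not modelled.

```python
# Sketch: compute a state-feedback gain K so that the closed loop A + B K
# stays elementwise nonnegative and contracts a box {x : 0 <= x <= d},
# by solving a linear program in the entries of K. Time-invariant toy case,
# not the paper's periodic formulation; matrices are illustrative.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.5, 0.6],
              [0.3, 0.7]])          # nonnegative but unstable (rho ~ 1.04)
B = np.array([[1.0],
              [0.0]])
d = np.array([1.0, 2.0])            # box to keep invariant
lam = 0.9                           # required contraction factor

n, m = A.shape[0], B.shape[1]
nK = m * n                          # decision variables: entries of K (m x n)

A_ub, b_ub = [], []
# Positivity: (A + B K)[i, j] >= 0  <=>  -(B K)[i, j] <= A[i, j]
for i in range(n):
    for j in range(n):
        row = np.zeros(nK)
        for p in range(m):
            row[p * n + j] -= B[i, p]
        A_ub.append(row)
        b_ub.append(A[i, j])
# Invariance/contraction: (A + B K) d <= lam * d (elementwise)
for i in range(n):
    row = np.zeros(nK)
    for p in range(m):
        for j in range(n):
            row[p * n + j] += B[i, p] * d[j]
    A_ub.append(row)
    b_ub.append(lam * d[i] - A[i] @ d)

res = linprog(c=np.zeros(nK), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-5.0, 5.0)] * nK)
assert res.success, res.message
K = res.x.reshape(m, n)
Acl = A + B @ K
print("K =", K)
print("closed-loop spectral radius:", max(abs(np.linalg.eigvals(Acl))))
```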

Relevance:

100.00%

Publisher:

Abstract:

The high performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main concern. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated a substantial amount of work, and consequently it is possible to find high performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as an efficient memory access pattern. The experimental evaluation shows that the novel proposal delivers significant savings in both time and energy consumption when compared with the state-of-the-art solvers for the platform.
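
For reference, a minimal NumPy sketch of the Gauss-Huard elimination scheme for a single system Ax = b (no pivoting, no blocking). The paper's CPU-GPU solvers add column pivoting and a tuned mapping to the Jetson TK1 hardware, which is not reproduced here.

```python
# Gauss-Huard elimination for Ax = b on the augmented matrix [A | b]:
# per column, a row update, a diagonal scaling, and a rank-1 column update.
import numpy as np

def gauss_huard_solve(A, b):
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        # 1) Row elimination: remove the entries of row k left of the diagonal.
        M[k, k:] -= M[k, :k] @ M[:k, k:]
        # 2) Diagonal scaling of row k.
        M[k, k + 1:] /= M[k, k]
        # 3) Column elimination: annihilate column k above the diagonal.
        M[:k, k + 1:] -= np.outer(M[:k, k], M[k, k + 1:])
    return M[:, -1]          # the last column now holds the solution

A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
b = np.array([7.0, 4.0, 7.0])
x = gauss_huard_solve(A, b)
print(x, np.allclose(A @ x, b))
```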

Relevance:

100.00%

Publisher:

Abstract:

This dissertation presents the design of three high-performance successive-approximation-register (SAR) analog-to-digital converters (ADCs) using distinct digital background calibration techniques under the framework of a generalized code-domain linear equalizer. These digital calibration techniques effectively and efficiently remove the static mismatch errors in the analog-to-digital (A/D) conversion. They enable aggressive scaling of the capacitive digital-to-analog converter (DAC), which also serves as the sampling capacitor, down to the kT/C limit. As a result, outstanding conversion linearity, high signal-to-noise ratio (SNR), high conversion speed, robustness, superb energy efficiency, and minimal chip area are accomplished simultaneously. The first design is a 12-bit 22.5/45-MS/s SAR ADC in a 0.13-μm CMOS process. It employs a perturbation-based calibration, based on the superposition property of linear systems, to digitally correct the capacitor mismatch error in the weighted DAC. With 3.0-mW power dissipation at a 1.2-V power supply and a 22.5-MS/s sample rate, it achieves a 71.1-dB signal-to-noise-plus-distortion ratio (SNDR) and a 94.6-dB spurious-free dynamic range (SFDR). At the Nyquist frequency, the conversion figure of merit (FoM) is 50.8 fJ/conversion-step, the best FoM to date (2010) for 12-bit ADCs. The SAR ADC core occupies 0.06 mm², while the estimated area of the calibration circuits is 0.03 mm². The second proposed digital calibration technique is a bit-wise-correlation-based digital calibration. It utilizes the statistical independence of an injected pseudo-random signal and the input signal to correct the DAC mismatch in SAR ADCs. This idea is experimentally verified in a 12-bit 37-MS/s SAR ADC fabricated in 65-nm CMOS implemented by Pingli Huang. This prototype chip achieves a 70.23-dB peak SNDR and an 81.02-dB peak SFDR, while occupying 0.12-mm² silicon area and dissipating 9.14 mW from a 1.2-V supply, with the synthesized digital calibration circuits included. The third work is an 8-bit, 600-MS/s, 10-way time-interleaved SAR ADC array fabricated in a 0.13-μm CMOS process. This work employs an adaptive digital equalization approach to calibrate both intra-channel nonlinearities and inter-channel mismatch errors. The prototype chip achieves 47.4-dB SNDR, 63.6-dB SFDR, less than 0.30-LSB differential nonlinearity (DNL), and less than 0.23-LSB integral nonlinearity (INL). The ADC array occupies an active area of 1.35 mm² and dissipates 30.3 mW, including synthesized digital calibration circuits and an on-chip dual-loop delay-locked loop (DLL) for clock generation and synchronization.
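
As a loose, foreground-style illustration of the code-domain idea (estimate the actual bit weights from the raw comparator decisions and reconstruct with those weights), the sketch below uses a known reference ramp and least squares. The perturbation-based and bit-wise-correlation background schemes in the dissertation are considerably more involved and are not reproduced here; the mismatch values and signal are invented.

```python
# Simplified code-domain calibration of a SAR ADC: treat the raw bit decisions
# as a code vector and estimate the physical bit weights by least squares
# against a known reference ramp. Mismatch values are made up.
import numpy as np

rng = np.random.default_rng(3)
n_bits = 12
ideal_w = 2.0 ** np.arange(n_bits - 1, -1, -1)            # 2048, 1024, ..., 1
true_w = ideal_w * (1 + rng.normal(0, 0.01, n_bits))      # capacitor mismatch

def sar_conversion(vin, weights):
    """Greedy successive approximation using the (mismatched) physical weights."""
    bits = np.zeros(n_bits)
    acc = 0.0
    for i in range(n_bits):
        if vin >= acc + weights[i]:
            bits[i] = 1.0
            acc += weights[i]
    return bits

# Collect raw codes for a known slow ramp (foreground-style calibration input).
vin = rng.uniform(0, ideal_w.sum(), 4000)
codes = np.array([sar_conversion(v, true_w) for v in vin])

# Code-domain linear equalization: solve codes @ w ~= vin for the weights.
w_hat, *_ = np.linalg.lstsq(codes, vin, rcond=None)

raw_out = codes @ ideal_w      # uncalibrated digital output
cal_out = codes @ w_hat        # calibrated output
print("rms error, uncalibrated:", np.sqrt(np.mean((raw_out - vin) ** 2)))
print("rms error, calibrated:  ", np.sqrt(np.mean((cal_out - vin) ** 2)))
```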

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Prediction of soft tissue changes following orthognathic surgery has been attempted frequently in past decades. It has gradually progressed from the classic "cut and paste" of photographs to computer-assisted 2D surgical prediction planning; finally, comprehensive 3D surgical planning was introduced to help surgeons and patients decide on the magnitude and direction of surgical movements, as well as the type of surgery to be considered for the correction of facial dysmorphology. A wealth of experience has been gained, and an extensive published literature is available which has augmented knowledge of facial soft tissue behaviour and improved the ability to closely simulate facial changes following orthognathic surgery, particularly since the introduction of three-dimensional imaging into medical research and clinical applications. Several approaches have been considered to mathematically predict soft tissue changes in three dimensions following orthognathic surgery; the most common are the finite element model and the mass tensor model. These have been developed into software packages which are currently used in clinical practice. In general, these methods produce an acceptable level of prediction accuracy of soft tissue changes following orthognathic surgery. Studies, however, have shown limited prediction accuracy at specific regions of the face, in particular the areas around the lips.

Aims: The aim of this project is to conduct a comprehensive assessment of hard and soft tissue changes following orthognathic surgery and to introduce a new method for prediction of facial soft tissue changes.

Methodology: The study was carried out on the pre- and post-operative CBCT images of 100 patients who received their orthognathic surgery treatment at Glasgow Dental Hospital and School, Glasgow, UK. Three groups of patients were included in the analysis: patients who underwent Le Fort I maxillary advancement surgery, bilateral sagittal split mandibular advancement surgery, or bimaxillary advancement surgery. A generic facial mesh was used to standardise the information obtained from each individual patient's facial image, and principal component analysis (PCA) was applied to interpolate the correlations between the skeletal surgical displacement and the resultant soft tissue changes. The identified relationship between hard tissue and soft tissue was then applied to a new set of preoperative 3D facial images, and the predicted results were compared to the actual surgical changes measured from the post-operative 3D facial images. A set of validation studies was conducted, including:

• Comparison between voxel-based registration and surface registration to analyse changes following orthognathic surgery. The results showed no statistically significant difference between the two methods. Voxel-based registration, however, was more reliable, as it preserved the link between the soft tissue and skeletal structures of the face during the image registration process. Accordingly, voxel-based registration was the method of choice for superimposition of the pre- and post-operative images. The result of this study was published in a refereed journal.

• Direct DICOM slice landmarking: a novel technique to quantify the direction and magnitude of skeletal surgical movements. This method represents a new approach to quantifying maxillary and mandibular surgical displacement in three dimensions. The technique measures the distances of corresponding landmarks, digitised directly on DICOM image slices, in relation to three-dimensional reference planes. The accuracy of the measurements was assessed against a set of "gold standard" measurements extracted from simulated model surgery; the results confirmed the accuracy of the method to within 0.34 mm, and the method was therefore applied in this study. The results of this validation were published in a peer-refereed journal.

• The use of a generic mesh to assess soft tissue changes using stereophotogrammetry. The generic facial mesh played a major role in the soft tissue dense correspondence analysis. The conformed generic mesh represented the geometrical information of the individual facial mesh to which it was conformed (elastically deformed); therefore, the accuracy of generic mesh conformation is essential to guarantee an accurate replica of the individual facial characteristics. The results showed an acceptable overall mean error of the generic mesh conformation (1 mm). The results of this study were accepted for publication in a peer-refereed scientific journal.

Skeletal tissue analysis was performed using the validated direct DICOM slice landmarking method, while soft tissue analysis was performed using dense correspondence analysis. The soft tissue analysis was novel and produced a comprehensive description of facial changes in response to orthognathic surgery; the results were accepted for publication in a refereed scientific journal. The main soft tissue changes associated with Le Fort I surgery were advancement at the midface region combined with widening of the paranasal, upper lip and nostril regions; minor changes were noticed at the tip of the nose and the oral commissures. The main soft tissue changes associated with mandibular advancement surgery were advancement and downward displacement of the chin and lower lip regions, limited widening of the lower lip, and slight reversion of the lower lip vermilion combined with minimal backward displacement of the upper lip; minimal changes were observed at the oral commissures. The main soft tissue changes associated with bimaxillary advancement surgery were generalized advancement of the middle and lower thirds of the face combined with widening of the paranasal, upper lip and nostril regions. In the Le Fort I cases, the correlation between the facial soft tissue changes and the skeletal surgical movements was assessed using PCA. Leave-one-out cross-validation was applied to the 30 cases that had a Le Fort I osteotomy to make effective use of the data for the prediction algorithm. The prediction accuracy of soft tissue changes showed a mean error ranging from 0.0006 mm (±0.582) at the nose region to −0.0316 mm (±2.1996) across the other facial regions.
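
A minimal sketch of the leave-one-out evaluation step described above, using synthetic stand-ins for the per-patient skeletal displacement vectors and soft tissue mesh differences; the array sizes, the PCA dimension and the linear model are assumptions, not the study's pipeline.

```python
# Leave-one-out evaluation of a linear (PCA-based) mapping from skeletal
# surgical displacement to soft tissue change. Arrays are synthetic stand-ins
# for the per-patient displacement and mesh-difference vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_patients = 30                                   # e.g., the Le Fort I group
hard = rng.normal(size=(n_patients, 9))           # skeletal displacement features
true_map = rng.normal(size=(9, 3 * 50))           # hypothetical linear response
soft = hard @ true_map + rng.normal(0, 0.3, (n_patients, 3 * 50))  # mesh deltas

model = make_pipeline(PCA(n_components=5), LinearRegression())

errors = []
for train, test in LeaveOneOut().split(hard):
    model.fit(hard[train], soft[train])
    pred = model.predict(hard[test])
    errors.append(np.mean(np.abs(pred - soft[test])))
print(f"mean absolute prediction error over LOO folds: {np.mean(errors):.3f} (arbitrary units)")
```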

Relevance:

100.00%

Publisher:

Abstract:

Table olives are consumed and appreciated worldwide and, although their commercial classification is not legally required, the International Olive Council suggests that it be regulated on the basis of sensory evaluation by a panel of trained tasters. Implementing this requires compliance with the guidelines established by the International Olive Council, resulting in a complex, time-consuming task whose assessments are not free from subjectivity. In this work, for the first time, an electronic tongue was used to classify table olives into commercial categories, stipulated on the basis of the presence and the median intensity of the predominant organoleptic defect perceived by the tasting panel. Linear discriminant models were established from subsets of potentiometric signals of the electronic tongue sensors, selected using the simulated annealing algorithm. The qualitative predictive performance of the established classification models was evaluated using the leave-one-out cross-validation technique and the repeated K-fold cross-validation technique, which minimises the risk of overfitting and yields more realistic results. The potential of this qualitative approach, based on the electrochemical profiles generated by the electronic tongue, was satisfactorily demonstrated: (i) in the correct classification (sensitivities ≥ 93%) of standard solutions (n-butyric acid, 2-mercaptoethanol and cyclohexanecarboxylic acid) according to the sensory defect they mimic (butyric, putrid or zapateria); (ii) in the correct classification (sensitivities ≥ 93%) of reference samples of olives and brines (containing a single intense defect), selected by the tasting panel, according to the type of defect perceived (winey-vinegary, butyric, musty, putrid or zapateria); and (iii) in the correct classification (sensitivity ≥ 86%) of highly heterogeneous table olive samples, containing one or more organoleptic defects perceived by the panel in the olives and/or brines, according to their commercial category (defect-free extra olives, extra, 1st choice, 2nd choice, and olives that cannot be marketed as table olives). Finally, the ability of the electronic tongue to quantify the median intensities of the negative attributes detected by the panel in the table olives was demonstrated using multiple linear regression models coupled with the simulated annealing algorithm, based on selected subsets of signals generated by the electronic tongue during the potentiometric analysis of the olives and brines. The predictive performance of the quantitative models was validated using the same two cross-validation techniques. The models established for each of the five sensory defects present in the table olive samples satisfactorily quantified the median defect intensities (R² ≥ 0.97). Thus, the satisfactory quality of the qualitative and quantitative results achieved points, for the first time, to a possible practical application of electronic tongues as a tool for the sensory analysis of defects in table olives, which could be used as a fast, inexpensive and useful technique for the organoleptic evaluation of negative attributes, complementary to traditional sensory analysis by a panel of trained tasters.
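
A brief sketch of the classification-evaluation step on synthetic electronic-tongue-style data: a linear discriminant model scored with repeated stratified K-fold cross-validation. The simulated-annealing selection of sensor-signal subsets used in the work is omitted, and the data are invented.

```python
# Linear discriminant classification of (synthetic) electronic-tongue signals
# into defect categories, scored with repeated stratified K-fold CV.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(11)
n_samples, n_sensors, n_classes = 150, 40, 5      # e.g., five defect categories
y = rng.integers(0, n_classes, n_samples)
# Class-dependent mean shifts make the classes separable.
X = rng.normal(size=(n_samples, n_sensors)) + y[:, None] * 0.4

lda = LinearDiscriminantAnalysis()
cv = RepeatedStratifiedKFold(n_splits=4, n_repeats=10, random_state=0)
scores = cross_val_score(lda, X, y, cv=cv, scoring="balanced_accuracy")
print(f"repeated 4-fold balanced accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```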

Relevance:

100.00%

Publisher:

Abstract:

Current research on achievement goals acknowledges that students can manifest different goal patterns. This study aimed to adapt and validate a self-report scale to assess the goal orientations of Portuguese students. A total of 2675 Portuguese students (age range 9–24 years) completed the Goal Orientations Scale (GOS). Through a cross-validation procedure, confirmatory factor analysis and descriptive statistics support the existence of four different goal orientations: task, self-enhancing, self-defeating and avoidance orientations. The reliability and internal validity estimates confirm that the GOS is an adequate instrument for assessing student goal orientations.