34 results for AMS Classification::65 Numerical analysis::65D Numerical approximation and computational geometry
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods such as principal components analysis, linked plots, or brushing are used. These methods cannot directly be employed when dealing with missing data, and while they struggle to capture global non-linear structures in the data, they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms such as Isomap and to fit complex non-linear structures such as the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
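As a rough illustration of what initialising a GTM from an arbitrary projection could look like, the sketch below assumes the standard GTM parameterisation y(z) = W phi(z) with a Gaussian RBF basis and uses scikit-learn's Isomap as the projection; fitting W by least squares to a linear back-mapping of the embedding is an assumption made for this example, not the procedure developed in the thesis.

# Illustrative sketch: initialise a GTM-style mapping W from an arbitrary
# 2-D projection (here Isomap) instead of the usual PCA initialisation.
# Grid sizes and names are arbitrary choices for the example.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LinearRegression

def rbf_basis(latent, centres, width):
    # Gaussian RBF design matrix: Phi[k, m] = exp(-|z_k - mu_m|^2 / (2 w^2))
    d2 = ((latent[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def init_gtm_from_projection(X, grid=15, n_rbf=4, width=1.0):
    # Regular latent grid z_k in [-1, 1]^2 and RBF centres mu_m.
    g = np.linspace(-1, 1, grid)
    latent = np.array(np.meshgrid(g, g)).reshape(2, -1).T
    c = np.linspace(-1, 1, n_rbf)
    centres = np.array(np.meshgrid(c, c)).reshape(2, -1).T
    Phi = rbf_basis(latent, centres, width)

    # Arbitrary 2-D projection of the data (Isomap here; could be any embedding).
    E = Isomap(n_components=2).fit_transform(X)
    E = (E - E.mean(0)) / E.std(0)  # put the embedding on the latent scale

    # Linear back-mapping from embedding coordinates to data space, evaluated
    # at the latent grid, gives target positions for y(z_k) = W phi(z_k).
    targets = LinearRegression().fit(E, X).predict(latent)

    # Solve for W by least squares so that Phi @ W approximates the targets.
    W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    return W, Phi, latent

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(500, 10))  # toy stand-in data
    W, Phi, latent = init_gtm_from_projection(X)
    print(W.shape, Phi.shape)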
Abstract:
The literature on technology spillovers from trade and FDI is ambiguous in its findings. This may in part be because of the assumption in much of the work that trade and FDI flows are homogeneous in their determinants and thus in their effects. We develop a taxonomy of trade and FDI determinants based on R&D intensity and unit labour cost differentials, and test for the presence of spillovers from inward investment and imports on an extensive sample of UK manufacturing plants. We find that both trade and FDI have measurable spillover effects, but the sign and extent of these effects vary depending on the technological and factor cost differentials between the recipient and host economies. There is therefore an identifiable link between the determinants and effects of trade and FDI which the previous literature has not explored.
Abstract:
Purpose: In today's competitive scenario, effective supply chain management is increasingly dependent on third-party logistics (3PL) companies' capabilities and performance. The dissemination of information technology (IT) has contributed to changing the supply chain role of 3PL companies, and IT is considered an important element influencing the performance of modern logistics companies. Therefore, the purpose of this paper is to explore the relationship between IT and 3PLs' performance, assuming that logistics capabilities play a mediating role in this relationship. Design/methodology/approach: Empirical evidence based on a questionnaire survey conducted on a sample of logistics service companies operating in the Italian market was used to test a conceptual resource-based view (RBV) framework linking IT adoption, logistics capabilities and firm performance. Factor analysis and ordinary least squares (OLS) regression analysis have been used to test hypotheses. The focus of the paper is multidisciplinary in nature; management of information systems, strategy, logistics and supply chain management approaches have been combined in the analysis. Findings: The results indicate strong relationships among data gathering technologies, transactional capabilities and firm performance, in terms of both efficiency and effectiveness. Moreover, a positive correlation between enterprise information technologies and 3PL financial performance has been found. Originality/value: The paper successfully uses the concept of logistics capabilities as a mediating factor between IT adoption and firm performance. Objective measures have been proposed for IT adoption and logistics capabilities. Direct and indirect relationships among variables have been successfully tested. © Emerald Group Publishing Limited.
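As a rough sketch of the kind of mediation test described here, the snippet below runs two ordinary least squares regressions with statsmodels; the column names (it_adoption, capabilities, performance) and the simple two-step structure are hypothetical placeholders, not the authors' actual measurement model.

# Illustrative two-step OLS mediation check: does IT adoption affect firm
# performance through logistics capabilities? Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def mediation_ols(df: pd.DataFrame):
    # Step 1: mediator regressed on the predictor.
    m1 = sm.OLS(df["capabilities"], sm.add_constant(df[["it_adoption"]])).fit()
    # Step 2: outcome regressed on predictor and mediator together; a shrunken
    # IT coefficient alongside a significant mediator coefficient is
    # consistent with (partial) mediation.
    m2 = sm.OLS(df["performance"],
                sm.add_constant(df[["it_adoption", "capabilities"]])).fit()
    return m1, m2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    it = rng.normal(size=200)
    cap = 0.6 * it + rng.normal(scale=0.5, size=200)
    perf = 0.5 * cap + 0.1 * it + rng.normal(scale=0.5, size=200)
    df = pd.DataFrame({"it_adoption": it, "capabilities": cap, "performance": perf})
    m1, m2 = mediation_ols(df)
    print(m1.params, m2.params, sep="\n")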
Abstract:
In this study, the central technique of in vitro culture has been used to further investigate whether LH/FSH-expressing, but clinically "functionless" pituitary adenomas are gonadotropinomas or whether their hormone secretion is due to transdifferentiation events. 664 "functionless" pituitary adenomas were examined for hormone secretion by in vitro culture and for hormone content by immunostaining. The results were correlated with the clinical findings. 40% of the tumours (n = 263) secreted at least one of the gonadotropins alone, 8% (n = 53) exhibited various patterns of anterior pituitary hormones, whilst the remaining 52% of tumours were not associated with any hormone. In the secretory tumours, immunostaining revealed only a few scattered hormone-containing cells (5 to 15%). Mild hyperprolactinaemia was observed in some cases, presumably because of pressure effects of the tumours. The majority of the patients suffered clear-cut hypopituitarism (p < 0.05). Pre-operatively, gonadotropin hypersecretion was observed in 3 cases, but only one of these secreted hormones in culture. Interestingly, a higher proportion of tumours removed from patients with hypopituitarism showed secretory activity in vitro than those tumours removed from patients showing no hormonal dysfunction or hyperprolactinaemia. We conclude that the term "gonadotropinoma" to describe functionless pituitary tumours associated with LH and/or FSH secretion is a misnomer, because the presence of LH and/or FSH confirmed by in vitro methods in the present series is a result of only a few scattered cells. We suggest that primary pituitary tumour cells differentiate into a secretory type (transdifferentiation), possibly in response to altered serum hormone levels such as decreased steroids. Further work is required to identify the factors which trigger the altered cells' characteristics. © J. A. Barth Verlag in Georg Thieme Verlag KG.
Abstract:
Population measures for genetic programs are defined and analysed in an attempt to better understand the behaviour of genetic programming. Some measures are simple, but do not provide sufficient insight. The more meaningful ones are complex and take extra computation time. Here we present a unified view on the computation of population measures through an information hypertree (iTree). The iTree allows for a unified and efficient calculation of population measures via a basic tree traversal. © Springer-Verlag 2004.
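The iTree itself is not specified in this abstract, so the sketch below is only a generic illustration of the underlying idea: population measures over a set of program trees (here, symbol frequencies and mean depth) gathered in a single depth-first traversal per tree.

# Illustrative only: a per-tree depth-first traversal of the kind that
# population measures for genetic programming build on. This is not the
# paper's iTree, just a minimal stand-in for the basic computation.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Node:
    symbol: str
    children: list = field(default_factory=list)

def traverse(node, counts, depth=0):
    # Single depth-first pass: count symbols and return the subtree depth.
    counts[node.symbol] += 1
    if not node.children:
        return depth
    return max(traverse(c, counts, depth + 1) for c in node.children)

def population_measures(population):
    counts = Counter()
    depths = [traverse(tree, counts) for tree in population]
    return counts, sum(depths) / len(depths)

if __name__ == "__main__":
    t1 = Node("+", [Node("x"), Node("*", [Node("x"), Node("y")])])
    t2 = Node("*", [Node("x"), Node("x")])
    counts, mean_depth = population_measures([t1, t2])
    print(counts, mean_depth)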
Abstract:
Signal transduction pathways control cell fate, survival and function. They are organized as intricate biochemical networks which enable biochemical protein activities, crosstalk and subcellular localization to be integrated and tuned to produce highly specific biological responses in a robust and reproducible manner. Post-translational modifications (PTMs) play major roles in regulating these processes through a wide variety of mechanisms that include changes in protein activities, interactions, and subcellular localizations. Determining and analyzing PTMs poses enormous challenges. Recent progress in mass spectrometry (MS) based proteomics has enhanced our capability to map and identify many PTMs. Here we review the current state of proteomic PTM analysis relevant for signal transduction research, focusing on two areas: phosphorylation, which is well established as a widespread key regulator of signal transduction; and oxidative modifications, which, having been primarily viewed as protein damage, are now starting to emerge as important regulatory mechanisms.
Abstract:
PURPOSE: To determine by wavefront analysis the difference between eyes considered normal, eyes diagnosed with keratoconus, and eyes that have undergone penetrating keratoplasty. METHODS: The Nidek OPD-Scan wavefront aberrometer was used to measure ocular aberrations out to the sixth Zernike order. One hundred and thirty eyes that were free of ocular pathology, 41 eyes diagnosed with keratoconus, and 8 eyes that had undergone penetrating keratoplasty were compared for differences in root mean square value. Three- and five-millimeter root mean square values of the refractive power aberrometry maps of the three classes of eyes were compared. Radially symmetric and irregular higher order aberration values were compared for differences in magnitude. RESULTS: Root mean square values were lower in eyes free of ocular pathology compared to eyes with keratoconus and eyes that had undergone penetrating keratoplasty. The aberrations were larger with the 5-mm pupil. Coma and spherical aberration values were lower in normal eyes. CONCLUSION: Wavefront aberrometry of normal eyes, pathological eyes, and eyes after surgery may help to explain the visual distortions encountered by patients. The ability to measure highly aberrated eyes allows an objective assessment of the optical consequences of ocular pathology and surgery. The Nidek OPD-Scan can be used in areas other than refractive surgery.
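For reference, when the Zernike coefficients $c_n^m$ are reported in the standard orthonormal (unit-variance) convention, the root mean square wavefront error compared above is simply the root sum of squares of the fitted coefficients,

\mathrm{RMS} = \sqrt{\sum_{n,m} \left( c_n^m \right)^2},

with the higher-order RMS obtained by restricting the sum to radial orders $n \ge 3$; this is stated here as background to the comparison rather than as a detail taken from the study.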
Abstract:
The aim of this review was to quantify the global variation in childhood myopia prevalence over time, taking account of demographic and study design factors. A systematic review identified population-based surveys with estimates of childhood myopia prevalence published by February 2015. Multilevel binomial logistic regression of the log odds of myopia was used to examine the association with age, gender, urban versus rural setting and survey year, among populations of different ethnic origins, adjusting for study design factors. 143 published articles (42 countries, 374 349 subjects aged 1-18 years, 74 847 myopia cases) were included. The increase in myopia prevalence with age varied by ethnicity. East Asians showed the highest prevalence, reaching 69% (95% credible intervals (CrI) 61% to 77%) at 15 years of age (86% among Singaporean-Chinese). Blacks in Africa had the lowest prevalence: 5.5% at 15 years (95% CrI 3% to 9%). Time trends in myopia prevalence over the last decade were small in whites, but prevalence increased by 23% in East Asians, with a weaker increase among South Asians. Children from urban environments have 2.6 times the odds of myopia compared with those from rural environments. In whites and East Asians, sex differences emerge at about 9 years of age; by late adolescence girls are twice as likely as boys to be myopic. Marked ethnic differences in age-specific prevalence of myopia exist. Rapid increases in myopia prevalence over time, particularly in East Asians, combined with a universally higher risk of myopia in urban settings, suggest that environmental factors play an important role in myopia development, which may offer scope for prevention.
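As an indicative sketch only (the exact covariates, interactions and hierarchy are those described in the review, not reproduced here), a multilevel binomial logistic model of this kind with a survey-level random intercept can be written as

\log\frac{p_{ij}}{1-p_{ij}} = \beta_0 + \beta_1\,\mathrm{age}_{ij} + \beta_2\,\mathrm{female}_{ij} + \beta_3\,\mathrm{urban}_{ij} + \beta_4\,\mathrm{year}_{j} + u_j, \qquad u_j \sim \mathcal{N}(0,\sigma^2),

where $p_{ij}$ is the probability that child $i$ in survey $j$ is myopic.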
Abstract:
Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is completed using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre/post compensation can influence the performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and we see that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We use tools from mathematical statistics to study the behaviour of these channels in order to develop new methods to estimate the Bit Error Rate. In the final section, we consider the estimation of the Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of the eye closure, we see that we can simply estimate the Eye Closure Penalty using Gaussian statistics. We also see that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
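As a minimal illustration of the split-step Fourier method referred to here, the sketch below integrates the scalar, loss-free nonlinear Schrödinger equation with second-order dispersion and Kerr nonlinearity only; the pulse shape and parameter values are arbitrary toy choices, not the modelled undersea system.

# Minimal symmetric split-step Fourier solver for the scalar NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A
# Loss, higher-order dispersion, noise and WDM effects are omitted.
import numpy as np

def ssfm(A0, dt, dz, n_steps, beta2, gamma):
    n = A0.size
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)           # angular frequencies
    half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2.0)  # half-step dispersion operator
    A = A0.astype(complex)
    for _ in range(n_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # D/2 in the frequency domain
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))      # remaining D/2
    return A

if __name__ == "__main__":
    t = np.linspace(-50e-12, 50e-12, 2**12)             # time grid [s]
    A0 = np.sqrt(1e-3) / np.cosh(t / 5e-12)             # toy pulse, ~1 mW peak
    A = ssfm(A0, dt=t[1] - t[0], dz=1e3, n_steps=100,
             beta2=-21e-27, gamma=1.3e-3)               # ~100 km of toy fibre
    print(np.max(np.abs(A)**2))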
Abstract:
In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that represents the peaks and valleys as well as the spatial structure of a waveform symbolically in the form of trees. An initial approximation to the RT, called the Skeletal Tree (ST), is first generated algorithmically. The exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSE values of 0.56 and 0.71 for peaks and valleys, respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness, respectively.
Abstract:
This work is undertaken in the attempt to understand the processes at work at the cutting edge of the twist drill. Extensive drill life testing performed by the University has reinforced a survey of previously published information. This work demonstrated that there are two specific aspects of drilling which have not previously been explained comprehensively. The first concerns the interrelation of process data between differing drilling situations. There is no method currently available which allows the cutting geometry of drilling to be defined numerically, so such comparisons, where made, are purely subjective. Section one examines this problem by taking as an example a 4.5mm drill suitable for use with aluminium. This drill is examined using a prototype solid modelling program to explore how the required numerical information may be generated. The second aspect is the analysis of drill stiffness. What aspects of drill stiffness account for the very great difference in performance between short flute length, medium flute length and long flute length drills? These differences exist between drills of identical point geometry, and the practical superiority of short drills has been known to shop-floor drilling operatives since drilling was first introduced. This problem has been dismissed repeatedly as over-complicated, but section two provides a first approximation and shows that, at least for smaller drills of 4.5mm, the effects are highly significant. Once the cutting action of the twist drill is defined geometrically, there is a huge body of machinability data that becomes applicable to the drilling process. Work remains to interpret the very high inclination angles of the drill cutting process in terms of cutting forces and tool wear, but aspects of drill design may already be looked at in new ways, with the prospect of a more analytical approach rather than the present mix of experience and trial and error. Other problems are specific to the twist drill, such as the behaviour of the chips in the flute. It is now possible to predict the initial direction of chip flow leaving the drill cutting edge. For the future, the parameters of further chip behaviour may also be explored within this geometric model.
Abstract:
The main advantage of Data Envelopment Analysis (DEA) is that it does not require any a priori weights for inputs and outputs and allows each individual DMU to evaluate its efficiency with the input and output weights that are most favorable for calculating its efficiency. It can be argued that if DMUs are experiencing similar circumstances, then the pricing of inputs and outputs should apply uniformly across all DMUs. That is, the use of different weights for different DMUs means that their efficiencies cannot be compared and that it is not possible to rank them on the same basis. This is a significant drawback of DEA; however, the literature offers many solutions, including the use of a common set of weights (CSW). Moreover, conventional DEA methods require accurate measurement of both the inputs and outputs; however, crisp input and output data may not always be available in real-world applications. This paper develops a new model for the calculation of a CSW in fuzzy environments using fuzzy DEA. Further, a numerical example is used to show the validity and efficacy of the proposed model and to compare the results with previous models available in the literature.
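For context, the weight flexibility (and the comparability problem) described above comes from the standard input-oriented CCR multiplier model, in which each evaluated DMU $o$ is free to choose its own output weights $u_r$ and input weights $v_i$:

\max_{u,v}\ \sum_r u_r y_{ro} \quad \text{s.t.} \quad \sum_i v_i x_{io} = 1, \qquad \sum_r u_r y_{rj} - \sum_i v_i x_{ij} \le 0 \ \ \forall j, \qquad u_r, v_i \ge 0.

A common set of weights instead fixes a single $(u, v)$ to be applied to every DMU, so the resulting scores lie on one scale and can be ranked directly; the fuzzy CSW model proposed in the paper builds on this idea.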
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
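For reference, the crisp additive DEA model that the fuzzy extension builds on maximises the total input and output slacks of the evaluated DMU $o$ (a DMU is efficient exactly when all optimal slacks are zero); in the fuzzy version the data $x_{ij}, y_{rj}$ are handled through α-cuts of the fuzzy observations:

\max\ \sum_i s_i^- + \sum_r s_r^+ \quad \text{s.t.} \quad \sum_j \lambda_j x_{ij} + s_i^- = x_{io}\ \forall i, \qquad \sum_j \lambda_j y_{rj} - s_r^+ = y_{ro}\ \forall r, \qquad \sum_j \lambda_j = 1, \qquad \lambda_j, s_i^-, s_r^+ \ge 0.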