833 results for "Learning from one Example"


Relevance:

100.00%

Publisher:

Abstract:

Teixint Cultures ("Weaving Cultures") is a community action-research project that seeks to promote, through adult education, the learning of Catalan by building on the linguistic resources that mothers of African origin have in their home languages. The programme serves immigrant mothers caring for children under three who are not yet in school (most of them also have school-age children who come to the library in the afternoon to do their homework). The whole project is organised around children's stories. Specifically, it seeks to recover oral-tradition tales from the African continent and then produce bilingual educational materials (in Catalan and in the mothers' languages) that can be used as educational reference books both in public libraries and in schools. The programme runs weekly, for two hours, at the Massagran children's public library in Salt. The mothers tell stories and legends that were meaningful in their childhood in their own language and, through narration, translation, and dramatisation activities based on a dual-language methodology, they produce written texts and oral narratives in Catalan and in their own languages. The participants themselves then type up the stories and digitally produce a bilingual storybook. The sessions were recorded in audio and video, and we studied the implementation of the programme and the multilingual strategies used by the participants, educators, and volunteers. The results show the women's progress in learning Catalan, the modification of their linguistic attitudes and self-image, and the programme's positive impact on the community in terms of the recognition of the linguistic and cultural resources of ethnic minorities.

Relevance:

100.00%

Publisher:

Abstract:

A conceptually new approach is introduced for the decomposition of the molecular energy calculated at the density functional theory level into a sum of one- and two-atomic energy components, and is realized in the "fuzzy atoms" framework. (Fuzzy atoms mean that the three-dimensional physical space is divided into atomic regions having no sharp boundaries but exhibiting a continuous transition from one to another.) The new scheme uses the new concept of "bond order density" to calculate the diatomic exchange energy components and yields values unexpectedly close to those calculated by the exact (Hartree-Fock) exchange for the same Kohn-Sham orbitals.
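As a schematic illustration (the symbols below are generic placeholders, not the paper's own notation), an energy decomposition in the fuzzy-atoms framework has the form

$$E = \sum_A E_A + \sum_{A<B} E_{AB},$$

where each atomic region $A$ is defined by a weight function $w_A(\mathbf{r}) \ge 0$ satisfying $\sum_A w_A(\mathbf{r}) = 1$ at every point of space, so that atoms overlap smoothly instead of ending at sharp boundaries.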

Relevance:

100.00%

Publisher:

Abstract:

Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time, and of computing, from the estimated motion field, inputs for further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated within a 4D to 2D + t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D + t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. To further characterize vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even when a substantially inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall.
Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it must be applied to a larger patient population before wide application to routine endovascular procedures becomes possible. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis with this imaging modality, thus making future modeling of the biomechanical properties of the vascular wall possible.
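The wall-thickness approximation via strain can be illustrated with a minimal sketch (the function name and the two-point wall segment below are hypothetical; the paper's actual strain computation operates on the full 4D motion field):

```python
import numpy as np

def engineering_strain(ref_pts, def_pts):
    """First-order (engineering) strain of wall segments.

    ref_pts, def_pts: (N, 3) arrays of corresponding points on the
    vessel wall in the reference and deformed configurations.
    Returns the per-segment strain (L - L0) / L0.
    """
    l0 = np.linalg.norm(np.diff(ref_pts, axis=0), axis=1)  # reference lengths
    l = np.linalg.norm(np.diff(def_pts, axis=0), axis=1)   # deformed lengths
    return (l - l0) / l0

# A segment stretched from length 1.0 to 1.1 exhibits 10% strain.
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deformed = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0]])
print(engineering_strain(ref, deformed))  # → [0.1]
```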

Relevance:

100.00%

Publisher:

Abstract:

Our understanding of metabolism is undergoing a dramatic shift. Indeed, the efforts made towards elucidating the mechanisms controlling the major regulatory pathways are now being rewarded. At the molecular level, the crucial role of transcription factors is particularly well-illustrated by the link between alterations of their functions and the occurrence of major metabolic diseases. In addition, the possibility of manipulating the ligand-dependent activity of some of these transcription factors makes them attractive as therapeutic targets. The aim of this review is to summarize recent knowledge on the transcriptional control of metabolic homeostasis. We first review data on the transcriptional regulation of the intermediary metabolism, i.e., glucose, amino acid, lipid, and cholesterol metabolism. Then, we analyze how transcription factors integrate signals from various pathways to ensure homeostasis. One example of this coordination is the daily adaptation to the circadian fasting and feeding rhythm. This section also discusses the dysregulations causing the metabolic syndrome, which reveals the intricate nature of glucose and lipid metabolism and the role of the transcription factor PPARgamma in orchestrating this association. Finally, we discuss the molecular mechanisms underlying metabolic regulations, which provide new opportunities for treating complex metabolic disorders.

Relevance:

100.00%

Publisher:

Abstract:

One of the key emphases of these three essays is to provide practical managerial insight. However, good practical insight can only be created by grounding it firmly in theoretical and empirical research. Practical experience-based understanding without theoretical grounding remains tacit and cannot be easily disseminated. Theoretical understanding without links to real life remains sterile. My studies aim to increase the understanding of how radical innovation can be generated at large established firms and how it can have an impact on business performance, as most businesses pursue innovation with one prime objective: value creation. My studies focus on large established firms with sales revenue exceeding USD 1 billion. Usually large established firms cannot rely on informal ways of management, as these firms tend to be multinational businesses operating with subsidiaries, offices, or production facilities in more than one country. I. Internal and External Determinants of Corporate Venture Capital Investment. The goal of this chapter is to focus on corporate venture capital (CVC) as one of the mechanisms available for established firms to source new ideas that can be exploited. We explore the internal and external determinants under which established firms engage in CVC to source new knowledge through investment in startups. We attempt to make scholars and managers aware of the forces that influence CVC activity by providing findings and insights to facilitate the strategic management of CVC. There are research opportunities to further understand the CVC phenomenon. Why do companies engage in CVC? What motivates them to continue "playing the game" and keep their active CVC investment status? The study examines CVC investment activity and the importance of understanding the influential factors that make a firm decide to engage in CVC. The main question is: How do established firms' CVC programs adapt to changing internal conditions and external environments?
Adaptation typically involves learning from exploratory endeavors, which enable companies to transform the ways they compete (Guth & Ginsberg, 1990). Our study extends the current stream of research on CVC. It aims to contribute to the literature by providing an extensive comparison of internal and external determinants leading to CVC investment activity. To our knowledge, this is the first study to examine the influence of internal and external determinants on CVC activity throughout specific expansion and contraction periods determined by structural breaks occurring between 1985 and 2008. Our econometric analysis indicates a strong and significant positive association between CVC activity and R&D, cash flow availability, and environmental financial market conditions, as well as a significant negative association between sales growth and the decision to engage in CVC. The analysis of this study reveals that CVC investment is highly volatile, as demonstrated by dramatic fluctuations in CVC investment activity over the past decades. When analyzing the overall cyclical CVC period from 1985 to 2008, the results of our study suggest that CVC activity follows a pattern influenced by financial factors such as the level of R&D, free cash flow, lack of sales growth, and external conditions of the economy, with the NASDAQ price index as the most significant variable influencing CVC during this period. II. Contribution of CVC and its Interaction with R&D to Value Creation. The second essay takes into account the demands of corporate executives and shareholders regarding business performance and value-creation justifications for investments in innovation. Billions of dollars are invested in CVC and R&D. However, there is little evidence that CVC and its interaction with R&D create value. Firms operating in dynamic business sectors seek to innovate to create the value demanded by changing market conditions, consumer preferences, and competitive offerings.
Consequently, firms operating in such business sectors put a premium on finding new, sustainable and competitive value propositions. CVC and R&D can help them in this challenge. Dushnitsky and Lenox (2006) presented evidence that CVC investment is associated with value creation. However, studies have shown that the most innovative firms do not necessarily benefit from innovation. For instance, Oyon (2007) indicated that between 1995 and 2005 the most innovative automotive companies did not obtain adequate rewards for shareholders. The interaction between CVC and R&D has generated much debate in the CVC literature. Some researchers see them as substitutes, suggesting that firms have to choose between CVC and R&D (Hellmann, 2002), while others expect them to be complementary (Chesbrough & Tucci, 2004). This study explores the interaction that CVC and R&D have on value creation. This essay examines the impact of CVC and R&D on value creation over sixteen years across six business sectors and different geographical regions. Our findings suggest that the effect of CVC and its interaction with R&D on value creation is positive and significant. In dynamic business sectors technologies rapidly become obsolete; consequently, firms operating in such sectors need to continuously develop new sources of value creation (Eisenhardt & Martin, 2000; Qualls, Olshavsky, & Michaels, 1981). We conclude that in order to impact value creation, firms operating in business sectors such as Engineering & Business Services and Information & Communication Technology ought to consider CVC as a vital element of their innovation strategy. Moreover, regarding the CVC and R&D interaction effect, our findings suggest that R&D and CVC are complementary to value creation; hence firms in certain business sectors can be better off supporting both R&D and CVC simultaneously to increase the probability of generating value creation. III.
MCS and Organizational Structures for Radical Innovation. Incremental innovation is necessary for continuous improvement, but it does not provide a sustainable permanent source of competitiveness (Cooper, 2003). On the other hand, radical innovation pursuing new technologies and new market frontiers can generate new platforms for growth, providing firms with competitive advantages and high economic margin rents (Duchesneau et al., 1979; Markides & Geroski, 2005; O'Connor & DeMartino, 2006; Utterback, 1994). Interestingly, not all companies distinguish between incremental and radical innovation, and more importantly, firms that manage innovation through a one-size-fits-all process can almost guarantee a sub-optimization of certain systems and resources (Davila et al., 2006). Moreover, we conducted research on the utilization of management control systems (MCS) along with radical innovation and flexible organizational structures, as these have been associated with firm growth (Cooper, 2003; Davila & Foster, 2005, 2007; Markides & Geroski, 2005; O'Connor & DeMartino, 2006). Davila et al. (2009) identified research opportunities for innovation management and provided a list of pending issues: How do companies manage the process of radical and incremental innovation? What are the performance measures companies use to manage radical ideas, and how do they select them? The fundamental objective of this paper is to address the following research question: What are the processes, MCS, and organizational structures for generating radical innovation? Moreover, in recent years, research on innovation management has been conducted mainly at either the firm level (Birkinshaw, Hamel, & Mol, 2008a) or the project level, examining appropriate management techniques associated with high levels of uncertainty (Burgelman & Sayles, 1988; Dougherty & Heller, 1994; Jelinek & Schoonhoven, 1993; Kanter, North, Bernstein, & Williamson, 1990; Leifer et al., 2000).
Therefore, we embarked on a novel process-related research framework to observe the process stages, MCS, and organizational structures that can generate radical innovation. This article is based on a case study at Alcan Engineered Products, a division of a multinational provider of lightweight material solutions. Our observations suggest that incremental and radical innovation should be managed through different processes, MCS, and organizational structures, which ought to be activated and adapted contingent on the type of innovation being pursued (i.e., incremental or radical). More importantly, we conclude that radical innovation can be generated in a systematic way through enablers such as processes, MCS, and organizational structures. This is in line with the findings of Jelinek and Schoonhoven (1993) and Davila et al. (2006; 2007), who show that innovative firms have institutionalized mechanisms, arguing that radical innovation cannot occur in an organic environment where flexibility and consensus are the main managerial mechanisms. They rather argue that radical innovation requires a clear organizational structure and formal MCS.

Relevance:

100.00%

Publisher:

Abstract:

Optimum experimental designs depend on the design criterion, the model, and the design region. The talk will consider the design of experiments for regression models in which there is a single response with the explanatory variables lying in a simplex. One example is experiments on various compositions of glass, such as those considered by Martin, Bursnall, and Stillman (2001). Because of the highly symmetric nature of the simplex, the class of models that are of interest, typically Scheffé polynomials (Scheffé 1958), are rather different from those of standard regression analysis. The optimum designs are also rather different, inheriting a high degree of symmetry from the models. In the talk I hope to discuss a variety of models for such experiments. Then I will discuss constrained mixture experiments, when not all of the simplex is available for experimentation. Other important aspects include mixture experiments with extra non-mixture factors and the blocking of mixture experiments. Much of the material is in Chapter 16 of Atkinson, Donev, and Tobias (2007). If time and my research allow, I hope to finish with a few comments on design when the responses, rather than the explanatory variables, lie in a simplex.

References

Atkinson, A. C., A. N. Donev, and R. D. Tobias (2007). Optimum Experimental Designs, with SAS. Oxford: Oxford University Press.

Martin, R. J., M. C. Bursnall, and E. C. Stillman (2001). Further results on optimal and efficient designs for constrained mixture experiments. In A. C. Atkinson, B. Bogacka, and A. Zhigljavsky (Eds.), Optimal Design 2000, pp. 225–239. Dordrecht: Kluwer.

Scheffé, H. (1958). Experiments with mixtures. Journal of the Royal Statistical Society, Ser. B 20, 344–360.
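For reference, the second-order Scheffé polynomial in $q$ mixture components has the canonical form

$$\mathrm{E}(y) = \sum_{i=1}^{q} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j, \qquad \sum_{i=1}^{q} x_i = 1, \quad x_i \ge 0,$$

with no intercept and no pure quadratic terms, since the mixture constraint $\sum_i x_i = 1$ makes them redundant.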

Relevance:

100.00%

Publisher:

Abstract:

Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$ there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
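In display form, the strong statement sketched above reads: for any sequence of learning rules $\{g_n\}$ there exist a fixed distribution of $X$ and a fixed concept $C$ such that

$$\mathbf{E}\,L(g_n) \ge c\,\frac{k}{n} \quad \text{for infinitely many } n,$$

where $L(g_n)$ denotes the probability of error of $g_n$ and $c$ is a universal constant.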

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study the disability transition probabilities (as well as the mortality probabilities) due to factors concurrent with age, such as income, gender, and education. Although it is well known that ageing and socioeconomic status influence the probability of functional disorders, surprisingly little attention has been paid to the combined effect of those factors along individuals' lives and how this affects the transition from one degree of disability to another. The assumption that tomorrow's disability state is only a function of today's state is very strong, since disability is a complex variable that depends on several elements other than time. This paper contributes to the field in two ways: (1) by addressing the distinction between the initial disability level and the process that leads to its course; (2) by addressing whether and how education, age, and income differentially affect the disability transitions. Using a discrete Markov chain model and a survival analysis, we estimate, by year and individual characteristics, the probability of a change in the state of disability and the duration its progression takes in each case. We find that people with an initial state of disability have a higher propensity to change and take less time to transition between stages. Men do so more frequently than women. Education and income have negative effects on transition. Moreover, we consider the disability benefits associated with those changes along different stages of disability, and therefore we offer some clues on the potential savings from preventive actions that may delay or avoid those transitions. On pure cost considerations, preventive programs for improvement show higher benefits than those for preventing deterioration, and in general terms, those focusing on individuals below 65 should come first. Finally, the trend of disability in Spain seems not to change over the years, and regional differences are not found.
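A minimal sketch of the Markov-chain ingredient, with hypothetical disability states and histories (the actual study conditions transitions on covariates such as age, income, and education, which this toy omits):

```python
import numpy as np

# Hypothetical disability states: 0 = none, 1 = moderate, 2 = severe.
def estimate_transitions(sequences, n_states=3):
    """Estimate a Markov transition matrix by counting observed
    year-to-year moves and normalising each row."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Three invented individual histories over five years.
histories = [[0, 0, 1, 1, 2], [0, 1, 1, 2, 2], [0, 0, 0, 1, 1]]
P = estimate_transitions(histories)
```

Each row of `P` is a probability distribution over next-year states given the current state.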

Relevance:

100.00%

Publisher:

Abstract:

Introduction: Quantitative measures of the degree of lumbar spinal stenosis (LSS), such as the antero-posterior diameter of the canal or the dural sac cross-sectional area, vary widely and do not correlate with clinical symptoms or results of surgical decompression. In an effort to improve the quantification of stenosis, we have developed a grading system based on the morphology of the dural sac and its contents as seen on T2 axial images. The grading comprises seven categories ranging from normal to the most severe stenosis and takes into account the rootlet/CSF content ratio. Material and methods: Fifty T2 axial MRI images taken at disc level from twenty-seven symptomatic lumbar spinal stenosis patients who underwent decompressive surgery were classified into seven categories by five observers and reclassified 2 weeks later by the same investigators. Intra- and inter-observer reliability of the classification were assessed using Cohen's and Fleiss' kappa statistics, respectively. Results: Generally, the morphology grading system itself was well adopted by the observers. Its successful application is strongly influenced by the identification of the dural sac. The average intra-observer Cohen's kappa was 0.53 ± 0.2. The inter-observer Fleiss' kappa was 0.38 ± 0.02 in the first rating and 0.3 ± 0.03 in the second rating repeated after two weeks. Discussion: In this attempt, the teaching of the observers was limited to an introduction to the general idea of the morphology grading system and one example MRI image per category. The identification of the dimensions of the dural sac may be difficult in the absence of a complete T1/T2 MRI image series, as was the case here. The similarity of the CSF to fat possibly present on T2 images was the main reason for mismatches in the assignment of cases to a category. The Fleiss correlation factors of the five observers are fair, and the proposed morphology grading system is promising.
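The Cohen's kappa used for the intra-observer agreement can be sketched as follows (the two rating vectors below are hypothetical, not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2, n_cat):
    """Cohen's kappa for two raters assigning items to n_cat categories."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                        # observed agreement
    p1 = np.bincount(r1, minlength=n_cat) / len(r1)
    p2 = np.bincount(r2, minlength=n_cat) / len(r2)
    pe = np.sum(p1 * p2)                          # chance agreement
    return (po - pe) / (1 - pe)

# Two hypothetical ratings of 8 images into 4 grades.
a = [0, 1, 2, 2, 3, 0, 1, 3]
b = [0, 1, 2, 3, 3, 0, 2, 3]
kappa = cohens_kappa(a, b, 4)
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why it is preferred over simple percent agreement for reliability studies.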

Relevance:

100.00%

Publisher:

Abstract:

To be diagnostically useful, structural MRI must reliably distinguish Alzheimer's disease (AD) from normal aging in individual scans. Recent advances in statistical learning theory have led to the application of support vector machines to MRI for detection of a variety of disease states. The aims of this study were to assess how successfully support vector machines assigned individual diagnoses and to determine whether data sets combined from multiple scanners and different centres could be used to obtain effective classification of scans. We used linear support vector machines to classify the grey matter segment of T1-weighted MR scans from pathologically proven AD patients and cognitively normal elderly individuals obtained from two centres with different scanning equipment. Because the clinical diagnosis of mild AD is difficult, we also tested the ability of support vector machines to differentiate control scans from those of patients without post-mortem confirmation. Finally, we sought to use these methods to differentiate scans of patients suffering from AD from those of patients with frontotemporal lobar degeneration. Up to 96% of pathologically verified AD patients were correctly classified using whole brain images. Data from different centres were successfully combined, achieving results comparable to the separate analyses. Importantly, data from one centre could be used to train a support vector machine to accurately differentiate AD and normal ageing scans obtained from another centre with different subjects and different scanner equipment. Patients with mild, clinically probable AD and age- and sex-matched controls were correctly separated in 89% of cases, which is compatible with published diagnosis rates in the best clinical centres. This method correctly assigned 89% of patients with a post-mortem confirmed diagnosis of either AD or frontotemporal lobar degeneration to their respective group.
Our study leads to three conclusions: Firstly, support vector machines successfully separate patients with AD from healthy aging subjects. Secondly, they perform well in the differential diagnosis of two different forms of dementia. Thirdly, the method is robust and can be generalized across different centres. This suggests an important role for computer based diagnostic image analysis for clinical practice.
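A toy sketch of the linear SVM classification step, trained here by Pegasos-style sub-gradient descent on synthetic "grey matter" feature vectors (the feature construction and the solver are illustrative stand-ins, not the study's actual pipeline):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Toy linear SVM via Pegasos-style sub-gradient descent.
    Labels y must be in {-1, +1}. Returns the weight vector (bias folded in)."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias feature
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * (Xb[i] @ w) < 1:          # point inside the margin
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Synthetic stand-in for grey-matter feature vectors: a small mean shift
# in the "patient" class loosely mimics atrophy-related differences.
rng = np.random.default_rng(1)
controls = rng.normal(0.0, 1.0, (40, 100))
patients = rng.normal(0.4, 1.0, (40, 100))
X = np.vstack([controls, patients])
y = np.array([-1] * 40 + [1] * 40)

w = train_linear_svm(X, y)
acc = np.mean(np.sign(np.hstack([X, np.ones((80, 1))]) @ w) == y)
```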

Relevance:

100.00%

Publisher:

Abstract:

Neural signatures of humans' movement intention can be exploited by future neuroprostheses. We propose a method for detecting self-paced upper limb movement intention from brain signals acquired with both invasive and noninvasive methods. In the first study, with scalp electroencephalography (EEG) signals from healthy controls, we report single-trial detection of movement intention using movement-related potentials (MRPs) in a frequency range between 0.1 and 1 Hz. Movement intention can be detected above chance level (p<0.05) on average 460 ms before the movement onset, with a low detection rate during the non-movement-intention period. Using intracranial EEG (iEEG) from one epileptic subject, we detect movement intention as early as 1500 ms before movement onset with accuracy above 90% using electrodes implanted in the bilateral supplementary motor area (SMA). The coherent results obtained with the noninvasive and invasive methods, and their generalization capabilities across different days of recording, strengthen the theory that self-paced movement intention can be detected before movement initiation for the advancement of robot-assisted neurorehabilitation.
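Extracting the 0.1-1 Hz MRP band can be sketched with a crude FFT band-pass on synthetic data (the sampling rate and signal composition below are invented for illustration; real pipelines typically use proper IIR/FIR filters):

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT band-pass: zero all frequency bins outside [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.fft.rfft(x)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(x))

fs = 64                     # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic "EEG": a slow 0.5 Hz movement-related drift buried in
# 10 Hz alpha-band activity and broadband noise.
x = (0.5 * np.sin(2 * np.pi * 0.5 * t)
     + np.sin(2 * np.pi * 10 * t)
     + 0.2 * rng.standard_normal(len(t)))

mrp = bandpass_fft(x, fs, 0.1, 1.0)   # recovers the slow component
```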

Relevance:

100.00%

Publisher:

Abstract:

In situ UV-laser ablation 40Ar/39Ar geochronological and geochemical data, together with rock and mineral compositional data, have been determined from pseudotachylyte and surrounding mylonitic gneiss associated with the UHP whiteschists of the Dora Maira Massif, Italy. Several generations of fresh pseudotachylyte occur as irregular veins up to a few cm thick, both parallel and at high angles to the foliation. Whole-rock XRF data collected from representative lithologies of mylonitic gneiss are uniformly consistent with a mildly alkalic granitic protolith. Minimal compositional variation is observed between the pseudotachylyte and its surrounding mylonitic gneiss. The pseudotachylyte contains newly crystallized grains of biotite and K-feldspar in a matrix of glass with partially fused grains of quartz, zircon, apatite, and titanite. Electron microprobe analyses of the glass show significant compositional variation that is probably strongly influenced by micrometer-scale changes in mineralogy. UV-laser ablation ICP-MS traverses across the mylonitic gneiss-pseudotachylyte contact are consistent with cataclastic comminution of REE carriers such as epidote, monazite, allanite, zircon, and apatite before melting as an efficient mechanism of REE homogenization in the pseudotachylyte. The 40Ar/39Ar data from one band of pseudotachylyte indicate formation at 20.1 +/- 0.5 Ma, when the mylonitic gneisses were already in a near-surface position. The variable effects of top-to-the-west shear deformation within outcrops of the coesite-bearing unit are reflected in localized zones of protomylonite, cataclasite, ultracataclasite, and pseudotachylyte. Preservation of several generations of pseudotachylyte suggests that seismic events may have played a significant role in triggering late unroofing of the UHP rocks. It is speculated that deeper crustal seismic events potentially played a role in the unroofing of the UHP rocks at earlier stages in their exhumation history.

Relevance:

100.00%

Publisher:

Abstract:

Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to automatically yet accurately distinguish objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other previously acquired images. This option is attractive, but several factors such as atmospheric, ground, and acquisition conditions can cause radiometric differences between the images, therefore hindering the transfer of knowledge from one image to another. The goal of this thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of the pertinence to the new image of the initial training data, which actually belong to a different image.
Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels in a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are highly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in the acquisition geometry affecting series of multi-angle images. Also, we gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data space of two images. The projection function bridging the images allows a synthesis of new pixels with more similar characteristics ultimately facilitating the land-cover mapping across images.
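As a minimal illustration of relative radiometric normalization, the sketch below matches each band of a source image to the mean and standard deviation of a reference image (a simple global scheme; the thesis studies more sophisticated, e.g. kernel-based, approaches, and the image shapes here are invented):

```python
import numpy as np

def relative_normalization(src, ref):
    """Match each band of `src` to the per-band mean/std of `ref`."""
    out = np.empty_like(src, dtype=float)
    for b in range(src.shape[-1]):
        s, r = src[..., b], ref[..., b]
        out[..., b] = (s - s.mean()) / s.std() * r.std() + r.mean()
    return out

rng = np.random.default_rng(0)
ref_img = rng.normal(100, 20, (64, 64, 4))   # 4-band reference image
src_img = 1.3 * ref_img + 15                 # same scene, shifted radiometry
aligned = relative_normalization(src_img, ref_img)
```

Because the simulated radiometric shift is affine with positive gain, mean/std matching recovers the reference statistics exactly; real cross-image shifts are more complex, which motivates the learned approaches above.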

Relevance:

100.00%

Publisher:

Abstract:

Step bunching develops in the epitaxy of SrRuO3 on vicinal SrTiO3(001) substrates. We have investigated the formation mechanisms, and we show here that step bunching forms by lateral coalescence of wedgelike three-dimensional islands that are nucleated at substrate steps. After coalescence, wedgelike islands become wider and straighter with growth, forming a self-organized network of parallel step bunches with heights exceeding 30 unit cells, separated by atomically flat terraces. The formation mechanism of step bunching in SrRuO3, from nucleated islands, radically differs from the one-dimensional models used to describe bunching in semiconducting materials. These results illustrate that growth phenomena of complex oxides can be dramatically different from those in semiconducting or metallic systems.

Relevance:

100.00%

Publisher:

Abstract:

The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) consists of 8 pollutants. There are different relationships between these variables, from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants which can be efficiently modeled together within the multitask framework. The proposed two-step approach is based on: 1) a criterion of nonlinear predictability of each variable, obtained by analyzing all possible models composed from the rest of the variables, using a General Regression Neural Network (GRNN) as the model; 2) multitask learning of the best model using a multilayer perceptron and spatial predictions. The results of the study are analyzed using both machine learning and geostatistical tools.
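A GRNN is equivalent to Nadaraya-Watson kernel regression: predictions are Gaussian-weighted averages of the training targets. A minimal sketch follows (the 1-D data are hypothetical; the study works with multivariate spatial data):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network (Nadaraya-Watson kernel regression)."""
    # Squared distances between every query point and every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)        # weighted average of targets

# Hypothetical 1-D example: recover a smooth trend from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xq = np.array([[2.0], [5.0]])
pred = grnn_predict(X, y, Xq, sigma=0.3)
```

The bandwidth `sigma` is the only tuned parameter, which is one reason the GRNN is convenient as a screening model for the predictability criterion in step 1.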