830 results for Gradient-based approaches
Abstract:
The purpose of this thesis is to analyse activity-based costing (ABC) and possible modified versions of it in the engineering design context. Design engineers need cost information at their decision-making level, and the cost information should also have a strong future orientation. These demands are high because traditional management accounting has concentrated on the direct actual costs of products. However, cost accounting has progressed: ABC was introduced in the late 1980s and adopted widely by companies in the 1990s. ABC has been a success, but it has also drawn criticism. In some cases ambitious ABC systems have become too complex to build, use and update. This study can be called an action-oriented case study with some normative features. In this thesis theoretical concepts are assessed and allowed to unfold gradually through interaction with data from three cases. The theoretical starting points are ABC and the theory of the engineering design process (chapter 2). Concepts and research results from these theoretical approaches are summarized in two hypotheses (chapter 2.3). The hypotheses are analysed with two cases (chapter 3). After the two case analyses, the ABC part is extended to cover other modern cost accounting methods, e.g. process costing and feature costing (chapter 4.1). The ideas from this second theoretical part are operationalized with the third case (chapter 4.2). The knowledge from the theory and the three cases is summarized in the created framework (chapter 4.3). With the created framework it is possible to analyse ABC and its modifications in the engineering design context. The framework collects the factors that guide the choice of the costing method to be used in engineering design. It also illuminates the contents of various ABC-related costing methods. However, the framework needs further testing. On the basis of the three cases it can be said that ABC should be used cautiously when formulating cost information for engineering design. It is suitable when manufacturing can be considered simple, when the design engineers are not cost conscious, and in the beginning of the design process when doing adaptive or variant design. If the design engineers need cost information for embodiment or detailed design, if manufacturing can be considered complex, or when design engineers are cost conscious, ABC must always be evaluated critically.
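Since the core mechanics of ABC are simple to state, a worked example may help readers unfamiliar with the method: overhead is pooled by activity, each pool is divided by its cost driver volume to give a driver rate, and products are charged for the driver units they consume. The sketch below uses invented pools, rates and design variants purely for illustration; it is not data from the thesis's cases.

```python
# Minimal activity-based costing (ABC) sketch: overhead is traced to
# products via activity cost pools and cost driver rates.
# All figures below are hypothetical, for illustration only.

activity_pools = {                # annual overhead cost per activity (EUR)
    "machine_setups": 90_000,
    "quality_inspections": 40_000,
    "material_handling": 30_000,
}
annual_driver_volume = {          # total driver units consumed per year
    "machine_setups": 300,        # number of setups
    "quality_inspections": 800,   # number of inspections
    "material_handling": 1_500,   # number of material moves
}
driver_rate = {a: activity_pools[a] / annual_driver_volume[a]
               for a in activity_pools}

# Driver consumption of two hypothetical design variants
variants = {
    "variant_A": {"machine_setups": 2, "quality_inspections": 5, "material_handling": 8},
    "variant_B": {"machine_setups": 6, "quality_inspections": 12, "material_handling": 20},
}

for name, usage in variants.items():
    cost = sum(driver_rate[a] * q for a, q in usage.items())
    print(f"{name}: allocated overhead = {cost:.2f} EUR")
```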
Abstract:
Pulsewidth-modulated (PWM) rectifier technology is increasingly used in industrial applications like variable-speed motor drives, since it offers several desired features such as sinusoidal input currents, controllable power factor, bidirectional power flow and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC voltage responses is required. Of the various control strategies proposed to meet these control objectives, most apply the commonly known principle of synchronous-frame current vector control along with some space-vector PWM scheme. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement high-performance PWM rectifiers. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined, and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter, an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter, combined open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed. This method allows truly standalone and independent control of the converter units and avoids traditional transformer isolation and synchronised-control-based solutions. The implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. Satisfactory performance and good agreement between theory and practice are demonstrated.
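The converter flux that anchors the CFCC scheme is, in DTC-style methods, typically estimated by integrating the converter voltage; because a pure integrator drifts with any DC offset, a low-pass-filtered estimator is a common substitute. The discrete-time sketch below illustrates that generic idea in stationary (alpha-beta) coordinates; the signal names, discretisation and corner frequency are illustrative assumptions, not the estimator actually developed in the thesis.

```python
import numpy as np

def estimate_converter_flux(u_alpha, u_beta, Ts, w_c=2 * np.pi * 5):
    """Estimate converter flux in stationary (alpha-beta) coordinates.

    A pure integrator of the converter voltage drifts with any DC offset,
    so a first-order low-pass filter (corner frequency w_c, rad/s) is
    commonly substituted. Ts is the sampling period. All names and the
    choice of w_c are illustrative assumptions.
    """
    psi_alpha = np.zeros_like(u_alpha)
    psi_beta = np.zeros_like(u_beta)
    for k in range(1, len(u_alpha)):
        # Backward-Euler discretisation of dpsi/dt = u - w_c * psi
        psi_alpha[k] = (psi_alpha[k - 1] + Ts * u_alpha[k]) / (1 + Ts * w_c)
        psi_beta[k] = (psi_beta[k - 1] + Ts * u_beta[k]) / (1 + Ts * w_c)
    return psi_alpha, psi_beta
```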
Abstract:
Finland has large forest fuel resources. However, the use of forest fuels for energy production has been low, except for small-scale use in heating. According to national action plans and programs related to wood energy promotion, the utilization of such resources will be multiplied over the next few years. The most significant part of this growth will be based on the utilization of forest fuels, produced from the logging residues of regeneration fellings, in industrial and municipal power and heating plants. The availability of logging residues was analyzed by means of resource and demand approaches in order to identify the regions most suited to increasing forest fuel usage. The analysis included availability and supply cost comparisons between power plant sites, with resources allocated in a least-cost manner, and between plants in a predefined power plant structure under demand and supply constraints. Spatial analysis of worksite factors and regional geographies was carried out in a GIS-model environment using geoprocessing and cartographic modeling tools. According to the results of the analyses, the cost competitiveness of forest fuel supply should be improved in order to achieve the set objectives in the near future. The availability and supply costs of forest fuels varied spatially and were very sensitive to worksite factors and transport distances. According to the site-specific analysis, the supply potential can vary manifold between different locations. However, due to technical and economic constraints on the fuel supply and the dense power plant infrastructure, the supply potential is limited at plant level. Therefore, potential and supply cost calculations depend on site-specific matters, where the regional characteristics of resources and infrastructure should be taken into consideration, for example by using a GIS-modeling approach such as the one constructed in this study.
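The least-cost allocation at the heart of such an availability analysis can be sketched compactly: every supply point is charged a roadside cost plus a distance-dependent transport cost and assigned to the cheapest plant. All coordinates, volumes and cost parameters below are invented, and straight-line distance stands in for the road-network distances a real GIS analysis would use.

```python
import numpy as np

# Hypothetical plant locations (km grid) and forest-fuel supply points
plants = np.array([[10.0, 20.0], [60.0, 45.0]])
supply_points = np.array([[12.0, 25.0], [55.0, 40.0], [35.0, 30.0]])
supply_m3 = np.array([4000.0, 6000.0, 2500.0])  # solid m3 per point

ROADSIDE_COST = 9.0    # EUR/m3 at roadside (illustrative)
TRANSPORT_RATE = 0.12  # EUR/m3 per km (illustrative)

# Distance matrix: supply points x plants (straight-line as a proxy
# for the road-network distances a real GIS analysis would use)
dist = np.linalg.norm(supply_points[:, None, :] - plants[None, :, :], axis=2)
unit_cost = ROADSIDE_COST + TRANSPORT_RATE * dist

# Least-cost allocation: each supply point goes to its cheapest plant
assignment = unit_cost.argmin(axis=1)
for p in range(len(plants)):
    mask = assignment == p
    volume = supply_m3[mask].sum()
    avg_cost = (unit_cost[mask, p] * supply_m3[mask]).sum() / max(volume, 1)
    print(f"plant {p}: {volume:.0f} m3 at {avg_cost:.2f} EUR/m3")
```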
Abstract:
Huntington's disease (HD) is a rare neurodegenerative disease caused by a pathologic CAG expansion in exon 1 of the huntingtin (HTT) gene. Aggregation and abnormal function of the mutant HTT (mHTT) cause motor, cognitive and psychiatric symptoms in patients, leading to death in 15-20 years. Currently, there is no treatment for HD. Experimental approaches based on drug, cell or gene therapy are being developed and are progressively reaching the clinic. Among them, mHTT silencing using small non-coding nucleic acids displays important pathophysiological benefit in HD experimental models.
Abstract:
Target identification for tractography studies requires solid anatomical knowledge, validated by an extensive literature review across species, for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so that they can be used for anatomical connection studies and more specifically for tractography. We applied text-mining models to three structures: two well-studied structures that are validated deep brain stimulation targets, the internal globus pallidus and the subthalamic nucleus, and the nucleus accumbens, an exploratory target for treating psychiatric disorders. We performed a systematic review of the literature to document the projections of the three selected structures and compared it with the targets proposed by the text-mining models, in both rat and primate (including human). We ran probabilistic tractography on the nucleus accumbens and compared the output with the results of the text-mining models and the literature review. Overall, text-mining the literature could find three times as many targets as two man-weeks of curation could. The overall efficiency of text-mining against literature review in our study was 98% recall (at 36% precision), meaning that over all the targets for the three selected seeds, only one target was missed by text-mining. We demonstrate that connectivity for a structure of interest can be extracted from a very large number of publications and abstracts. We believe this tool will be useful to the neuroscience community in facilitating connectivity studies of particular brain regions. The text-mining tools used for the study are part of the HBP Neuroinformatics Platform, publicly available at http://connectivity-brainer.rhcloud.com/.
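The reported efficiency figures follow the standard definitions of recall, TP/(TP+FN), and precision, TP/(TP+FP), applied to the sets of text-mined versus literature-curated targets. A minimal sketch, with placeholder structure names rather than study data:

```python
def precision_recall(mined_targets, curated_targets):
    """Compare text-mined targets against a literature-curated gold set.

    Both arguments are sets of target-structure names; the returned
    precision and recall follow the standard definitions.
    """
    tp = len(mined_targets & curated_targets)   # found and curated
    fp = len(mined_targets - curated_targets)   # found but not curated
    fn = len(curated_targets - mined_targets)   # curated but missed
    precision = tp / (tp + fp) if mined_targets else 0.0
    recall = tp / (tp + fn) if curated_targets else 0.0
    return precision, recall

# Hypothetical example (names are placeholders, not study data)
mined = {"thalamus", "substantia_nigra", "amygdala", "claustrum"}
curated = {"thalamus", "substantia_nigra", "amygdala"}
print(precision_recall(mined, curated))  # (0.75, 1.0)
```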
Abstract:
PURPOSE: All methods presented to date to map both conductivity and permittivity rely on multiple acquisitions to compute quantitatively the magnitude of the radiofrequency transmit field, B1+. In this work, we propose a method to compute both conductivity and permittivity based solely on relative receive coil sensitivities (B1-) that can be obtained in a single measurement, without the need either to explicitly perform transmit/receive phase separation or to make assumptions regarding those phases. THEORY AND METHODS: To demonstrate the validity and the noise sensitivity of our method we used finite-difference electromagnetic simulations of a 16-channel transceiver array. To validate our methodology experimentally at 7 Tesla, multi-compartment phantom data were acquired using a standard 32-channel receive coil system and two-dimensional (2D) and 3D gradient-echo acquisitions. The reconstructed electric properties were correlated with those measured using dielectric probes. RESULTS: The method was demonstrated both in simulations and in phantom data, with correlations to both the modeled and bench measurements being close to identity. The noise properties were modeled and understood. CONCLUSION: The proposed methodology allows the electrical properties of a sample to be determined quantitatively using any MR contrast, the only constraints being the need for 4 or more receive coils and high SNR. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
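Helmholtz-based electrical property mapping of this family generally rests on the relation below, which holds for a field component B in a region of locally constant properties. This is a generic textbook form given for orientation (signs depend on the chosen time convention), not necessarily the exact formulation of the paper:

```latex
% Homogeneous Helmholtz equation for a field component B in a region of
% (locally) constant electrical properties, time convention e^{+i\omega t}:
%   \nabla^2 B + \omega^2 \mu_0 \epsilon_c B = 0, \quad
%   \epsilon_c = \epsilon_0 \epsilon_r - i\,\sigma/\omega
\[
  \frac{\nabla^2 B}{B} = -\omega^2 \mu_0 \epsilon_0 \epsilon_r
                         + i\,\omega \mu_0 \sigma
\]
\[
  \epsilon_r = -\frac{1}{\omega^2 \mu_0 \epsilon_0}\,
               \mathrm{Re}\!\left(\frac{\nabla^2 B}{B}\right),
  \qquad
  \sigma = \frac{1}{\omega \mu_0}\,
           \mathrm{Im}\!\left(\frac{\nabla^2 B}{B}\right)
\]
```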
Abstract:
This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant for understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Individuals with solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to Chapter 4. The expertise of the signature, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature and given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and establishing their relevance in order to determine the separation capacities between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and are used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of the method when applied by signature and handwriting experts in forensic science. The outlines presented below give a rapid overview of the study and summarize the aims and main themes addressed in each chapter.

Part I - Theory. Chapter 1 exposes the legal aspects surrounding the authentication of works of art by art experts. The definition of what is legally authentic, the quality and types of experts that can express an opinion concerning the authorship of a specific painting, and standard deontological rules are addressed. The practices applied in Switzerland are specifically dealt with. Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examination of works of art has become more common as more and more museums equip themselves with laboratories, so an understanding of its role in the art authentication process is vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed. Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given of the transitions that the signature went through from the 15th century on and how it progressively took on its widely known modern form. Both this chapter and Chapter 2 are presented to show the reader the rich sources of information that can be drawn upon to describe a painting, and how the signature is one of these sources.
Chapter 4 focuses on the different hypotheses the forensic handwriting examiner (FHE) must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure for signatures in forensic science in general, and for painted signatures in particular, is exposed. The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously. Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. A better understanding of the elements surrounding handwriting examinations, in order to better communicate results and conclusions to an audience, is also undertaken. Chapter 6 reviews the judicial acceptance of signature analysis in courts and closes the state-of-the-art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims about the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis is concluded by a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues it exposed, and can also be applied to the traditional examination of signatures (on paper).

Part II - Experimental part. Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling of artists' signatures and their respective simulations is presented, followed by the steps that were undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study. Finally, the procedure for analysing these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented. Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis.

Part III - Discussion. Chapter 9 discusses the materials, the methods and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are exposed. Future work that can be carried out following the results of the study is also presented. Chapter 10, the last chapter of this thesis, proposes a strategy to incorporate the model developed in the preceding chapters into traditional signature expertise procedures.
Thus, the strength of this expertise is discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners. In conclusion, the research highlights the interdisciplinary aspect of the examination of signatures on paintings. The current state of knowledge of the judicial quality of art experts, along with the scientific and historical analysis of paintings and signatures, is overviewed to give the reader a feel for the different factors that have an impact on this particular subject. The inconsistent acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of better recognition of signature expertise by courts of law. This general acceptance, however, can only be achieved by producing high-quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a certain artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits that lie within this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, as the second key contribution of this work, a procedure is proposed to combine the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step towards providing the forensic, judicial and art communities with a solidly based reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
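The Bayesian evaluation in Chapters 7-8 rests on the likelihood ratio LR = p(observed features | signature is authentic) / p(observed features | signature is simulated). A deliberately minimal single-feature Gaussian sketch follows; the reference distributions and the measured value are invented, and the thesis's actual model is multivariate.

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mean, std):
    return exp(-0.5 * ((x - mean) / std) ** 2) / (std * sqrt(2 * pi))

# Hypothetical reference data for ONE measured feature of an artist's
# signature (e.g. a normalised stroke-length ratio); real casework uses
# many features and a multivariate model.
authentic = {"mean": 0.62, "std": 0.04}   # from the artist's corpus
simulated = {"mean": 0.48, "std": 0.09}   # from the forgers' corpus

questioned_value = 0.60  # feature measured on the questioned signature

lr = (gaussian_pdf(questioned_value, **authentic) /
      gaussian_pdf(questioned_value, **simulated))
print(f"likelihood ratio = {lr:.1f}")
# LR > 1 supports the 'authentic' proposition; LR < 1 the 'simulated' one.
```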
Abstract:
BACKGROUND: Recent neuroimaging studies suggest that value-based decision-making may rely on mechanisms of evidence accumulation. However, no studies have explicitly investigated the time at which single decisions are taken based on such an accumulation process. NEW METHOD: Here, we outline a novel electroencephalography (EEG) decoding technique which is based on accumulating the probability of appearance of prototypical voltage topographies and can be used for predicting subjects' decisions. We use this approach for studying the time-course of single decisions, during a task where subjects were asked to compare reward vs. loss points for accepting or rejecting offers. RESULTS: We show that, based on this new method, we can accurately decode decisions for the majority of subjects. The typical time-period for accurate decoding was modulated by task difficulty on a trial-by-trial basis. Typical latencies of when decisions are made were detected at ∼500 ms for 'easy' vs. ∼700 ms for 'hard' decisions, well before subjects' response (∼340 ms). Importantly, this decision time correlated with the drift rates of a diffusion model, evaluated independently at the behavioral level. COMPARISON WITH EXISTING METHOD(S): We compare the performance of our algorithm with logistic regression and support vector machines and show that we obtain significant results for a higher number of subjects than with these two approaches. We also carry out analyses at the average event-related potential level, for comparison with previous studies on decision-making. CONCLUSIONS: We present a novel approach for studying the timing of value-based decision-making, by accumulating patterns of topographic EEG activity at the single-trial level.
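The decoding principle — at each time point, score how closely the instantaneous voltage topography matches class-prototypical maps, accumulate that evidence across time, and read out the decision once the evidence gap crosses a threshold — can be sketched as below for a binary accept/reject task. The prototype maps, softmax scoring and threshold are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def decode_decision(trial, prototypes, threshold=5.0):
    """Single-trial evidence-accumulation decoder sketch.

    trial      : array (n_timepoints, n_electrodes) of EEG topographies
    prototypes : dict label -> array (n_electrodes,) prototypical map
                 (assumed binary: exactly two labels)
    Accumulates log-probability evidence for each label over time and
    returns (label, time index) once the evidence gap crosses `threshold`.
    """
    labels = list(prototypes)
    evidence = {lab: 0.0 for lab in labels}
    for t, topo in enumerate(trial):
        # Softmax over (negative) distances to the prototype maps gives a
        # per-timepoint probability of each label's topography appearing.
        d = np.array([np.linalg.norm(topo - prototypes[lab]) for lab in labels])
        p = np.exp(-d) / np.exp(-d).sum()
        for lab, pl in zip(labels, p):
            evidence[lab] += np.log(pl + 1e-12)
        gap = abs(evidence[labels[0]] - evidence[labels[1]])
        if gap > threshold:
            winner = max(evidence, key=evidence.get)
            return winner, t  # decision label and decoding latency
    return max(evidence, key=evidence.get), len(trial) - 1
```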
Abstract:
The design of therapeutic cancer vaccines aims at inducing high numbers of potent T cells that are able to target and eradicate malignant cells. This calls for close collaboration between cells of the innate immune system, in particular dendritic cells (DCs), and cells of the adaptive immune system, notably CD4+ helper T cells and CD8+ cytotoxic T cells. Therapeutic vaccines are aided by adjuvants, which can be, for example, Toll-like Receptor agonists or agents promoting the cytosolic delivery of antigens, among others. Vaccination with long synthetic peptides (LSPs) is a promising strategy, as the requirement for their intracellular processing will mainly target LSPs to professional antigen-presenting cells (APCs), hence avoiding the immune tolerance elicited by the presentation of antigens by non-professional APCs. The unique antigen cross-processing and cross-presentation activity of DCs plays an important role in eliciting antitumour immunity, given that antigens from engulfed dead tumour cells require this distinct biological process to be processed and presented to CD8+ T cells in the context of MHC class I molecules. DCs expressing the XCR1 chemokine receptor are characterised by their superior capability of antigen cross-presentation and priming of highly cytotoxic T lymphocyte (CTL) responses. Recently, XCR1 was found to be expressed also in tissue-resident DCs in humans, with a similar transcriptional profile to that of cross-presenting murine DCs. This sheds light on the value of harnessing this subtype of XCR1+ cross-presenting DCs for the therapeutic vaccination of cancer. In this study, we explored ways of adjuvanting and optimising LSP therapeutic vaccinations by the use, in Part I, of the XCL1 chemokine that selectively binds to the XCR1 receptor, as a means to target antigen to the cross-presenting XCR1+ DCs; and in Part II, by the inclusion in the LSP vaccine formulation of QS21, a saponin with adjuvant activity as well as the ability to promote cytosolic delivery of LSP antigens due to its intrinsic cell-membrane-insertion activity. In Part I, we designed and produced XCL1-(OVA LSP)-Fc fusion proteins and showed that their binding to XCR1+ DCs mediates their chemoattraction. In addition, therapeutic vaccinations adjuvanted with XCL1-(OVA LSP)-Fc fusion proteins significantly enhanced the OVA-specific CD8+ T cell response, led to complete tumour regression in the EL4-OVA model, and significantly controlled tumour growth in the B16-OVA tumour model. With the aim of optimising the co-delivery of LSP antigen and XCL1 to skin-draining lymph nodes, we also tested immunisations using nanoparticle (NP)-conjugated OVA LSP in the presence or absence of the XCL1 chemokine. The NP-mediated delivery of LSP potentiated the CTL response seen in the blood of vaccinated mice, and the NP-OVA LSP vaccine in the presence of XCL1 led to higher blood frequencies of OVA-specific memory-precursor effector cells. Nevertheless, in these settings, the addition of XCL1 to the NP-OVA LSP vaccine formulation did not increase its antitumour therapeutic effect. In Part II, we assessed in HLA-A2/DR1 mice the immunogenicity of the Melan-A A27L LSP or the Melan-A26-35 A27L short synthetic peptide (SSP) used in conjunction with the saponin adjuvant QS21, aiming to identify a potent adjuvant formulation that elicits a quantitatively and qualitatively strong immune response to tumour antigens.
We showed a high CTL immune response elicited by the use of Melan-A LSP or SSP with QS21, both of which exerted similar killing capacity upon in vivo transfer of target cells expressing the Melan-A peptide in the context of HLA-A2 molecules. However, the response generated by the LSP immunisation comprised higher percentages of CD8+ T cells of the central memory phenotype (CD44hi CD62L+ and CCR7+ CD62L+) than that of the SSP immunisation, and most importantly, the strong LSP+QS21 response was strictly CD4+ T cell-dependent, as shown upon CD4 T cell depletion. Altogether, these results suggest that both XCL1 and QS21 may enhance the ability of LSP to prime CD8-specific T cell responses and promote a long-term memory response. These observations may therefore have important implications for the design of protein- or LSP-based cancer vaccines for specific immunotherapy of cancer.
Abstract:
Several approaches have been developed to estimate both the relative and absolute rates of speciation and extinction within clades, based on molecular phylogenetic reconstructions of evolutionary relationships, according to an underlying model of diversification. However, the macroevolutionary models established for eukaryotes have scarcely been used with prokaryotes. We have investigated the rate and pattern of cladogenesis in the genus Aeromonas (γ-Proteobacteria, Proteobacteria, Bacteria) using the sequences of five housekeeping genes and an uncorrelated relaxed-clock approach. To our knowledge, this analysis has never before been applied to all the species described in a bacterial genus, and it thus opens up the possibility of establishing models of speciation from sequence data commonly used in phylogenetic studies of prokaryotes. Our results suggest that the genus Aeromonas began to diverge between 248 and 266 million years ago, exhibiting a constant divergence rate through the Phanerozoic, which could be described as a pure birth process.
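Under the pure-birth (Yule) process invoked here, the expected number of lineages grows exponentially, E[N(t)] = N(0) e^{λt}, so a constant divergence rate appears as a straight line on a log-linear lineage-through-time plot. A small simulation sketch follows; the speciation rate is an invented illustrative value, not an estimate from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_yule(birth_rate, t_max, n0=1):
    """Simulate lineage counts under a pure-birth (Yule) process.

    The waiting time to the next speciation event is exponential with
    rate birth_rate * n, where n is the current number of lineages.
    Returns event times and lineage counts.
    """
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while True:
        t += rng.exponential(1.0 / (birth_rate * n))
        if t > t_max:
            break
        n += 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

# Illustrative rate: chosen so roughly 30 extant lineages arise over
# 260 Myr, i.e. about ln(30)/260 per lineage per Myr (invented value).
times, counts = simulate_yule(birth_rate=np.log(30) / 260, t_max=260)
print(f"lineages after 260 Myr: {counts[-1]}")
```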
Abstract:
Freshwater ecosystems and their biodiversity are presently seriously threatened by global development and population growth, leading to increases in nutrient inputs and the intensification of eutrophication-induced problems in receiving fresh waters, particularly in lakes. Climate change constitutes another threat, exacerbating the symptoms of eutrophication and species migration and loss. Unequivocal evidence of climate change impacts is still highly fragmented despite intensive research, in part due to the variety and uncertainty of climate models and the underlying emission scenarios, but also due to the different approaches applied to study its effects. We first describe the strengths and weaknesses of the multi-faceted approaches that are presently available for elucidating the effects of climate change in lakes, including space-for-time substitution, time series, experiments, palaeoecology and modelling. Reviewing the combined results from studies based on the various approaches, we describe the likely effects of climate change on biological communities, trophic dynamics and the ecological state of lakes. We further discuss potential mitigation and adaptation measures to counteract the effects of climate change on lakes and, finally, we highlight some of the future challenges that we face in improving our capacity for successful prediction.
Abstract:
Linear prediction coding of speech is based on the assumption that the generation model is autoregressive. In this paper we propose a structure to cope with the nonlinear effects present in the generation of the speech signal. This structure consists of two stages: the first is a classical linear prediction filter, and the second models the residual signal by means of a linear filter placed between two nonlinearities. The coefficients of this filter are computed by a gradient search on the score function, in order to deal with the fact that the probability distribution of the residual signal is still not Gaussian. This fact is taken into account when the coefficients are computed by an ML estimate. The algorithm, based on the minimization of a higher-order-statistics criterion, uses on-line estimation of the residual statistics and is based on blind deconvolution of Wiener systems [1]. Improvements in the experimental results with speech signals emphasize the interest of this approach.
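The first stage of the proposed structure, a classical linear prediction filter, can be sketched via the autocorrelation (Yule-Walker) method. The synthetic frame and model order below are illustrative, and the paper's nonlinear second stage is not reproduced here.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Linear prediction coefficients via the autocorrelation method.

    Solves the Yule-Walker equations R a = r for an AR(order) model of
    the input frame; returns the coefficients and the prediction residual
    (the signal a nonlinear second stage would then model).
    """
    n = len(frame)
    r = np.array([frame[:n - k] @ frame[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])                 # predictor coefficients
    pred = np.convolve(frame, np.concatenate(([0.0], a)))[:n]
    return a, frame - pred                        # residual signal

# Illustrative synthetic 'speech' frame: decaying resonance plus noise
rng = np.random.default_rng(1)
t = np.arange(400)
frame = (np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 300)
         + 0.05 * rng.standard_normal(400))
a, residual = lpc_coefficients(frame, order=10)
print("predictor coefficients:", np.round(a, 3))
```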
Abstract:
PURPOSE OF REVIEW: Current computational neuroanatomy based on MRI focuses on morphological measures of the brain. We present recent methodological developments in quantitative MRI (qMRI) that provide standardized measures of the brain, which go beyond morphology. We show how biophysical modelling of qMRI data can provide quantitative histological measures of brain tissue, leading to the emerging field of in-vivo histology using MRI (hMRI). RECENT FINDINGS: qMRI has greatly improved the sensitivity and specificity of computational neuroanatomy studies. qMRI metrics can also be used as direct indicators of the mechanisms driving observed morphological findings. For hMRI, biophysical models of the MRI signal are being developed to directly access histological information such as cortical myelination, axonal diameters or axonal g-ratio in white matter. Emerging results indicate promising prospects for the combined study of brain microstructure and function. SUMMARY: Non-invasive brain tissue characterization using qMRI or hMRI has significant implications for both research and clinics. Both approaches improve comparability across sites and time points, facilitating multicentre/longitudinal studies and standardized diagnostics. hMRI is expected to shed new light on the relationship between brain microstructure, function and behaviour, both in health and disease, and become an indispensable addition to computational neuroanatomy.
Abstract:
This final thesis project was carried out in the Industrial Management department of the University of Applied Sciences Stadia for Forum Virium Helsinki. The purpose of this study was to answer the question of how companies can use online customer communities of co-creation in service development, and what value is gained from them. The paper combines a range of recently published theoretical works and ongoing customer community case development. The study aims to provide new information and action approaches to new-service developers that may increase the success of the community building process. The paper also outlines the benefits of using an online customer community and offers practical suggestions for maximizing the value gained from the community in service development projects. The concepts and suggestions introduced in the study appear to offer notable new possibilities for the service development process, but they have to be further tested empirically. This paper connects the online consumer community of co-creation to an important organizational process, innovation management, suggesting that it holds great value for business. Online customer communities offer the potential to improve the success of new services or products, enabling early market penetration and creating sustainable competitive advantage.
Abstract:
PURPOSE: According to estimations, around 230 people die as a result of radon exposure in Switzerland. This public health concern makes reliable indoor radon prediction and mapping methods necessary in order to improve risk communication to the public. The aim of this study was to develop an automated method to classify lithological units according to their radon characteristics, and to develop mapping and predictive tools in order to improve local radon prediction. METHOD: About 240 000 indoor radon concentration (IRC) measurements in about 150 000 buildings were available for our analysis. The automated classification of lithological units was based on k-medoids clustering via pairwise Kolmogorov distances between the IRC distributions of lithological units. For IRC mapping and prediction we used random forests and Bayesian additive regression trees (BART). RESULTS: The automated classification groups lithological units well in terms of their IRC characteristics. In particular, the IRC differences in metamorphic rocks like gneiss are well revealed by this method. The maps produced by random forests soundly represent the regional differences in IRC across Switzerland and improve the spatial detail compared to existing approaches. We could explain 33% of the variation in IRC data with random forests. Additionally, variable-importance evaluation by random forests shows that building characteristics are less important predictors of IRC than spatial/geological influences. BART could explain 29% of the IRC variability and produced maps that indicate the prediction uncertainty. CONCLUSION: Ensemble regression trees are a powerful tool for modelling and understanding the multidimensional influences on IRC. Automatic clustering of lithological units complements this method by facilitating the interpretation of the radon properties of rock types. This study provides an important element for radon risk communication. Future approaches should consider taking into account further variables, such as soil-gas radon measurements, as well as more detailed geological information.
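Two ingredients of the automated lithological classification are easy to sketch: pairwise two-sample Kolmogorov-Smirnov distances between the IRC distributions of lithological units, and k-medoids clustering on the resulting distance matrix. The sketch below uses invented lognormal IRC samples and a plain alternating k-medoids loop; the study's actual implementation details (initialisation, choice of k) are not reproduced.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical IRC samples (Bq/m3) for four lithological units
units = {
    "gneiss_A": rng.lognormal(5.2, 0.8, 300),
    "gneiss_B": rng.lognormal(5.0, 0.9, 300),
    "limestone": rng.lognormal(4.1, 0.6, 300),
    "molasse": rng.lognormal(4.0, 0.7, 300),
}
names = list(units)
n = len(names)

# Pairwise Kolmogorov-Smirnov distances between IRC distributions
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ks_2samp(units[names[i]], units[names[j]]).statistic

def k_medoids(D, k, n_iter=50):
    """Plain alternating k-medoids on a precomputed distance matrix."""
    medoids = list(range(k))            # naive initialisation: first k points
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)
        new_medoids = []
        for c in range(k):
            members = np.where(labels == c)[0]
            # New medoid = member minimising total distance within cluster
            costs = D[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(int(members[costs.argmin()]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, D[:, medoids].argmin(axis=1)

medoids, labels = k_medoids(D, k=2)
for name, lab in zip(names, labels):
    print(name, "-> cluster", lab)
```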