916 results for Topology-based methods
Abstract:
PURPOSE: We conducted a comprehensive review of the design, implementation, and outcome of first-in-human (FIH) trials of monoclonal antibodies (mAbs) to clearly determine early clinical development strategies for this class of compounds. METHODS: We performed a PubMed search using appropriate terms to identify reports of FIH trials of mAbs published in peer-reviewed journals between January 2000 and April 2013. RESULTS: A total of 82 publications describing FIH trials were selected for analysis. Only 27 articles (33%) reported the criteria used for selecting the starting dose (SD). Dose escalation was performed using rule-based methods in 66 trials (80%). The median number of planned dose levels was five (range, two to 13). The median of the ratio between the highest planned dose and the SD was 27 (range, two to 3,333). Although in 56 studies (68%) at least one grade 3 or 4 toxicity event was reported, no dose-limiting toxicity was observed in 47 trials (57%). The highest planned dose was reached in all trials, but the maximum-tolerated dose (MTD) was defined in only 13 studies (16%). The median of the ratio between MTD and SD was eight (range, four to 1,000). The recommended phase II dose was indicated in 34 studies (41%), but in 25 (73%) of these trials, this dose was chosen without considering toxicity as the main selection criterion. CONCLUSION: This literature review highlights the broad design heterogeneity of FIH trials testing mAbs. Because of the limited observed toxicity, the MTD was infrequently reached, and therefore, the recommended phase II dose for subsequent clinical trials was only tentatively defined.
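The rule-based dose-escalation designs referred to above are typified by the classic 3+3 rule. As an illustration only (the reviewed trials used a variety of rule-based schemes, not necessarily this one), a minimal sketch of the 3+3 decision logic:

```python
def three_plus_three(dlt_count: int, patients_at_dose: int) -> str:
    """Classic 3+3 decision rule for one dose level.

    Returns the action after observing `dlt_count` dose-limiting
    toxicities (DLTs) among `patients_at_dose` treated patients.
    """
    if patients_at_dose == 3:
        if dlt_count == 0:
            return "escalate"   # 0/3 DLTs: move to the next dose level
        if dlt_count == 1:
            return "expand"     # 1/3 DLTs: treat 3 more at the same dose
        return "stop"           # >=2/3 DLTs: MTD has been exceeded
    if patients_at_dose == 6:
        if dlt_count <= 1:
            return "escalate"   # <=1/6 DLTs: dose is considered tolerable
        return "stop"           # >=2/6 DLTs: MTD is the previous dose
    raise ValueError("3+3 cohorts contain 3 or 6 patients")

# Example: one DLT in the first cohort of three -> expand the cohort.
print(three_plus_three(1, 3))  # expand
```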
Abstract:
Slab and cluster model spin-polarized calculations have been carried out to study various properties of isolated first-row transition metal atoms adsorbed on the anionic sites of the regular MgO(100) surface. The calculated adsorption energies follow the trend of the metal cohesive energies, indicating that the changes in the metal-support and metal-metal interactions along the series are dominated by atomic properties. In all cases, except for Ni at the generalized gradient approximation level, the number of unpaired electrons is maintained as in the isolated metal atom. The energy required to change the atomic state from high to low spin has been computed using the PW91 and B3LYP density-functional-theory-based methods. PW91 fails to predict the proper ground state of V and Ni, but the results for the isolated and adsorbed atoms are consistent within the method. B3LYP properly predicts the ground state of all first-row transition atoms, and the energy of the high- to low-spin transition considered is comparable to experiment. In all cases, the interaction with the surface results in a reduced high- to low-spin transition energy.
Abstract:
BACKGROUND: Available methods to simulate nucleotide or amino acid data typically use Markov models to simulate each position independently. These approaches are not appropriate to assess the performance of combinatorial and probabilistic methods that look for coevolving positions in nucleotide or amino acid sequences. RESULTS: We have developed a web-based platform that gives user-friendly access to two phylogenetic-based methods implementing the Coev model: the evaluation of coevolving scores and the simulation of coevolving positions. We have also extended the capabilities of the Coev model to allow for the generalization of the alphabet used in the Markov model, which can now analyse both nucleotide and amino acid data sets. The simulation of coevolving positions is novel and builds upon the developments of the Coev model. It allows users to simulate pairs of dependent nucleotide or amino acid positions. CONCLUSIONS: The main focus of our paper is the new simulation method we present for coevolving positions. The implementation of this method is embedded within the web platform Coev-web, which is freely accessible at http://coev.vital-it.ch/ and was tested in most modern web browsers.
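For readers unfamiliar with dependent-position simulation, the toy sketch below illustrates the general idea with a continuous-time Markov chain over the 16 dinucleotide pair states, in which single-position substitutions toward an assumed profile of favoured pairs are accelerated. It is loosely inspired by the Coev model, not the Coev-web implementation itself; the profile and rates are made up:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
nt = "ACGT"
pairs = ["".join(p) for p in product(nt, nt)]   # 16 dinucleotide states
profile = {"AT", "TA", "GC", "CG"}               # assumed favoured pairs
s, d = 2.0, 0.1                                  # fast vs slow substitution rates

# Rate matrix: only single-position changes are allowed; changes landing
# in the favoured profile occur at rate s, all others at rate d < s.
Q = np.zeros((16, 16))
for i, a in enumerate(pairs):
    for j, b in enumerate(pairs):
        if i == j or sum(x != y for x, y in zip(a, b)) != 1:
            continue
        Q[i, j] = s if b in profile else d
np.fill_diagonal(Q, -Q.sum(axis=1))

def simulate(state: int, t: float) -> int:
    """Gillespie simulation of the pair CTMC along a branch of length t."""
    while True:
        rate = -Q[state, state]
        t -= rng.exponential(1.0 / rate)
        if t <= 0:
            return state
        state = rng.choice(16, p=Q[state].clip(min=0) / rate)

print(pairs[simulate(pairs.index("AA"), t=5.0)])  # typically a profile pair
```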
Abstract:
Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of paper, such as basis weight, caliper and porosity. Most of these quantities vary considerably, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine direction (MD) and cross direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman-filtering-based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The aim of the current thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields is compared with that of the corresponding truncated Fourier series fields.
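The comparison at the heart of the thesis can be prototyped in a few lines: compress a synthetic 2D field by keeping the largest-magnitude Fourier coefficients versus the largest wavelet coefficients, and compare reconstruction errors. The field, the wavelet choice (db4) and the retention fraction below are arbitrary placeholders, not the thesis's data or settings:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Synthetic "web field": smooth background plus a localized streak, the
# kind of local detail a global Fourier basis resolves poorly.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
field = np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * y)
field += np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.001)  # local defect

def keep_largest(a: np.ndarray, frac: float) -> np.ndarray:
    """Zero out all but the largest-magnitude fraction of coefficients."""
    thresh = np.quantile(np.abs(a), 1 - frac)
    return np.where(np.abs(a) >= thresh, a, 0)

frac = 0.05                                   # keep 5% of coefficients

# Truncated Fourier representation.
F = np.fft.fft2(field)
fourier_rec = np.fft.ifft2(keep_largest(F, frac)).real

# Wavelet-compressed representation.
coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
wave_rec = pywt.waverec2(
    pywt.array_to_coeffs(keep_largest(arr, frac), slices,
                         output_format="wavedec2"),
    "db4",
)
wave_rec = wave_rec[: field.shape[0], : field.shape[1]]  # guard against padding

for name, rec in [("Fourier", fourier_rec), ("wavelet", wave_rec)]:
    print(name, "RMSE:", np.sqrt(np.mean((rec - field) ** 2)))
```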
Abstract:
X-ray medical imaging is increasingly becoming three-dimensional (3-D). The dose to the population and its management are of special concern in computed tomography (CT). Task-based methods with model observers to assess the dose-image quality trade-off are promising tools, but they still need to be validated for real volumetric images. The purpose of the present work is to evaluate anthropomorphic model observers in 3-D detection tasks for low-contrast CT images. We scanned a low-contrast phantom containing four types of signals at three dose levels and used two reconstruction algorithms. We implemented a multislice model observer based on the channelized Hotelling observer (msCHO) with anthropomorphic channels and investigated different internal noise methods. We found a good correlation for all tested model observers. These results suggest that the msCHO can be used as a relevant task-based method to evaluate low-contrast detection for CT and optimize scan protocols to lower dose in an efficient way.
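A hedged sketch of the core computation follows: a basic single-slice channelized Hotelling observer with a random stand-in channel matrix and synthetic images. The study's msCHO extends this across slices and uses anthropomorphic channels and internal noise; nothing below is the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nchan, ntrain = 64 * 64, 10, 200
U = rng.standard_normal((npix, nchan))   # stand-in for Gabor/DoG channels

signal = np.zeros(npix)
signal[2016:2080] = 1.0                  # toy signal: a short bright strip
absent = rng.standard_normal((ntrain, npix))   # noise-only images
present = absent + signal                      # signal-present images

va, vp = absent @ U, present @ U         # channelized feature vectors
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))  # pooled channel covariance
dmean = vp.mean(0) - va.mean(0)
w = np.linalg.solve(S, dmean)            # Hotelling template

# Detectability index from the decision-variable statistics.
ta, tp = va @ w, vp @ w
dprime = (tp.mean() - ta.mean()) / np.sqrt(0.5 * (tp.var() + ta.var()))
print("d' =", dprime)
```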
Abstract:
The detailed in-vivo characterization of subcortical brain structures is essential not only to understand the basic organizational principles of the healthy brain but also for the study of the involvement of the basal ganglia in brain disorders. The particular tissue properties of the basal ganglia, most importantly their high iron content, strongly affect the contrast of magnetic resonance imaging (MRI) images, hampering the accurate automated assessment of these regions. This technical challenge explains the substantial controversy in the literature about the magnitude, directionality and neurobiological interpretation of basal ganglia structural changes estimated from MRI and computational anatomy techniques. My scientific project addresses the pertinent need for accurate automated delineation of the basal ganglia using two complementary strategies: (i) empirical testing of the utility of novel imaging protocols to provide superior contrast in the basal ganglia and to quantify brain tissue properties; and (ii) improvement of the algorithms for the reliable automated detection of the basal ganglia and thalamus. Previous research demonstrated that MRI protocols based on magnetization transfer (MT) saturation maps provide optimal grey-white matter contrast in subcortical structures compared with the widely used T1-weighted (T1w) images (Helms et al., 2009). Under the assumption of a direct impact of brain tissue properties on MR contrast, my first study addressed the mechanisms underlying the regionally specific contrast behaviour of the basal ganglia. I used established whole-brain voxel-based methods to test for grey matter volume differences between MT and T1w imaging protocols, with an emphasis on subcortical structures. I applied a regression model to explain the observed grey matter differences by the regionally specific impact of brain tissue properties on the MR contrast. The results of my first project prompted further methodological developments to create adequate priors for the basal ganglia and thalamus, allowing optimal automated delineation of these structures in a probabilistic tissue classification framework. I established a standardized workflow for manual labelling of the basal ganglia, thalamus and cerebellar dentate to create new tissue probability maps from quantitative MR maps featuring optimal grey-white matter contrast in subcortical areas. The validation step of the new tissue priors included a comparison of the classification performance with the existing probability maps. In my third project I continued investigating the factors impacting automated brain tissue classification that result in interpretational shortcomings when using T1w MRI data in the framework of computational anatomy. While the intensity in T1w images is predominantly
Abstract:
During the past few years, a considerable number of research articles have been published relating to the structure and function of the major photosynthetic protein complexes: photosystem (PS) I, PSII, cytochrome (Cyt) b6f, and adenosine triphosphate (ATP) synthase. Sequencing of the Arabidopsis thaliana (Arabidopsis) genome, together with several high-quality proteomics studies, has, however, revealed that the thylakoid membrane network of plant chloroplasts still contains a number of functionally unknown proteins. These proteins may have a role as auxiliary proteins guiding the assembly, maintenance, and turnover of the thylakoid protein complexes, or they may be as yet unknown subunits of the photosynthetic complexes. Novel subunits are most likely to be found in the NAD(P)H dehydrogenase (NDH) complex, whose structure and function have remained obscure in the absence of detailed crystallographic data, making this thylakoid protein complex a particularly interesting target of investigation. In this thesis, several novel thylakoid-associated proteins were identified by proteomics-based methods. The major goal in characterizing the stroma-thylakoid-associated polysome-nascent chain complexes was to determine the proteins that guide the dynamic life cycle of PSII. In addition, a large protein complex of ≥ 1,000 kDa residing in the stroma thylakoid was characterized in greater depth and found to be a supercomplex composed of the PSI and NDH complexes. A set of newly identified proteins from Arabidopsis thylakoids was subjected to detailed characterization using a reverse genetics approach and extensive biochemical and biophysical analysis. The role of the novel proteins, either as auxiliary proteins or as subunits of the photosynthetic protein complexes, was revealed. Two novel thylakoid lumen proteins, TLP18.3 and AtCYP38, function as auxiliary proteins assisting specific steps of the assembly/repair of PSII. The role of the 10-kDa thylakoid lumen protein PsbR is related to the optimization of oxygen evolution of PSII by assisting the assembly of the PsbP protein. Two integral thylakoid membrane proteins, NDH45 and NDH48, are novel subunits of the chloroplast NDH complex. Finally, the thylakoid lumen immunophilin AtCYP20-2 is suggested to interact with the NDH complex, instead of PSII as was hypothesized earlier.
Abstract:
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters by either maximizing the likelihood or computing the posterior mean for a given prior of the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, and so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite-sample properties.
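As a toy illustration of the simulated indirect likelihood idea (not the paper's estimator for the auction or DSGE models), the sketch below computes an MIL estimate for a normal location model with the sample median as Z_n, approximating the unknown density of Z_n by a Gaussian fitted to simulated replications:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n, R = 200, 500                       # sample size, simulated replications

y_obs = rng.normal(0.3, 1.0, n)       # "observed" data, true theta = 0.3
z_obs = np.median(y_obs)              # auxiliary statistic Z_n

def neg_log_indirect_likelihood(theta: float) -> float:
    # Common random numbers keep the simulated objective smooth in theta.
    sim_rng = np.random.default_rng(42)
    base = sim_rng.standard_normal((R, n))
    z_sim = np.median(theta + base, axis=1)   # R simulated copies of Z_n
    mu, var = z_sim.mean(), z_sim.var()
    # Gaussian approximation to the density of Z_n, evaluated at z_obs.
    return 0.5 * np.log(var) + 0.5 * (z_obs - mu) ** 2 / var

res = minimize_scalar(neg_log_indirect_likelihood,
                      bounds=(-2, 2), method="bounded")
print("MIL estimate:", res.x)         # close to the true value 0.3
```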
Abstract:
Molecularly imprinted polymers (MIPs) consist of a synthetic macromolecular matrix, obtained through molecular-imprinting-based methods, that can selectively recognize important biological molecules; their application in the drug delivery field is under development. The present review covers the main aspects of the synthesis and characterization of MIPs. The fundamental variables in the synthesis process, such as the template molecule, functional monomers, cross-linking agents, solvents and imprinting approaches, are discussed. Moreover, the main available methods for the chemical and morphological characterization of MIPs are presented, and the importance of the information they provide is discussed.
Abstract:
The removal of casting flash and burrs is part of the production process for aluminium high-pressure die castings. This work investigates the potential of both currently used and new methods for more economical production. In addition to methods used for removing flash and burrs, research topics and ideas were sought from other metalworking processes. Definitions, formation mechanisms and classifications of casting flash and burrs are presented extensively. The study of methods concentrates on the removal of casting flash, and post-casting trimming is examined in particular through the punch tools used for machining internal features. The removal of flash left by the mould ejector pins has been an important topic in the studies of the methods. Among material-removing processes for eliminating casting flash and cutting and machining burrs, the methods studied include machining on a machining centre, reaming, abrasive blasting, jet lapping, water-jet cutting, ultrasonic machining, brushing, abrasive-flow machining, and grinding with both targeted and non-targeted methods. Thermal deburring (TEM), electrochemical machining (ECM) and laser cutting are also brought up in the study. The outcome of the work is a view of the further development needs of the studied methods and of their potential for application in series production.
Abstract:
The background and inspiration for the present study is earlier research on applications of boundary identification in the metal industry. Effective boundary identification enables smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for melting furnaces is the temperature at various positions in the wall. This can be used as the input to a boundary identification method for monitoring the wall thickness of the furnace. We give a background and motivation for the choice of the geometrically one-dimensional dynamic model for boundary identification, discussed in the latter part of the work, over a multidimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study properties of the solutions to this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two are developed from a strictly mathematical and the third from a more applied starting point. The methods have different properties with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, albeit at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterized by poorer accuracy, caused by a higher degree of ill-posedness of the more flexible model. For this method an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be considered a good starting point and mathematical basis for developing industrial applications of boundary identification, especially towards handling nonlinear and discontinuous material properties and sudden changes caused by wall material "falling off". With the methods treated, it appears possible to achieve a robust, fast and sufficiently accurate boundary identification method of limited complexity.
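The ill-posedness and regularization discussed above can be demonstrated numerically. The sketch below is a textbook frequency-domain illustration of the sideways heat equation, not one of the three methods developed in the study; all parameters are made up:

```python
import numpy as np

n, T, L = 1024, 10.0, 1.0                 # samples, time span, wall depth
t = np.linspace(0, T, n, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(n, T / n)
k = np.sqrt(1j * omega)                   # principal root, Re k >= 0

# True boundary temperature: two smooth pulses.
f = np.exp(-((t - 4) ** 2)) + 0.5 * np.exp(-((t - 7) ** 2) / 0.5)

# Forward problem: interior measurement is damped, g_hat = f_hat * exp(-L k).
g = np.fft.ifft(np.fft.fft(f) * np.exp(-L * k)).real
g_noisy = g + 1e-4 * np.random.default_rng(4).standard_normal(n)

# Naive inversion amplifies noise by exp(L Re k): hopeless at high frequency.
naive = np.fft.ifft(np.fft.fft(g_noisy) * np.exp(L * k)).real

# Regularized inversion: invert only below a cutoff frequency (tuned by hand).
filt = np.where(np.abs(omega) < 8.0, np.exp(L * k), 0)
reg = np.fft.ifft(np.fft.fft(g_noisy) * filt).real

print("naive RMSE:", np.sqrt(np.mean((naive - f) ** 2)))        # huge
print("regularized RMSE:", np.sqrt(np.mean((reg - f) ** 2)))    # small
```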
Abstract:
Investment decision-making on far-reaching innovation ideas is one of the key challenges practitioners and academics face in the field of innovation management. However, the management practices and theories strongly rely on evaluation systems that do not fit in well with this setting. These systems and practices normally cannot capture the value of future opportunities under high uncertainty because they ignore the firm’s potential for growth and flexibility. Real options theory and options-based methods have been offered as a solution to facilitate decision-making on highly uncertain investment objects. Much of the uncertainty inherent in these investment objects is attributable to unknown future events. In this setting, real options theory and methods have faced some challenges. First, the theory and its applications have largely been limited to market-priced real assets. Second, the options perspective has not proved as useful as anticipated because the tools it offers are perceived to be too complicated for managerial use. Third, there are challenges related to the type of uncertainty existing real options methods can handle: they are primarily limited to parametric uncertainty. Nevertheless, the theory is considered promising in the context of far-reaching and strategically important innovation ideas. The objective of this dissertation is to clarify the potential of options-based methodology in the identification of innovation opportunities. The constructive research approach gives new insights into the development potential of real options theory under non-parametric and close-to-radical uncertainty. The distinction between real options and strategic options is presented as an explanans for the discovered limitations of the theory. The findings offer managers a new means of assessing future innovation ideas based on the frameworks constructed during the course of the study.
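To make "options-based methods" concrete, a one-period binomial valuation of an option to defer an investment is sketched below, with purely hypothetical numbers. This is exactly the kind of parametric, market-style calculation whose assumptions the dissertation argues break down under close-to-radical uncertainty:

```python
# One-period binomial sketch of a real option to defer an investment.
# All figures are hypothetical; p is assumed to be a risk-neutral probability.
V_up, V_down = 180.0, 60.0      # project value next year in two scenarios
p, rf, I = 0.5, 0.05, 100.0     # risk-neutral prob., risk-free rate, cost

npv_now = p * V_up + (1 - p) * V_down - I                    # invest today
option_defer = (p * max(V_up - I, 0)
                + (1 - p) * max(V_down - I, 0)) / (1 + rf)   # wait a year

print("NPV of investing now:", npv_now)          # 20.0
print("value of deferring:", option_defer)       # ~38.1 -> waiting is worth more
```

The gap between the two numbers is the value of flexibility that a plain NPV rule ignores; the method, however, requires the scenario values and probability to be parameterized up front, which is precisely what far-reaching innovation ideas resist.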
Abstract:
In this master's thesis I study the use of visual art and visual-art-based methods in organizations as a manifestation of the needs underlying psychological ownership. My aim is to close the gap between research and practice with regard to the use of visual art in organizations. The goal is to find out what added value the use of visual art brings to organizations and how it manifests psychological ownership. The research is qualitative, and the data consist of unstructured interviews analysed by means of discourse analysis. In the interview data I found discourses at several levels. The main discourse, from invisible to visible, manifests the need for stimulus among the needs motivating psychological ownership; the discourse of space manifests the need for a home, and the discourse of identity manifests the need for identity. The discourses of space and identity partly overlap. Among the motivational needs of psychological ownership, works of visual art manifest stimulus in particular. They act as a stimulus by bringing psychological closeness into organizations. The use of visual art in organizations makes visible the otherwise invisible needs that motivate psychological ownership. Works of visual art bring psychological closeness and stimulate meaningful matters related to these needs. Visual art is an aesthetic practical tool for developing organizational behaviour, for emotional management in mergers, and for engaging personnel.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications in which most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
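A bare-bones version of the ridge-projection idea can be sketched as follows: Newton steps restricted to the eigenvector of the smallest Hessian eigenvalue of a Gaussian kernel density estimate. The thesis's method adds a trust-region safeguard and convergence guarantees that this toy iteration lacks; the data, bandwidth and starting point below are invented, and the start is assumed to lie in the ridge's basin of attraction:

```python
import numpy as np

rng = np.random.default_rng(5)
h = 0.3                                        # kernel bandwidth

# Noisy samples around a curved (parabolic) filament.
s = rng.uniform(-1, 1, 400)
data = np.c_[s, s ** 2] + 0.05 * rng.standard_normal((400, 2))

def grad_hess(x: np.ndarray):
    """Gradient and Hessian of the (unnormalized) Gaussian KDE at x."""
    d = data - x                               # (N, 2) displacements
    w = np.exp(-0.5 * np.sum(d ** 2, axis=1) / h ** 2)
    g = (w @ d) / h ** 2
    H = (d.T * w) @ d / h ** 4 - w.sum() * np.eye(2) / h ** 2
    return g, H

x = np.array([0.2, 0.25])                      # start near, but off, the ridge
for _ in range(30):
    g, H = grad_hess(x)
    lam, V = np.linalg.eigh(H)                 # ascending eigenvalues
    v = V[:, 0]                                # smallest-eigenvalue direction
    x = x - v * (v @ g) / lam[0]               # Newton step along v (lam[0] < 0)
print("projected point:", x)                   # lies near the filament y = x^2
```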
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian. In this kind of proposal, the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
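As a concrete reference point for the filtering discussion, a minimal bootstrap particle filter is sketched below. The importance distribution is the transition prior, the simplest possible choice, and the model and parameters are invented for illustration; the thesis's Lᵖ convergence analysis concerns far more general importance distributions:

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 50, 1000                                   # time steps, particles
q, r = 0.5, 0.3                                   # process / measurement noise

# Toy model: x_t = 0.9 x_{t-1} + sin(x_{t-1}) + q_t,  y_t = atan(x_t) + r_t.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = (0.9 * x_true[t - 1] + np.sin(x_true[t - 1])
                 + q * rng.standard_normal())
y = np.arctan(x_true) + r * rng.standard_normal(T)

particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(T):
    if t > 0:  # propagate through the dynamics (the importance distribution)
        particles = (0.9 * particles + np.sin(particles)
                     + q * rng.standard_normal(N))
    w = np.exp(-0.5 * (y[t] - np.arctan(particles)) ** 2 / r ** 2)
    w /= w.sum()                                  # normalized importance weights
    particles = particles[rng.choice(N, N, p=w)]  # multinomial resampling
    est[t] = particles.mean()                     # filtering posterior mean

print("posterior-mean RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```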