977 results for Non-linear map
Abstract:
Production efficiency plays an ever greater role in industry, which is why demanding requirements are also placed on packaging lines. Cutting and part-transfer applications often use linear screw drives, which could, under certain conditions, be replaced with cheaper and in part higher-performing toothed-belt-driven linear guides. A position-controlled work cell typically consists of guides mounted along two or three coordinate axes. The positioning accuracy of such a work cell is affected by, among other things, the control structure used, the delays of the motor control chain, and the various non-linearities of the equipment, such as friction. This thesis presents a mathematical model describing the dynamic behaviour of a linear toothed-belt servo drive and, based on that model, constructs a simulation model of the device. The validity of the model is verified with practical identification tests. In addition, the thesis investigates how good a performance a linear toothed-belt servo drive can achieve if the cascade or PID structure typically used in industry as the position control structure is replaced with a more advanced model-based state-space controller. The performance of the control is evaluated on the basis of simulations and measurements carried out on a test setup.
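A common way to capture the dynamics described above is a two-mass model in which the motor and the carriage are coupled through the elastic toothed belt, with friction included as a simple viscous term. The sketch below is only an illustrative assumption of such a model, not the model derived in the thesis; all parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-mass model of a toothed-belt linear servo drive (assumed values).
J_m, m_c = 1.2e-3, 8.0      # motor inertia [kg m^2], carriage mass [kg]
r = 0.02                    # pulley radius [m]
k_b, c_b = 4.0e5, 80.0      # belt stiffness [N/m] and damping [Ns/m]
b_m, b_c = 1e-4, 15.0       # viscous friction on motor side and carriage side

def dynamics(t, x, torque):
    th, w, p, v = x                      # motor angle/speed, carriage position/speed
    stretch = r * th - p                 # belt elongation between pulley and carriage
    f_belt = k_b * stretch + c_b * (r * w - v)
    dw = (torque(t) - r * f_belt - b_m * w) / J_m
    dv = (f_belt - b_c * v) / m_c
    return [w, dw, v, dv]

# Step torque command; the response shows the belt-induced oscillation of the carriage.
sol = solve_ivp(dynamics, (0.0, 0.5), [0, 0, 0, 0],
                args=(lambda t: 0.5 if t > 0.01 else 0.0,), max_step=1e-3)
print(f"final carriage position: {sol.y[2, -1] * 1000:.1f} mm")
```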
Abstract:
Coherent anti-Stokes Raman scattering (CARS) is a powerful method of laser spectroscopy in which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this thesis is to develop a new phase retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was developed so that it can be easily automated and used on a large number of spectra of different substances. The algorithm was successfully tested on experimental data.
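The wavelet-based background correction mentioned above can be illustrated with a minimal sketch: decompose the retrieved phase, keep only the coarse approximation as the slowly varying background, and subtract it. This is a generic illustration using PyWavelets, not the algorithm developed in the thesis; the wavelet family, decomposition level, and toy data are assumptions.

```python
import numpy as np
import pywt

def remove_slow_background(phase, wavelet="sym8", level=6):
    """Estimate a slowly varying background from the coarse wavelet
    approximation and subtract it from the phase trace."""
    coeffs = pywt.wavedec(phase, wavelet, level=level)
    # Zero every detail band: what survives reconstruction is the background estimate.
    background_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    background = pywt.waverec(background_coeffs, wavelet)[: len(phase)]
    return phase - background, background

# Toy example: a narrow Raman-like feature sitting on a broad drift.
x = np.linspace(0, 1, 1024)
phase = 0.5 * np.sin(2 * np.pi * 0.7 * x) + np.exp(-((x - 0.5) / 0.01) ** 2)
corrected, bg = remove_slow_background(phase)
```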
Abstract:
The primary objective is to identify the critical factors that inherently affect the performance measurement system. Correct decisions about measurement systems must be made within a complex business environment, and the performance measurement system is intertwined with highly complex non-linear factors. The Six Sigma methodology is seen as one potential approach at every organisational level; it is linked to performance and financial measurement as well as to the analytical thinking on which the management viewpoint depends. The complex systems are connected to the customer relationship study. The primary throughput can be seen as a new, well-defined performance measurement structure, supported by an analytical multifactor system. At the same time, these critical factors should also be seen as a business innovation opportunity. This master's thesis is divided into two theoretical parts. The empirical part combines action-oriented and constructive research approaches in an empirical case study. The secondary objective is to seek a competitive advantage factor with a new analytical tool and Six Sigma thinking. Process and product capabilities are linked to the contribution of the complex system, and the critical barriers are identified through the performance measurement system. The secondary throughput can be recognised as the product and process cost efficiencies, which are achieved through the advantage of management. The performance measurement potential is related to different productivity analyses, and productivity can be seen as an essential part of the competitive advantage factor.
Abstract:
STUDY QUESTION: What are the long term trends in the total (live births, fetal deaths, and terminations of pregnancy for fetal anomaly) and live birth prevalence of neural tube defects (NTD) in Europe, where many countries have issued recommendations for folic acid supplementation but a policy for mandatory folic acid fortification of food does not exist? METHODS: This was a population based, observational study using data on 11 353 cases of NTD not associated with chromosomal anomalies, including 4162 cases of anencephaly and 5776 cases of spina bifida from 28 EUROCAT (European Surveillance of Congenital Anomalies) registries covering approximately 12.5 million births in 19 countries between 1991 and 2011. The main outcome measures were total and live birth prevalence of NTD, as well as anencephaly and spina bifida, with time trends analysed using random effects Poisson regression models to account for heterogeneities across registries and splines to model non-linear time trends. SUMMARY ANSWER AND LIMITATIONS: Overall, the pooled total prevalence of NTD during the study period was 9.1 per 10 000 births. Prevalence of NTD fluctuated slightly but without an obvious downward trend, with the final estimate of the pooled total prevalence of NTD in 2011 similar to that in 1991. Estimates from Poisson models that took registry heterogeneities into account showed an annual increase of 4% (prevalence ratio 1.04, 95% confidence interval 1.01 to 1.07) in 1995-99 and a decrease of 3% per year in 1999-2003 (0.97, 0.95 to 0.99), with stable rates thereafter. The trend patterns for anencephaly and spina bifida were similar, but neither anomaly decreased substantially over time. The live birth prevalence of NTD generally decreased, especially for anencephaly. Registration problems or other data artefacts cannot be excluded as a partial explanation of the observed trends (or lack thereof) in the prevalence of NTD. WHAT THIS STUDY ADDS: In the absence of mandatory fortification, the prevalence of NTD has not decreased in Europe despite longstanding recommendations aimed at promoting peri-conceptional folic acid supplementation and existence of voluntary folic acid fortification. FUNDING, COMPETING INTERESTS, DATA SHARING: The study was funded by the European Public Health Commission, EUROCAT Joint Action 2011-2013. HD and ML received support from the European Commission DG Sanco during the conduct of this study. No additional data available.
Accelerated Microstructure Imaging via Convex Optimisation for regions with multiple fibres (AMICOx)
Abstract:
This paper reviews and extends our previous work to enable fast axonal diameter mapping from diffusion MRI data in the presence of multiple fibre populations within a voxel. Most of the existing microstructure imaging techniques use non-linear algorithms to fit their data models and are consequently computationally expensive and usually slow. Moreover, most of them assume a single axon orientation, while numerous regions of the brain actually present more complex configurations, e.g. fibre crossings. We present a flexible framework, based on convex optimisation, that enables fast and accurate reconstructions of the microstructure organisation, not limited to areas where the white matter is coherently oriented. We show through numerical simulations the ability of our method to correctly estimate the microstructure features (mean axon diameter and intra-cellular volume fraction) in crossing regions.
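The core idea of recasting microstructure fitting as a convex problem can be sketched as a non-negative least-squares fit of the measured signal against a precomputed dictionary of response atoms. The dictionary and signal below are synthetic placeholders; this illustrates only the linear formulation, not the AMICOx implementation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Synthetic dictionary: each column is the signal of one candidate
# microstructure configuration (e.g. an axon-diameter / orientation atom).
n_meas, n_atoms = 60, 40
dictionary = np.abs(rng.normal(size=(n_meas, n_atoms)))

# Ground truth: the signal is a sparse non-negative mixture of a few atoms.
true_w = np.zeros(n_atoms)
true_w[[3, 17]] = [0.7, 0.3]
signal = dictionary @ true_w + 0.01 * rng.normal(size=n_meas)

# Convex fit: non-negative least squares recovers the mixture weights.
weights, residual = nnls(dictionary, signal)
print("dominant atoms:", np.argsort(weights)[-2:], "residual:", round(residual, 4))
```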
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
BACKGROUND: The most important adverse effect of BoNT-A is the systemic diffusion of the toxin. There is some evidence that the administration of high doses can increase the risk of systemic diffusion and the development of clinically evident adverse effects; however, an international consensus on its maximum dose does not exist. AIM: The aim of this study was to evaluate changes in autonomic heart drive induced by high doses (higher than 600 units) of incobotulinumtoxinA injection in spastic stroke patients. Moreover, treatment safety was assessed by monitoring the occurrence of adverse events. DESIGN: Case control study. POPULATION: Eleven stroke survivors with spastic hemiplegia. METHODS: Patients were treated with intramuscular focal injections of incobotulinumtoxinA (NT 201; Xeomin®, Merz Pharmaceuticals GmbH, Frankfurt, Germany). Doses were below 12 units/kg. Each patient underwent an ECG recording before injection and 10 days after treatment. Linear and non-linear heart rate variability (HRV) measures were derived from the ECGs with dedicated software. RESULTS: None of the variables considered showed statistically significant changes after BoNT-A injection. CONCLUSION: The use of incobotulinumtoxinA in adult patients at doses up to 12 units/kg seems to be safe with regard to autonomic heart drive. CLINICAL REHABILITATION IMPACT: The use of incobotulinumtoxinA up to 600 units could be a safe therapeutic option in spastic hemiplegic stroke survivors.
Abstract:
ISSUES: There have been reviews on the association between density of alcohol outlets and harm including studies published up to December 2008. Since then the number of publications has increased dramatically. The study reviews the more recent studies with regard to their utility to inform policy. APPROACH: A systematic review found more than 160 relevant studies (published between January 2009 and October 2014). The review focused on: (i) outlet density and assaultive or intimate partner violence; (ii) studies including individual level data; or (iii) 'natural experiments'. KEY FINDINGS: Despite overall evidence for an association between density and harm, there is little evidence on causal direction (i.e. whether demand leads to more supply or increased availability increases alcohol use and harm). When outlet types (e.g. bars, supermarkets) are analysed separately, studies are too methodologically diverse and partly contradictory to permit firm conclusions besides those pertaining to high outlet densities in areas such as entertainment districts. Outlet density commonly had little effect on individual-level alcohol use, and the few 'natural experiments' on restricting densities showed little or no effects. IMPLICATIONS AND CONCLUSIONS: Although outlet densities are likely to be positively related to alcohol use and harm, few policy recommendations can be given as effects vary across study areas, outlet types and outlet cluster size. Future studies should examine in detail outlet types, compare different outcomes associated with different strengths of association with alcohol, analyse non-linear effects and compare different methodologies. Purely aggregate-level studies examining total outlet density only should be abandoned. [Gmel G, Holmes J, Studer J. Are alcohol outlet densities strongly associated with alcohol-related outcomes? A critical review of recent evidence. Drug Alcohol Rev 2015].
Abstract:
OBJECTIVES: Different accelerometer cutpoints used by different researchers often yield vastly different estimates of moderate-to-vigorous intensity physical activity (MVPA). This is recognized as cutpoint non-equivalence (CNE), which reduces the ability to accurately compare youth MVPA across studies. The objective of this research is to develop a cutpoint conversion system that standardizes minutes of MVPA for six different sets of published cutpoints. DESIGN: Secondary data analysis. METHODS: Data from the International Children's Accelerometer Database (ICAD; Spring 2014), consisting of 43,112 Actigraph accelerometer data files from 21 worldwide studies (children 3-18 years, 61.5% female), were used to develop prediction equations for six sets of published cutpoints. Linear and non-linear modeling, using a leave-one-out cross-validation technique, was employed to develop equations to convert MVPA from one set of cutpoints into another. Bland-Altman plots illustrate the agreement between actual and predicted MVPA values. RESULTS: Across the total sample, mean MVPA ranged from 29.7 min·d⁻¹ (Puyau) to 126.1 min·d⁻¹ (Freedson 3 METs). Across conversion equations, median absolute percent error was 12.6% (range: 1.3 to 30.1) and the proportion of variance explained ranged from 66.7% to 99.8%. The mean difference for the best performing prediction equation (VC from EV) was -0.110 min·d⁻¹ (limits of agreement (LOA), -2.623 to 2.402). The mean difference for the worst performing prediction equation (FR3 from PY) was 34.76 min·d⁻¹ (LOA, -60.392 to 129.910). CONCLUSIONS: For six different sets of published cutpoints, the use of this equating system can assist individuals attempting to synthesize the growing body of literature on Actigraph accelerometry-derived MVPA.
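A conversion equation of the kind described can be illustrated with a leave-one-out cross-validated linear fit that maps daily MVPA estimated under one cutpoint set onto another. The data below are synthetic and the cutpoint labels are placeholders; the published system also uses non-linear terms.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)

# Synthetic daily MVPA (min/day) under two hypothetical cutpoint sets.
mvpa_a = rng.uniform(10, 150, size=200)               # e.g. "EV"-style cutpoints
mvpa_b = 0.45 * mvpa_a + 5 + rng.normal(0, 4, 200)    # e.g. "VC"-style cutpoints

# Leave-one-out cross-validated predictions of one estimate from the other.
X = mvpa_a.reshape(-1, 1)
pred_b = cross_val_predict(LinearRegression(), X, mvpa_b, cv=LeaveOneOut())

bias = np.mean(pred_b - mvpa_b)        # Bland-Altman style mean difference
loa = 1.96 * np.std(pred_b - mvpa_b)   # half-width of the limits of agreement
print(f"mean difference {bias:.2f} min/day, limits of agreement +/- {loa:.2f}")
```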
Abstract:
The least squares method is analyzed. The basic aspects of the method are discussed. Emphasis is given to procedures that allow simple memorization of the basic equations associated with the linear and non-linear least squares methods, polynomial regression, and the multilinear method.
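For reference, the basic linear least squares problem the abstract alludes to reduces to solving the normal equations; the short sketch below, offered only as a generic illustration with made-up data, fits a straight line and a polynomial with the same machinery.

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Linear fit y = a + b*x via the normal equations (X^T X) beta = X^T y
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
print("intercept, slope:", beta)

# Polynomial regression is the same problem with extra columns x^2, x^3, ...
Xp = np.column_stack([x**k for k in range(3)])   # quadratic model
beta_poly = np.linalg.solve(Xp.T @ Xp, Xp.T @ y)
print("quadratic coefficients:", beta_poly)
```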
Abstract:
We describe the preparation and some optical properties of a high refractive index TeO2-PbO-TiO2 glass system. Highly homogeneous glasses were obtained by agitating the mixture during the melting process in an alumina crucible. The characterization was done by X-ray diffraction, Raman scattering, light absorption, and linear refractive index measurements. The results show a change in the glass structure as the PbO content increases: the TeO4 trigonal bipyramids characteristic of TeO2 glasses transform into TeO3 trigonal pyramids. However, the measured refractive indices are almost independent of the glass composition. We show that third-order nonlinear optical susceptibilities calculated from the measured refractive indices using Lines' theoretical model are also independent of the glass composition.
Abstract:
In this work we describe the synthesis and characterization of a chalcogenide glass (0.3La2S3-0.7Ga2S3) with low phonon frequencies. Several properties were measured, such as the Sellmeier parameters, the linear refractive index dispersion, and the material dispersion. Samples with the above composition were doped with Dy2S3. The absorption and emission characteristics were measured by electronic spectroscopy and fluorescence spectroscopy, respectively. Raman and infrared spectroscopy show that these glasses present low phonon frequencies and a structure composed of GaS4 tetrahedra. The Lines model was used to calculate the values of the non-linear refractive index coefficients.
Abstract:
In this work, telescopic boom profiles were developed for an aerial work platform intended for fire and rescue use. The profiles were manufactured from hot-rolled, ultra-high-strength weathering structural steel. Based on standards and design guides, a calculation spreadsheet was developed for studying the support reactions, bending and torsional moments, and shear and normal forces of the telescopic boom sections. The spreadsheet allows the directions of the different loads, the lateral reach of the telescopic boom, and the elevation angle to be varied. The preliminary dimensioning of the profiles made use of standards and design guides that account for local buckling. The properties of different cross-sections were compared and a profile was selected together with the target company. In connection with the preliminary dimensioning, a utility program was created for the selected cross-section to study the effect of different profile variables on, among other things, local buckling and stiffness. The spreadsheet also included an optimisation routine for minimising the cross-sectional area and thereby the mass of the profile. The final dimensioning was carried out with the finite element method. The local buckling of the preliminarily dimensioned profiles was studied on the basis of linear stability and non-linear analyses. The stresses in the profiles were examined in more detail by, among other things, varying the loads and partitioning the normal stresses of the elements. With the telescopic boom developed and analysed in this thesis, the weights of the sections could be reduced by 15-30%. At the same time, the lateral reach improved by nearly 20% and the nominal load increased by 25%.
Abstract:
One of the main problems in quantitative analysis of complex samples by x-ray fluorescence is related to interelemental (or matrix) effects. These effects appear as a result of interactions among sample elements, affecting the x-ray emission intensity in a non-linear manner. Basically, two main effects occur: absorption and enhancement of intensity. The combination of these effects can lead to serious problems. Many studies have been carried out proposing mathematical methods to correct for these effects. Basic concepts and the main correction methods are discussed here.
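One widely used family of matrix-effect corrections (not necessarily the specific methods discussed in the article) writes each concentration as its measured relative intensity multiplied by a factor containing empirical influence coefficients, in the Lachance-Traill form C_i = R_i (1 + Σ_j α_ij C_j), solved iteratively. The sketch below uses made-up intensities and coefficients purely as an illustration.

```python
import numpy as np

def lachance_traill(rel_intensity, alpha, n_iter=50):
    """Iteratively solve C_i = R_i * (1 + sum_j alpha_ij * C_j) for the
    concentrations C, starting from the uncorrected relative intensities."""
    conc = rel_intensity.copy()
    for _ in range(n_iter):
        conc = rel_intensity * (1.0 + alpha @ conc)
    return conc

# Made-up relative intensities and influence coefficients for a 3-element sample.
R = np.array([0.30, 0.45, 0.15])
alpha = np.array([[ 0.00, 0.12, -0.05],
                  [ 0.08, 0.00,  0.10],
                  [-0.02, 0.06,  0.00]])
print("corrected concentrations:", lachance_traill(R, alpha))
```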
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
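A minimal, linear version of the pairwise regularized least-squares idea can be sketched as ridge regression on feature differences: each "i preferred over j" pair contributes a difference vector with a target margin of +1. This is only an illustration of the general approach on synthetic data, not the kernel algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic items with a hidden linear utility; pairs encode "i preferred over j".
n_items, n_feat = 100, 5
X = rng.normal(size=(n_items, n_feat))
utility = X @ np.array([1.0, -0.5, 0.3, 0.0, 2.0])

pairs = [(i, j) for i in range(n_items) for j in rng.choice(n_items, 3)
         if utility[i] > utility[j]]
D = np.array([X[i] - X[j] for i, j in pairs])   # preference difference vectors
t = np.ones(len(pairs))                          # target margin of +1 per pair

lam = 1.0                                        # regularization parameter
w = np.linalg.solve(D.T @ D + lam * np.eye(n_feat), D.T @ t)

# Ranking quality on the training pairs: fraction ordered correctly by the scores.
scores = X @ w
acc = np.mean([scores[i] > scores[j] for i, j in pairs])
print(f"pairwise ordering accuracy: {acc:.2f}")
```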