965 results for Mean Field Analysis
Abstract:
This thesis presents methods for measuring fatigue loading, post-processing the measurement data, and fatigue design. The methods were applied to a forest machine loader, a welded structure subject to fatigue loading. The theoretical part describes the fatigue phenomenon and fatigue design methods, as well as methods for load identification and post-processing of measurements. Alongside the most commonly used fatigue design methods, a reliability-based fatigue design method is presented. Because of the lightness and service-life requirements, accounting for fatigue is especially important in loader design. Such structures characteristically contain certain welded details that are necessary for their function and that often determine the service life of the whole structure. Since these problem spots can usually be identified already at the design stage, shaping these details can often considerably improve the service life of the whole structure. Optimizing the details is partly possible without load-spectrum data, but in most cases load identification is a prerequisite for finding the best solution. For now, the best way to identify the actual fatigue loading is long-term field measurement. In field measurements the loads acting on the structure are determined with strain gauges. Load identification is particularly important when the service life of the structure is to be determined. Fatigue and fatigue loading are, however, statistical variables, and an exact service life cannot be determined for an individual structure. Using statistical methods it is nevertheless possible to determine the structure's risk of failure. When the failure risk is computed for a large number of individual structures, quite accurate predictions of the number of potential failures can be made. The load-spectrum data can then be of broader use beyond ordinary design, for example in warranty handling.
In this thesis the presented theories were applied in practice to the fatigue assessment of the boom assembly of a forest machine harvester. The loads on the structure were measured over two weeks for a total of 35 hours, and on this basis the statistical distribution of the fatigue loading was computed for the example case. No conclusions could be drawn from the measurement about the loads over the product's whole life cycle or about the loads on other similar products, because the measured sample was relatively short and limited to a single operator and a few work sites. For testing the methods, however, the sample was sufficient. The load-spectrum data was also used in forming quality specifications for the example case: a fracture-mechanics-based method was used to estimate the largest allowable size of possible casting defects in the harvester pillar casting. The need for reliability-based design procedures appears to be growing, so the efficient use of long-term field measurements will be a central part of fatigue design in the near future. The methods could be made more effective by combining the load spectrum with known load-dependent quantities such as the diameter of the tree being processed. Actual product-specific statistical load distributions could potentially be formed more efficiently if, for example, the dependence of the loads on forest type could first be determined.
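The statistical fatigue design described above rests on damage accumulation over a measured load spectrum. As an illustration only, the following is a minimal sketch of Palmgren-Miner damage summation over a rainflow-counted spectrum; the S-N curve constants (FAT class 80, slope m = 3) and the spectrum values are invented for the example, not taken from the thesis.

```python
# Hypothetical sketch: Palmgren-Miner damage accumulation for a welded detail.
# FAT class and slope m are illustrative assumptions, not the thesis's values.

def sn_cycles_to_failure(stress_range_mpa, fat_class=80.0, m=3.0):
    """Cycles to failure for a stress range on a Eurocode-style S-N line:
    N = 2e6 * (FAT / delta_sigma)^m."""
    return 2e6 * (fat_class / stress_range_mpa) ** m

def miner_damage(spectrum):
    """spectrum: list of (stress_range_MPa, cycle_count) pairs, e.g. from
    rainflow counting.  Failure is predicted when the damage sum reaches 1."""
    return sum(n / sn_cycles_to_failure(s) for s, n in spectrum)

# Invented load spectrum standing in for a field measurement
spectrum = [(120.0, 5e4), (80.0, 2e5), (40.0, 1e6)]
damage = miner_damage(spectrum)
print(f"Accumulated damage: {damage:.3f}")
```

In a reliability-based setting the spectrum itself would be treated as a statistical variable, and the damage sum evaluated over its distribution rather than a single measured sample.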
Abstract:
With the aim of monitoring the dynamics of the Livingston Island ice cap, the Departament de Geodinàmica i Geofísica of the Universitat de Barcelona began yearly surveys on Johnsons Glacier in the austral summer of 1994-95. During this field campaign 10 shallow ice cores were sampled with a manual vertical ice-core drilling machine. The objectives were: i) to detect the tephra layer accumulated on the glacier surface, attributed to the 1970 Deception Island pyroclastic eruption and today interstratified; ii) to verify whether this layer might serve as a reference level; iii) to measure the 137Cs radio-isotope concentration accumulated in the 1965 snow stratum; iv) to use the isochrone layer as a means of verifying the age of the 1970 tephra layer; and v) to calculate both the equilibrium line of the glacier and the average mass balance over the last 28 years (1965-1993). The stratigraphy of the cores, their cumulative density curves and the isothermal ice temperatures recorded confirm that Johnsons Glacier is a temperate glacier. Wind, solar radiation heating and liquid water are the main agents controlling the vertical and horizontal redistribution of the volcanic and cryoclastic particles that are sedimented and remain interstratified within the glacier. It is because of this redistribution that the 1970 tephra layer does not always serve as a very good reference level. The position of the equilibrium line altitude (ELA) in 1993, obtained by the 137Cs spectrometric analysis, varies from about 200 m a.s.l. to 250 m a.s.l. This indicates a rising trend in the equilibrium line altitude from the beginning of the 1970s to the present day. The varying slope orientation of Johnsons Glacier relative to the prevailing NE wind gives rise to large local differences in snow accumulation, which locally modifies the equilibrium line altitude. In the cores studied, 137Cs appears to be associated with the 1970 tephra layer.
This indicates an intense ablation episode throughout the sampled area (at least up to 330 m a.s.l.), which probably occurred synchronously with the 1970 tephra deposition or later. A rough estimate of the specific mass balance reveals a considerable accumulation gradient, with accumulation increasing with altitude.
Abstract:
We present an analytical procedure to perform the local noise analysis of a semiconductor junction when both the drift and diffusive parts of the current are important. The method takes into account space-inhomogeneous and hot-carrier conditions in the framework of the drift-diffusion model, and it can be effectively applied to the local noise analysis of different devices: n+nn+ diodes, Schottky barrier diodes, field-effect transistors, etc., operating under strongly inhomogeneous distributions of the electric field and charge concentration.
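The drift-diffusion model underlying the analysis can be illustrated numerically. The sketch below evaluates the local current density J(x) = q(n(x) μ E(x) + D dn/dx) on a 1-D grid under a strongly inhomogeneous carrier profile; the mobility, field and carrier profile are assumed demonstration values, not device data from the paper.

```python
import numpy as np

# Minimal sketch of the drift-diffusion current density on a 1-D grid:
#   J(x) = q * (n(x) * mu * E(x) + D * dn/dx),
# with the Einstein relation D = mu * kT / q.  All profiles and material
# constants below are illustrative assumptions.

q = 1.602e-19          # elementary charge, C
kT = 0.0259 * q        # thermal energy at room temperature, J
mu = 0.135             # electron mobility, m^2/(V s)  (assumed)
D = mu * kT / q        # diffusion constant via the Einstein relation

x = np.linspace(0.0, 1e-6, 201)            # 1 um device
n = 1e21 * np.exp(-x / 2e-7)               # assumed carrier profile, m^-3
E = np.full_like(x, 1e5)                   # uniform field, V/m (assumed)

J_drift = q * n * mu * E                   # drift part
J_diff = q * D * np.gradient(n, x)         # diffusion part
J_total = J_drift + J_diff
print(f"J at x=0: {J_total[0]:.3e} A/m^2")
```

Near the junction the two parts are of comparable magnitude and opposite sign, which is exactly the regime where a local noise analysis must retain both contributions.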
Abstract:
Direct torque control (DTC) is a new control method for rotating-field electrical machines. DTC controls the motor stator flux linkage directly with the stator voltage, and no stator current controllers are used. Very good torque dynamics can be achieved with the DTC method. Until now, DTC has been applied to asynchronous motor drives. The purpose of this work is to analyse the applicability of DTC to electrically excited synchronous motor drives. Compared with asynchronous motor drives, electrically excited synchronous motor drives require an additional control for the rotor field current. This field current control is called excitation control in this study. The dependence of the static and dynamic performance of DTC synchronous motor drives on the excitation control has been analysed, and a straightforward excitation control method has been developed and tested. In the field weakening range the stator flux linkage modulus must be reduced in order to keep the electromotive force of the synchronous motor smaller than the stator voltage and to maintain a sufficient voltage reserve. The dynamic performance of the DTC synchronous motor drive depends on the stator flux linkage modulus. Another important factor for the dynamic performance in the field weakening range is the excitation control. The field weakening analysis considers both dependencies. A modified excitation control method, which maximises the dynamic performance in the field weakening range, has been developed. In synchronous motor drives the load angle must be kept in a stable working area in order to avoid loss of synchronism. Traditional vector control methods allow the load angle of the synchronous motor to be adjusted directly by stator current control. In the DTC synchronous motor drive the load angle is not a directly controllable variable; it forms freely according to the motor's electromagnetic state and load.
The load angle can be limited indirectly by limiting the torque reference. This method is, however, parameter sensitive and requires a safety margin between the theoretical torque maximum and the actual torque limit. The DTC modulation principle, however, allows direct load angle adjustment without any current control. In this work a direct load angle control method has been developed. The method keeps the drive stable and allows maximal utilisation of the drive without a safety margin in the torque limitation.
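The DTC modulation principle mentioned above selects inverter voltage vectors from hysteresis-quantized flux and torque errors. As a schematic illustration only (in the style of the classic Takahashi switching table, not the thesis's controller), a minimal sketch of that selection logic:

```python
# Schematic sketch of classic DTC switching logic: two hysteresis
# comparators (flux, torque) and the stator-flux sector select one of
# the inverter voltage vectors V1..V6 (0 denotes a zero vector).
# Hysteresis bands and error values below are arbitrary.

SWITCH_TABLE = {
    # (flux_up, torque_up) -> voltage vector index for sectors 1..6
    (True, True):   [2, 3, 4, 5, 6, 1],
    (True, False):  [6, 1, 2, 3, 4, 5],
    (False, True):  [3, 4, 5, 6, 1, 2],
    (False, False): [5, 6, 1, 2, 3, 4],
}

def dtc_select(flux_err, torque_err, sector, hyst_flux=0.01, hyst_torque=0.05):
    """Pick an inverter voltage vector from the quantized errors.
    flux_err / torque_err: reference minus estimate; sector: 1..6."""
    if abs(torque_err) <= hyst_torque:
        return 0                      # zero vector: hold torque
    flux_up = flux_err > hyst_flux
    torque_up = torque_err > hyst_torque
    return SWITCH_TABLE[(flux_up, torque_up)][sector - 1]

print(dtc_select(0.02, 0.1, 1))   # raise both flux and torque in sector 1
```

Because no current controller appears anywhere in this loop, limiting the load angle must act either on the torque reference (the indirect method above) or directly on the vector selection, which is the route the direct load angle control method takes.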
Abstract:
Choosing industrial development options and allocating the corresponding research funds becomes more and more difficult because of increasing R&D costs and pressure for shorter development periods. Forecasting research progress is based on analysis of publication activity in the field of interest as well as on the dynamics of its change. Moreover, allocation of funds is hindered by the exponential growth in the number of publications and patents. Thematic clusters become more and more difficult to identify, and their evolution hard to follow. The existing approaches to structuring a research field and identifying its development are very limited: they do not identify thematic clusters with adequate precision, and the identified trends are often ambiguous. Therefore, there is a clear need for methods and tools able to identify developing fields of research. The main objective of this thesis is to develop tools and methods that help identify promising research topics in the field of separation processes. Two structuring methods as well as three approaches for identifying development trends are proposed. The proposed methods have been applied to the analysis of research on distillation and filtration. The results show that the developed methods are universal and could be used to study various fields of research. The identified thematic clusters and the forecasted trends of their development were confirmed in almost all tested cases, which demonstrates the universality of the proposed methods. The results allow identification of fast-growing scientific fields as well as topics characterized by stagnant or diminishing research activity.
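One simple way to separate fast-growing topics from stagnant ones, as described above, is to fit the exponential growth rate of a topic's yearly publication count. A minimal sketch follows; the yearly counts are invented for illustration and are not data from the thesis.

```python
import math

# Toy sketch of trend identification from publication counts: the
# least-squares slope of log(count) vs. year index approximates the
# exponential growth rate of a thematic cluster.

def growth_rate(yearly_counts):
    """Positive slope -> growing topic; near zero -> stagnant;
    negative -> declining research activity."""
    ys = [math.log(c) for c in yearly_counts]
    n = len(ys)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

growing = [12, 18, 27, 41, 62]       # hypothetical fast-growing topic
stagnant = [55, 54, 56, 53, 55]      # hypothetical stagnant topic
print(f"growing: {growth_rate(growing):.2f}, "
      f"stagnant: {growth_rate(stagnant):.2f}")
```

In practice the counts would come from clustered bibliographic records (publications or patents per thematic cluster per year) rather than a hand-entered list.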
Abstract:
This thesis aims to find an effective way of conducting a target audience analysis (TAA) in the cyber domain. Two main focal points are addressed: the nature of the cyber domain and the method of the TAA. For the cyber domain, the objective is to identify the opportunities, restrictions and caveats that result from its digital and temporal nature; this is the environment in which the TAA method is examined in this study. As the TAA is an important step of any psychological operation and critical to its success, the method used must cover all the main aspects affecting the choice of a proper target audience. The first part of the research was done by sending an open-ended questionnaire to operators in the field of information warfare both in Finland and abroad. As the results were inconclusive, the research was completed by assessing the applicability of United States Army Field Manual FM 3-05.301 in the cyber domain via a theory-based content analysis. FM 3-05.301 was chosen because it presents a complete method of the TAA process. The findings were tested against the results of the questionnaire and new scientific research in the field of psychology. The cyber domain was found to be "fast and vast", volatile and uncontrollable. Although governed by laws to some extent, the cyber domain is unpredictable by nature and cannot be controlled to any reasonable degree. The anonymity and lack of verification often present in digital channels mean that anyone can have an opinion, and any message sent may change or even become counterproductive to its original purpose. The TAA method of FM 3-05.301 is applicable in the cyber domain, although some parts of the method are outdated and should be updated if used in that environment. The target audience categories of step two of the process were replaced by new groups that exist in the digital environment.
The accessibility assessment (step eight) was also redefined, as in digital media the mere existence of a written text is typically not enough to convey the intended message to the target audience. The scientific studies made in computer science, psychology and sociology on the behavior of people in social media (and in the cyber domain overall) call for a more extensive remake of the TAA process. This falls, however, outside the scope of this work. It is thus suggested that further research be carried out in search of computer-assisted methods and a more thorough TAA process, utilizing the latest discoveries about human behavior.
Abstract:
One dune habitat in the semi-arid Caatinga biome, rich in endemisms, is described based on plant species composition, woody plant density, mean height and phenology, and on a multivariate analysis of the micro-habitats generated by variables associated with plants and topography. The local flora is composed mainly of typically sand-dwelling Caatinga species, suggesting the existence of a phytogeographic unit related to the sandy areas of the Caatinga biome, which seems to be corroborated by faunal distribution. Moreover, some species are probably endemic to the dunes, a pattern also found in vertebrates. The plant distribution is patchy, there is no conspicuous herbaceous layer, and almost 50% of the ground is exposed sand. Phenology is not synchronized among species: leaf budding and shedding, flower development and anthesis, and fruit production and dispersal occur in both rainy and dry seasons. Leaf shedding is low compared to the level usually observed in Caatinga areas, and about 50% of the woody individuals were producing leaves in both seasons. The spectrum of dispersal syndromes shows an unexpectedly high proportion of zoochorous species among the phanerophytes, accounting for 31.3% of the species, 78.7% of the total frequency and 78.6% of the total density. The dune habitat is very simple and homogeneous in structure, and most of the environmental variance in the area is explained by one gradient of woody plant density and another of increasing Bromelia antiacantha Bertol. (Bromeliaceae) and Tacinga inamoena (K. Schum.) N.P. Taylor & Stuppy (Cactaceae) toward the valleys, which seem to determine two kinds of protected micro-habitats for the small cursorial fauna.
Abstract:
Nineteen-channel EEGs were recorded from the scalp surface of 30 healthy subjects (16 males and 14 females, mean age: 34 years, SD: 11.7 years) at rest and under trains of intermittent photic stimulation (IPS) at rates of 5, 10 and 20 Hz. Digitized data were submitted to spectral analysis with fast Fourier transformation, providing the basis for the computation of global field power (GFP). For quantification, GFP values in the frequency ranges of 5, 10 and 20 Hz at rest were divided by the corresponding data obtained under IPS. All subjects showed a photic driving effect at each rate of stimulation. GFP data were normally distributed, whereas ratios from photic driving effect data showed no uniform behavior due to high interindividual variability. Suppression of alpha-power after IPS at 10 Hz was observed in about 70% of the volunteers. In contrast, ratios of alpha-power were unequivocal in all subjects: IPS at 20 Hz always led to a suppression of alpha-power. Dividing alpha-GFP under 20-Hz IPS by alpha-GFP at rest (R = alpha-GFP(IPS)/alpha-GFP(rest)) thus resulted in ratios lower than 1. We conclude that ratios from GFP data with 20-Hz IPS may provide a suitable paradigm for further investigations.
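The ratio paradigm above can be sketched in a few lines: GFP at a given frequency is the spatial standard deviation of the spectral values across electrodes, and R compares it between the 20-Hz IPS and rest conditions. The channel values below are simulated stand-ins, not recorded EEG.

```python
import numpy as np

# Sketch of the GFP ratio paradigm.  GFP = spatial standard deviation of
# (average-referenced) values across the 19 channels; R < 1 would indicate
# alpha suppression under 20-Hz IPS.  All numbers are simulated.

def gfp(channel_values):
    """Global field power: std of the values across channels."""
    v = np.asarray(channel_values, dtype=float)
    return np.sqrt(np.mean((v - v.mean()) ** 2))

rng = np.random.default_rng(0)
alpha_rest = rng.normal(10.0, 3.0, 19)     # 19 channels, arbitrary units
alpha_ips20 = rng.normal(10.0, 1.5, 19)    # narrower spread: assumed suppression

R = gfp(alpha_ips20) / gfp(alpha_rest)
print(f"R = {R:.2f}  (< 1 indicates alpha suppression under 20-Hz IPS)")
```

In the actual paradigm the channel values would be alpha-band spectral power obtained from the FFT of each electrode's signal rather than simulated draws.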
Abstract:
Many finite elements used in structural analysis possess deficiencies like shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, poor convergence, etc. An approach that can probably overcome many of these problems would be to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element will not only have nodal equilibrium of forces, but also inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations on the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered.
Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster when compared with other existing finite elements. The behaviour of field consistent elements would definitely generate a great deal of interest amongst the users of the finite elements.
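The core idea above is that the assumed displacement polynomials satisfy the stress field equilibrium equations identically. As an illustration only, the sketch below checks numerically that one such polynomial field satisfies the 2-D Navier equilibrium equations without body forces; the field u = (x² − y², −2xy) is a generic divergence-free harmonic choice, not one of the thesis's element shape functions.

```python
import numpy as np

# Finite-difference check of the 2-D Navier equilibrium equation
#   (lam + mu) * grad(div u) + mu * laplacian(u) = 0   (no body forces)
# for the polynomial field u = (x^2 - y^2, -2*x*y): div u = 0 and both
# components are harmonic, so the residual should vanish.

lam, mu = 1.0, 1.0                      # arbitrary Lame constants
h = 1e-3                                # finite-difference step
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))

def ux(x, y): return x**2 - y**2
def uy(x, y): return -2.0 * x * y

def lap(f, x, y):
    """Five-point Laplacian."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h**2

def div_u(x, y):
    dux_dx = (ux(x + h, y) - ux(x - h, y)) / (2 * h)
    duy_dy = (uy(x, y + h) - uy(x, y - h)) / (2 * h)
    return dux_dx + duy_dy

# x-equilibrium residual: (lam + mu) d(div u)/dx + mu * lap(ux)
ddiv_dx = (div_u(x + h, y) - div_u(x - h, y)) / (2 * h)
residual_x = (lam + mu) * ddiv_dx + mu * lap(ux, x, y)
print(f"max |equilibrium residual| = {np.abs(residual_x).max():.2e}")
```

A field equilibrium element would use a basis of such truncated polynomial solutions as its displacement interpolation, so this residual check is the property the elements are built around.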
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms addressing significant research problems. Data from molecular biology includes DNA, RNA, protein and gene expression data. Gene expression data provides the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data is organized in the form of a matrix: rows represent genes and columns represent experimental conditions, which can be different tissue types or time points, and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix.
Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous in the original matrix, and biclusters are not disjoint. Computing biclusters is costly, because all combinations of rows and columns must be considered in order to find all of them: the search space for the biclustering problem is 2^(m+n), where m and n are the number of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All of these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. The algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all of them and validated against the Gene Ontology database. All of the algorithms are also compared with other biclustering algorithms, and those developed in this work overcome some of the problems associated with existing ones. With the help of some of the algorithms developed here, biclusters whose row variance is higher than that achieved by any other algorithm using the mean squared residue are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
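The mean squared residue used by all ten algorithms can be stated compactly. The sketch below implements the standard Cheng-and-Church-style score: for a submatrix with rows I and columns J, H(I,J) is the mean of (a_ij − a_iJ − a_Ij + a_IJ)², where a_iJ, a_Ij and a_IJ are the row, column and overall means; the example matrices are invented.

```python
import numpy as np

# Mean squared residue (MSR) of a candidate bicluster.  A perfectly
# coherent (e.g. additive) submatrix scores 0; noise raises the score.

def msr(sub):
    sub = np.asarray(sub, dtype=float)
    row_means = sub.mean(axis=1, keepdims=True)   # a_iJ
    col_means = sub.mean(axis=0, keepdims=True)   # a_Ij
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# An additive pattern (row offsets + column offsets) scores (near) zero:
coherent = np.add.outer([0.0, 2.0, 5.0], [1.0, 3.0, 4.0, 7.0])
noisy = coherent + np.array([[0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 0]])
print(msr(coherent), msr(noisy))
```

A search algorithm would grow a seed submatrix greedily (or via PSO/GRASP moves) while keeping this score below the chosen threshold, trading bicluster size against coherence.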
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization versus field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as being nonspanning, one-dimensional-spanning, two-dimensional-spanning, or three-dimensional-spanning depending on whether or not they span the whole lattice in different space directions. Moreover, finite-size scaling analysis enables identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as a power law with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
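The driven T=0 dynamics described above can be sketched compactly: while the external field H is ramped, every spin misaligned with its local field flips, all such spins flipping simultaneously (synchronous relaxation), and the number of flips triggered at a given H is the avalanche size. The lattice size and disorder strength below are small demonstration values, not the study's parameters.

```python
import numpy as np

# Minimal sketch of T=0 random-field Ising model dynamics on a small cubic
# lattice with periodic boundaries, ramping the external field upward and
# recording avalanche sizes.  L, sigma and the ramp are arbitrary.

rng = np.random.default_rng(1)
L, J, sigma = 6, 1.0, 2.0
h = rng.normal(0.0, sigma, (L, L, L))      # quenched Gaussian random fields
s = -np.ones((L, L, L))                    # start saturated down

def neighbor_sum(s):
    """Sum of the six nearest neighbors (periodic boundaries)."""
    return sum(np.roll(s, d, axis=a) for a in range(3) for d in (1, -1))

def relax(s, H):
    """Synchronously flip every spin opposing its local field until
    stable; return the relaxed state and the avalanche size."""
    flipped = 0
    while True:
        local = J * neighbor_sum(s) + h + H
        unstable = (s * local) < 0
        if not unstable.any():
            return s, flipped
        s = np.where(unstable, -s, s)
        flipped += int(unstable.sum())

sizes = []
for H in np.linspace(-6, 6, 121):          # slowly ramp the field up
    s, n = relax(s, H)
    if n:
        sizes.append(n)
print(f"avalanches: {len(sizes)}, largest: {max(sizes)}")
```

Classifying each avalanche as nonspanning or spanning in one, two or three directions would additionally require tracking which lattice coordinates the flipped cluster covers.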
Abstract:
In the field of structural dynamics, computer-aided model validation techniques are now widespread. Experimental modal data are used to correct a numerical model for further analyses. Nevertheless, the validated model represents only the dynamic behavior of the structure that was tested. In reality there are many factors that inevitably lead to varying modal test results: changing ambient conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g. from series production), etc. For a stochastic simulation to be carried out, a series of assumptions must be made about the random variables used. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This work describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method is based on first-order sensitivities, with which parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
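The first-order idea can be sketched in a few lines: with a sensitivity matrix S relating modal outputs to model parameters, output scatter propagates forward as S·Cov_θ·Sᵀ, and a parameter covariance can conversely be estimated from measured modal scatter via the pseudo-inverse. The matrix S and the covariances below are invented for illustration, not the thesis's model.

```python
import numpy as np

# First-order covariance propagation and its inverse.  S approximates
# d(eigenfrequencies)/d(parameters); all numbers are illustrative.

S = np.array([[2.0, 0.5],
              [0.3, 1.5],
              [1.0, 1.0]])               # 3 modal outputs, 2 parameters

cov_theta = np.diag([0.04, 0.01])        # assumed parameter variances
cov_f = S @ cov_theta @ S.T              # forward: predicted modal scatter

# Inverse step: recover a parameter covariance from the output covariance
S_pinv = np.linalg.pinv(S)
cov_theta_est = S_pinv @ cov_f @ S_pinv.T
print(np.round(cov_theta_est, 6))
```

Since S here has full column rank, the inverse step recovers the parameter covariance exactly; with real, noisy modal data the estimate would only approximate it, which is where the identification method's careful treatment of means and covariances comes in.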
Abstract:
Abstract taken from the publication. Abstract also available in English.