908 results for Algorithm Analysis and Problem Complexity
Abstract:
Interaction analysis is not a prerogative of any discipline in social sciences. It has its own history within each disciplinary field and is related to specific research objects. From the standpoint of psychology, this article first draws upon a distinction between factorial and dialogical conceptions of interaction. It then briefly presents the basis of a dialogical approach in psychology and focuses upon four basic assumptions. Each of them is examined on a theoretical and on a methodological level with a leading question: to what extent is it possible to develop analytical tools that are fully coherent with dialogical assumptions? The conclusion stresses the difficulty of developing methodological tools that are fully consistent with dialogical assumptions and argues that there is an unavoidable tension between accounting for the complexity of an interaction and using methodological tools which necessarily "monologise" this complexity.
Abstract:
In addition to genetic changes affecting the function of gene products, changes in gene expression have been suggested to underlie many or even most of the phenotypic differences among mammals. However, detailed gene expression comparisons were, until recently, restricted to closely related species owing to technological limitations. We therefore took advantage of the latest technologies (RNA-Seq) to generate extensive qualitative and quantitative transcriptome data for a unique collection of somatic and germline tissues from representatives of all major mammalian lineages (placental mammals, marsupials and monotremes) and birds, the evolutionary outgroup.
In the first major project of my thesis, we performed global comparative analyses of gene expression levels based on these data. Our analyses provided fundamental insights into the dynamics of transcriptome change during mammalian evolution (e.g., the rate of expression change across species, tissues and chromosomes) and allowed the exploration of the functional relevance and phenotypic implications of transcription changes at a genome-wide scale (e.g., we identified numerous potentially selectively driven expression switches).
In a second project, also based on the transcriptome data generated in the first project, we focused on the evolution of alternative splicing in mammals. Alternative splicing contributes to transcriptome complexity by generating several transcript isoforms from a single gene, which can thus perform various functions. To complement the global comparative analysis of gene expression changes, we explored patterns of alternative splicing evolution. This work uncovered several general and unexpected patterns (e.g., we found that alternative splicing evolves extremely rapidly) as well as a large number of conserved alternative isoforms that may be crucial for the functioning of mammalian organs.
Finally, the third project of my PhD consisted of a detailed analysis of the unique functional and evolutionary properties of the testis, through an exploration of the extent of its transcriptome complexity. This organ was previously shown to evolve rapidly at both the phenotypic and the molecular level, apparently because of the specific pressures associated with its reproductive function. Moreover, my analyses of the amniote tissue transcriptome data described above revealed strikingly widespread transcriptional activity of both functional and nonfunctional genomic elements in the testis compared to the other organs. To elucidate the cellular source and mechanisms underlying this promiscuous transcription, we generated deep-coverage RNA-Seq data for all major testis cell types, as well as epigenetic data (DNA and histone methylation), using the mouse as a model system. The integration of these complete datasets revealed that meiotic and especially post-meiotic germ cells are the major contributors to the widespread functional and nonfunctional transcriptome complexity of the testis, and that this "promiscuous" spermatogenic transcription results, at least partially, from an overall transcriptionally permissive chromatin state. We hypothesize that this open chromatin state results from the extensive chromatin remodeling that occurs during spermatogenesis, which ultimately leads to the replacement of histones by protamines in the mature spermatozoa. Our results have important functional and evolutionary implications (e.g., regarding new gene birth and testicular gene expression evolution).
Overall, these three large-scale projects provide complete and extensive datasets that constitute valuable resources for further functional and evolutionary analyses of mammalian genomes.
Abstract:
The spatial variability of soil and plant properties exerts great influence on the yield of agricultural crops. This study analyzed the spatial variability of the fertility of a Humic Rhodic Hapludox under Arabica coffee, using principal component analysis, cluster analysis and geostatistics in combination. The experiment was carried out in an area under Coffea arabica L., variety Catucai 20/15 - 479. The soil was sampled at a depth of 0.20 m at 50 points of a sampling grid. The following chemical properties were determined: P, K+, Ca2+, Mg2+, Na+, S, Al3+, pH, H + Al, SB, t, T, V, m, OM, Na saturation index (SSI), remaining phosphorus (P-rem), and micronutrients (Zn, Fe, Mn, Cu and B). The data were analyzed with descriptive statistics, followed by principal component and cluster analyses. Geostatistics was then used to check and quantify the degree of spatial dependence of the properties represented by the principal components. The principal component analysis allowed a dimensional reduction of the problem, providing interpretable components with little information loss. Despite the information loss characteristic of principal component analysis, the combination of this technique with geostatistical analysis was efficient for quantifying and determining the structure of spatial dependence of soil fertility. In general, the availability of soil mineral nutrients was low and the levels of acidity and exchangeable Al were high.
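The pipeline this abstract describes (standardize the fertility variables, reduce them with PCA, then fit a geostatistical model to the component scores) can be illustrated with a minimal Python sketch. The synthetic 50-point grid and the eight stand-in variables below are hypothetical, not the study's data:

```python
# Minimal sketch: PCA on standardized soil-fertility variables, then an
# empirical semivariogram of the first component's scores -- the quantity a
# geostatistical model (e.g., kriging) would be fitted to. Synthetic data
# stand in for the study's 50-point grid.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))   # hypothetical x, y positions (m)
X = rng.normal(size=(50, 8))                 # hypothetical P, K, Ca, ... values

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
pc1 = scores[:, 0]

# Empirical semivariogram: gamma(h) = mean of 0.5 * squared differences
# between all point pairs, grouped into distance bins.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
i, j = np.triu_indices(50, k=1)
lags, sq = d[i, j], 0.5 * (pc1[i] - pc1[j]) ** 2
edges = np.linspace(0, lags.max(), 10)
bin_id = np.digitize(lags, edges)
for b in np.unique(bin_id):
    print(f"lag bin {b}: gamma = {sq[bin_id == b].mean():.3f}")
```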
Abstract:
Performance-related pay within public organizations is continuing to spread. Although it can help to strengthen an entrepreneurial spirit in civil servants, its implementation is marred by technical, financial, managerial and cultural problems. This article identifies an added problem, namely the contradiction that exists between a managerial discourse that emphasizes the team and collective performance, on the one hand, and the use of appraisal and reward tools that are above all individual, on the other. Based on an empirical survey carried out within Swiss public organizations, the analysis shows that the team is currently rarely taken into account and singles out the principal routes towards an integrated system for the management and rewarding of civil servants.
Abstract:
The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data exploratory tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and thematic feature subsets, both for feature selection and for determining optimal kernel parameters. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
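The core idea behind MKL (a weighted combination of kernels, one per feature or feature subset, whose learned weights double as feature-relevance scores) can be sketched as follows. Here the weights are fixed by hand rather than learned, and the terrain features and wind-speed target are synthetic placeholders:

```python
# Sketch of per-feature kernel combination, the core idea behind MKL. True
# MKL learns the weights d_m jointly with the SVR; here they are fixed by
# hand for brevity. Features and target are synthetic placeholders.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))   # stand-ins for slope, curvature, derivative
y = X @ np.array([1.0, -0.5, 0.2]) + 0.1 * rng.normal(size=200)

weights = [0.5, 0.3, 0.2]       # would be learned in MKL; their magnitudes
                                # are what makes the model interpretable
K = sum(w * rbf_kernel(X[:, [m]], X[:, [m]], gamma=1.0)
        for m, w in enumerate(weights))

model = SVR(kernel="precomputed").fit(K, y)
print(model.predict(K[:5]))     # rows of K act as test-vs-train kernels
```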
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor, in particular that of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for spatial modeling.
The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level. Existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis followed a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized to satisfy the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools.
Next, different spatial interpolation methods were applied to a particular dataset. A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. Finally, a series of tests of data consistency and method robustness was performed, leading to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions.
The last part was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for reproducing the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed more robustly. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to high-threshold categorization and to automation, whereas support vector machines (SVM) performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is best under all conditions of scale and neighborhood definition. Simulations should form the basis, while other methods can provide complementary information to support efficient indoor radon decision-making.
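As one concrete piece of this toolbox, the classical Morisita index that the proposed QMI builds on takes only a few lines; the grid size and the synthetic point patterns below are hypothetical:

```python
# Sketch of the classical Morisita index, which the Quantité Morisita Index
# (QMI) extends: values near 1 indicate spatial randomness, values > 1
# clustering. Synthetic points stand in for radon measurement locations.
import numpy as np

def morisita(points, q):
    """Morisita index on a q x q grid of quadrats over the unit square."""
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=q,
                                  range=[[0, 1], [0, 1]])
    n = counts.ravel()
    N = n.sum()
    return q * q * (n * (n - 1)).sum() / (N * (N - 1))

rng = np.random.default_rng(2)
uniform = rng.uniform(size=(500, 2))
clustered = 0.5 + 0.05 * rng.normal(size=(500, 2))
print(morisita(uniform, 10))     # ~1: random pattern
print(morisita(clustered, 10))   # >> 1: clustered pattern
```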
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at the development of predictive models and an understanding of IRC in Switzerland, taking into account a maximum of information in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The models are based on different data-driven statistical methods in combination with geographical information systems (GIS).
In a first phase, we performed univariate analyses of IRC for different variables: detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC than earlier constructions, and we observed a further drop in IRC after 1970. We also found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. To facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which defines geological classes that are coherent in terms of IRC. We also performed a soil gas 222Rn concentration (SRC) measurement campaign to determine the predictive power of SRC with respect to IRC, and found that the usefulness of SRC for IRC prediction is limited.
The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose, and could explain up to 33% of the variance of the log-transformed IRC across Switzerland, a good performance compared to former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting; we found spatial information like geology, altitude and coordinates to have a stronger influence on IRC than building-related variables like foundation type, building type and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3, together with a confidence index providing an estimate of the uncertainty of the map.
All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment that simultaneously accounts for different architectural situations and for geological and geographical conditions. For communicating the 222Rn hazard to the population, we recommend using the probability map based on kernel estimation; this could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home and obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Given the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
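The random-forest importance ranking described above can be sketched in a few lines; the predictors and the synthetic log-IRC signal below are hypothetical placeholders, not the Swiss data:

```python
# Sketch of ranking IRC predictors with a random forest. The features and
# the synthetic log-IRC target are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000
X = pd.DataFrame({
    "altitude":     rng.uniform(200, 2000, n),
    "outdoor_temp": rng.uniform(-5, 15, n),
    "year_built":   rng.integers(1850, 2010, n).astype(float),
    "foundation":   rng.integers(0, 3, n).astype(float),  # encoded category
})
# Hypothetical log-IRC dominated by spatial/altitude effects
y = 0.002 * X["altitude"] - 0.01 * X["outdoor_temp"] + rng.normal(0, 1, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>12s}  {imp:.3f}")
```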
Abstract:
The purpose of this work is to compile the measurement problems of the pulping process and the possible measurement techniques for solving them. The main emphasis is on online measurement techniques. The work consists of three parts. The first part is a literature review presenting the basic measurements and control needs of a modern pulping process, covering the whole fiber line from wood handling to bleaching as well as the chemical recovery cycle: evaporation plant, recovery boiler, causticizing plant and lime kiln. In the second part, the measurement problems and candidate measurement techniques are compiled into a "roadmap". The information was gathered by visiting three Finnish pulp mills and interviewing experts in equipment and measurement technology. The interviews indicated a need for a better understanding of process chemistry, and concentration measurements were therefore selected for further study. The final part presents possible measurement techniques for solving the concentration measurement problems. The selected techniques are near-infrared spectroscopy (NIR), Fourier transform infrared spectroscopy (FTIR), online capillary electrophoresis (CE) and laser-induced plasma emission spectroscopy (LIPS). All of these techniques can be used online as process development tools. Development costs were estimated for an online device connected to process control; they range from zero person-years for FTIR to five person-years for the CE device, depending on the maturity of the technique and its readiness for the problem in question. The final part also assesses the techno-economic feasibility of solving one particular measurement problem: the measurement of washing loss. Lignin content would describe the true washing loss better than the current measurements, which are based on either sodium or COD washing loss. Lignin content can be measured by UV absorption; the CE device could also be used for washing loss measurement, at least in the process development phase. The economic analysis rests on many simplifications and is not directly suitable for supporting investment decisions. A better measurement and control system could stabilize washer operation. An investment in such a stabilizing system is profitable if the actual operating point is far enough from the cost minimum or if washer operation fluctuates, i.e., the standard deviation of the washing loss is large. A measurement and control system costing €50,000 reaches a payback time of less than 0.5 years in unstable operation if the COD washing loss varies between 5.2 and 11.6 kg/odt around a setpoint of 8.4 kg/odt; the dilution factor then varies between 1.7 and 3.6 m3/odt around a setpoint of 2.5 m3/odt.
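The closing payback claim can be checked with back-of-envelope arithmetic. The annual-savings figure below is an assumption implied by the stated cost and payback time, not a number given in the work:

```python
# Back-of-envelope check of the payback claim: a system costing EUR 50,000
# with a payback under 0.5 years implies annual savings above EUR 100,000.
# The savings figure below is an assumption consistent with that claim.
investment = 50_000          # EUR, from the text
annual_savings = 120_000     # EUR/year, hypothetical
print(f"payback: {investment / annual_savings:.2f} years")  # -> 0.42
```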
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetic data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and on a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied; the accuracy used did not seem to have a notable effect on identifiability. The use of the posterior distribution of the parameters across different heat exchanger geometries was studied, since it would be computationally most efficient to reuse the same posterior distribution among geometries when optimising heat exchanger networks. According to the results, this is possible when the frontal surface areas are the same among the geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers, the numerical stability of the simulation model was studied, and a stable algorithm was developed as a result.
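For illustration, a bare-bones random-walk Metropolis sampler conveys the MCMC machinery involved; the two-step speed-up and the heat exchanger model are not reproduced here, and the Gaussian target below is a toy stand-in for the true posterior:

```python
# Minimal random-walk Metropolis sampler. The toy Gaussian posterior stands
# in for the heat exchanger model's likelihood; the thesis's two-step
# variant is not reproduced here.
import numpy as np

def log_post(theta):
    # Toy log-posterior: independent Gaussians centred at (1, 2)
    return -0.5 * np.sum((theta - np.array([1.0, 2.0])) ** 2)

rng = np.random.default_rng(4)
theta, chain = np.zeros(2), []
for _ in range(5000):
    proposal = theta + 0.5 * rng.normal(size=2)
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal                  # accept
    chain.append(theta)

print(np.mean(chain, axis=0))             # should approach [1.0, 2.0]
```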
Abstract:
Papermaking is often disturbed by unwanted compounds that can form deposits on process surfaces, which in turn can cause disruptions in paper production and deterioration of paper quality. Besides deposits of wood resin, stone-like scales of poorly soluble salts are common. In everyday life, lime scale in coffee makers and kettles is an example of a similar problem. In the pulp and paper industry, one of the most problematic compounds is calcium oxalate: this salt is almost insoluble in water, and its deposits are very difficult to remove. Calcium oxalate is also known as one of the causes of kidney stones in humans. Wood, and especially bark, always contains a certain amount of oxalate, but a larger source is the oxalic acid formed when pulp is bleached with oxidizing chemicals such as hydrogen peroxide. Calcium oxalate forms when the oxalic acid reacts with calcium entering the process with the raw water, the wood or various additives. This thesis investigated the factors affecting the formation of oxalic acid and the precipitation of calcium oxalate by means of bleaching and precipitation experiments. The research focused in particular on ways of preventing deposit formation in the manufacture of wood-containing paper. The results show that the formation of oxalic acid and the precipitation of calcium oxalate can be influenced by both process-engineering and wet-end chemistry measures. Careful debarking of the wood, controlled conditions during alkaline peroxide bleaching, careful handling and control of other dissolved and colloidal substances, and the use of tailored deposit-control chemistry are the key factors. The results can be applied when designing bleaching sequences for different pulps and when solving problems caused by calcium oxalate. The research methods used in the precipitation studies and in the evaluation of additives can also be exploited in other fields, such as the brewing and sugar industries, where calcium oxalate problems are common.
Abstract:
Research indicates that Obsessive-Compulsive Disorder (OCD; DSM-IV-TR, American Psychiatric Association, 2000) is the second most frequent disorder to coincide with Autism Spectrum Disorder (ASD; Leyfer et al., 2006). Excessive collecting and hoarding are also frequently reported in children with ASD (Berjerot, 2007). Although functional analysis (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994) has successfully identified maintaining variables for repetitive behaviours such as bizarre vocalizations (e.g., Wilder, Masuda, O'Connor, & Baham, 2001), tics (e.g., Scotti, Schulman, & Hojnacki, 1994), and habit disorders (e.g., Woods & Miltenberger, 1996), extant literature on OCD and functional analysis methodology is scarce (May et al., 2008). The current studies used functional analysis methodology to identify the types of operant functions associated with the OCD-related hoarding behaviour of a child with ASD and examined the efficacy of function-based intervention. Results supported hypotheses of automatic and socially mediated positive reinforcement. A corresponding function-based treatment plan incorporated antecedent strategies and differential reinforcement (Deitz, 1977; Lindberg, Iwata, Kahng, & DeLeon, 1999; Reynolds, 1961). Reductions in problem behaviour were evidenced through a multiple-baseline-across-behaviours design and were maintained at two-month follow-up. Decreases in symptom severity were also discerned through subjective measures of treatment effectiveness.
Abstract:
As the complexity of evolutionary design problems grows, so too must the quality of solutions scale with that complexity. In this research, we develop a genetic programming system whose individuals are encoded as tree-based generative representations to address scalability. The system is capable of multi-objective evaluation using a ranked-sum scoring strategy. We examine Hornby's features and measures of modularity, reuse and hierarchy in evolutionary design problems. Experiments were carried out using the system to generate three-dimensional forms, and the characteristics of modularity, reuse and hierarchy were analyzed. This work expands on Hornby's by examining a new and more difficult problem domain. The results from these experiments show that individuals encoded with all three features performed best overall, and that the measures of complexity conform to Hornby's results. Moving forward with only the best-performing encoding, the system was applied to the generation of three-dimensional external building architecture. One objective considered was passive solar performance, in which the system was challenged to generate forms that optimize exposure to the Sun. The results from these and other experiments satisfied the requirements, and the system was shown to scale well to the architectural problems studied.
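The ranked-sum scoring strategy mentioned above is simple to state in code: rank the population on each objective separately, then sum the ranks. The objective values below are hypothetical stand-ins for the design evaluations:

```python
# Sketch of ranked-sum multi-objective scoring: rank individuals on each
# objective independently, sum the ranks, and prefer the lowest total.
# Objective values are hypothetical stand-ins (lower is better).
import numpy as np

scores = np.array([[3.2, 1.5],   # individual 0: (objective A, objective B)
                   [1.1, 2.5],   # individual 1
                   [2.0, 1.0]])  # individual 2

ranks = scores.argsort(axis=0).argsort(axis=0)  # 0-based rank per objective
ranked_sum = ranks.sum(axis=1)
print(ranked_sum, "-> best individual:", ranked_sum.argmin())
```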
Abstract:
Ordered gene problems are a very common class of optimization problems. Because of their popularity, countless algorithms have been developed in an attempt to find high-quality solutions to them, and many other types of problems are commonly reduced to ordered gene form so that these well-studied heuristics and metaheuristics can be applied. Several ordered gene problems are studied here: the travelling salesman problem, the bin packing problem, and the graph colouring problem. In addition, two bioinformatics problems not traditionally seen as ordered gene problems are studied: DNA error correction and DNA fragment assembly. These problems are examined with multiple variations and combinations of heuristics and metaheuristics, using two distinct representations. The majority of the algorithms are built around the Recentering-Restarting Genetic Algorithm. The algorithm variations were successful on all problems studied, particularly the two bioinformatics problems. For DNA error correction, multiple cases were found in which 100% of the codes were corrected. The algorithm variations were also able to beat all other state-of-the-art DNA fragment assemblers on 13 out of 16 benchmark problem instances.
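The "ordered gene" representation is a permutation chromosome, and its crossover operators must preserve permutation validity. A simplified order crossover, one standard such operator (a generic textbook sketch, not the Recentering-Restarting machinery itself), looks like this:

```python
# Simplified order crossover (OX) on permutation ("ordered gene")
# chromosomes: copy a slice from one parent, fill the rest in the other
# parent's order. Generic textbook operator, not the thesis's algorithm.
import random

def order_crossover(p1, p2, rng=random):
    a, b = sorted(rng.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                       # slice from parent 1
    fill = [g for g in p2 if g not in child]   # rest, in parent-2 order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

random.seed(5)
print(order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 4, 2]))
```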
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective optimization methods. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research reported here is to build robust optimization algorithms by exploiting ideas from nature and using them to solve real-world optimization problems in manufacturing processes.
After an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirlosker turn master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the S/N analysis yielded the optimum machining parameters from the experiments.
Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions within the relevant search space in order to approach the true optimum. A mathematical model for surface roughness was developed using response surface analysis and validated against published results from the literature. Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness and accuracy, and they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary method, optimum cutting conditions are provided to achieve a better surface finish.
The computational results using SA clearly demonstrated that the proposed solution procedure is capable of solving such complicated problems effectively and efficiently. PSO is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behaviour of biological populations; the results show that PSO gives better results and is also more computationally efficient. Of the two genetic algorithms, the proposed IGA provides better results than the conventional GA; it incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search mechanism.
Finally, the algorithms were compared on the specific example of dry turning of SS 420 material, arriving at the optimum feed, cutting speed, depth of cut and tool nose radius for minimum surface roughness as the criterion. In summary, the research fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures with which nature optimizes its own systems.
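Of the methods compared, PSO is the simplest to sketch. The thesis's implementations were in MATLAB and its fitted response-surface model is not reproduced; the quadratic roughness surface below is a hypothetical stand-in, and the Python loop shows only the bare PSO mechanics:

```python
# Bare-bones PSO minimizing a hypothetical surface-roughness surface
# Ra(speed, feed, depth), all scaled to [0, 1]. The quadratic stand-in
# below replaces the response-surface model fitted in the thesis.
import numpy as np

def roughness(x):
    return np.sum((x - np.array([0.6, 0.2, 0.4])) ** 2, axis=-1)

rng = np.random.default_rng(6)
n, dim = 30, 3
pos = rng.uniform(size=(n, dim))      # particle positions
vel = np.zeros((n, dim))              # particle velocities
pbest, pbest_val = pos.copy(), roughness(pos)
gbest = pbest[pbest_val.argmin()]

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = roughness(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print(gbest)   # converges towards the optimum [0.6, 0.2, 0.4]
```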
Abstract:
This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using a 2-jointed and a 4-jointed model of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions for sensorless and sensor-driven grasps, with and without arm motions. While the approach does not rely on detailed modeling, it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4-degree-of-freedom hand.
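The abstract stays high-level, but the standard computational test in this area (for frictionless planar point contacts, a grasp achieves closure exactly when the origin lies strictly inside the convex hull of the contact wrenches) is easy to sketch. The square-and-four-contacts example below is hypothetical and is not the thesis's pre-image construction:

```python
# Sketch of a standard planar force/form-closure test: with frictionless
# point contacts, closure holds iff the origin is strictly inside the
# convex hull of the contact wrenches (fx, fy, torque). Hypothetical
# contacts on a unit square; not the thesis's pre-image construction.
import numpy as np
from scipy.spatial import ConvexHull

def wrench(p, n):
    # Planar wrench of a unit contact force n applied at point p
    return np.array([n[0], n[1], p[0] * n[1] - p[1] * n[0]])

def force_closure(points, normals, eps=1e-9):
    W = np.array([wrench(p, n) for p, n in zip(points, normals)])
    try:
        hull = ConvexHull(W)
    except Exception:                 # degenerate wrench set: no closure
        return False
    # Facets satisfy A x + b <= 0 inside; origin strictly inside iff b < 0
    return bool(np.all(hull.equations[:, -1] < -eps))

# Four offset contacts on a unit square, normals pointing inward
pts = np.array([[-1.0, 0.3], [1.0, -0.3], [0.3, -1.0], [-0.3, 1.0]])
nrm = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
print(force_closure(pts, nrm))        # True: the contacts pin the square
```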