716 results for pacs: mathematical computing
Abstract:
Based on experimental tests, equations were obtained for drying, equilibrium moisture content, the latent heat of vaporization of the water contained in the product, and the specific heat of cassava starch pellets. These are essential parameters for the modeling and mathematical simulation of the mechanical drying of cassava starch under a newly proposed technique, consisting of preforming by pelleting followed by artificial drying of the starch pellets. Drying tests were conducted in an experimental chamber by varying the air temperature, relative humidity, air velocity and product load. The specific heat of the starch was determined by differential scanning calorimetry. The resulting equations were validated through regression analysis, which found a good correlation with the data, indicating that these equations can be used to accurately model and simulate the drying process of cassava starch pellets.
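The regression step described above can be sketched as follows. The thesis derives its own drying equation; the Page thin-layer model MR = exp(-k·t^n) used here is only a common stand-in, and the data points are hypothetical, not the experimental measurements:

```python
import numpy as np

# Page thin-layer drying model: MR = exp(-k * t**n).
# Linearised form for regression: ln(-ln MR) = ln k + n * ln t.
# Times and moisture ratios below are hypothetical illustration data.
t = np.array([0.5, 1, 2, 4, 6, 8])                    # drying time, h
mr = np.array([0.82, 0.65, 0.42, 0.18, 0.08, 0.04])   # moisture ratio

n, ln_k = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
k = np.exp(ln_k)

pred = np.exp(-k * t**n)
r2 = 1 - np.sum((mr - pred)**2) / np.sum((mr - mr.mean())**2)
print(f"k = {k:.3f} 1/h, n = {n:.3f}, R^2 = {r2:.3f}")
```

A high R² on the linearised fit is what "an appropriate correlation of the data" amounts to in practice; with real data one would also validate against a held-out test run.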
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is as yet unclear which applications would benefit most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
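The voltage-controlled memristance described above can be illustrated with the linear ion-drift model that accompanied the 2008 HP Labs device report (Strukov et al.). This is a generic textbook model with illustrative parameter values, not one of the device models developed in the thesis:

```python
import numpy as np

# Linear ion-drift memristor model: M(x) = R_on*x + R_off*(1-x),
# dx/dt = mu * R_on / D^2 * i(t). Parameter values are illustrative.
R_on, R_off = 100.0, 16_000.0    # limiting resistances, ohm
D, mu = 10e-9, 1e-14             # film thickness (m), ion mobility (m^2/(V*s))
f, dt, steps = 2.0, 1e-4, 10_000 # 2 Hz drive, forward-Euler time step
x = 0.1                          # normalised width of the doped region

resistances = []
for k in range(steps):
    v = np.sin(2 * np.pi * f * k * dt)   # 1 V sinusoidal drive
    m = R_on * x + R_off * (1 - x)       # instantaneous memristance
    i = v / m
    # state drifts with the charge that has flowed; clamp to [0, 1]
    x = min(max(x + mu * R_on / D**2 * i * dt, 0.0), 1.0)
    resistances.append(m)

print(f"memristance swings {min(resistances):.0f} to {max(resistances):.0f} ohm")
```

Because the state depends on the integral of the current, the same drive voltage meets a different resistance on the way up and on the way down, which is the pinched-hysteresis signature that memristive logic and analog circuits exploit.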
Abstract:
As manufacturing technologies advance, ever more transistors can be fitted onto ICs. More complex circuits allow more computations to be performed per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat produced by the circuit. Excessive heat limits circuit operation. Techniques are therefore needed to reduce the energy consumption of circuits. A new research focus is small devices that monitor, for example, the human body, buildings or bridges. Such devices must have low energy consumption so that they can operate for long periods without battery recharging. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a supply voltage lower than the one the manufacturer originally designed them for. This slows down and impairs circuit operation. However, if reduced computing performance and reliability can be tolerated in the application, savings in energy consumption can be achieved. This Master's thesis examines Near-Threshold Computing from several perspectives: first on the basis of previous studies found in the literature, and then by investigating the application of Near-Threshold Computing in two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a representative picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real products of the manufacturing process are modelled by running numerous Monte Carlo simulations.
This inexpensive-to-manufacture technology, combined with Near-Threshold Computing, enables the production of low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces circuit energy consumption significantly. On the other hand, circuit speed degrades, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and enlarging the transistors of memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide grounds for deciding, in low-energy IC design, whether to use the normal supply voltage or to lower it, in which case the resulting slowdown and less reliable behaviour must be taken into account.
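The trade-off described above, quadratic energy savings against slower operation, can be sketched with the standard CV² dynamic-energy relation and the alpha-power-law delay model. The nominal voltage, threshold voltage and alpha below are illustrative values for a 130 nm-class process, not the thesis's measured results:

```python
# Back-of-the-envelope voltage-scaling trade-off.
# Illustrative values: 1.2 V nominal, 0.5 V near-threshold operating
# point, Vth = 0.35 V, velocity-saturation exponent alpha = 1.3.
V_nom, V_ntc, V_th, alpha = 1.2, 0.5, 0.35, 1.3

def energy(v):
    # dynamic switching energy per operation scales as C * V^2
    return v ** 2

def delay(v):
    # alpha-power law: gate delay ~ V / (V - Vth)^alpha
    return v / (v - V_th) ** alpha

e_save = 1 - energy(V_ntc) / energy(V_nom)
slowdown = delay(V_ntc) / delay(V_nom)
print(f"energy saving: {e_save:.0%}, slowdown: {slowdown:.1f}x")
```

With these numbers the per-operation energy drops by roughly 80% while gate delay grows several-fold, which is exactly the bargain the thesis evaluates: acceptable when throughput matters less than battery life.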
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power-gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
Abstract:
This work focuses on the modelling of catalytic gas-liquid reactions carried out in continuous packed beds. Catalytic gas-liquid reactions are among the most typical reactions in the chemical industry; packed-bed reactors are therefore treated here as one of the most popular alternatives when continuous operation is desired. Thanks to the large amount of catalyst per unit volume, they have a compact structure, no catalyst separation is needed, and with professional design the most advantageous flow pattern can be maintained in the reactor. Packed-bed reactors are attractive because of their lower investment and operating costs. Although packed beds are used intensively in industry, they are very challenging to model. This is because three phases coexist and the geometry of the system is complicated. The presence of several reactions makes the mathematical modelling even more demanding. Many simplifications therefore become necessary. The models typically involve several parameters that must be adjusted on the basis of experimental data. In this work, five different reaction systems were studied. The systems had been studied experimentally in our laboratory, with the goal of achieving high productivity and selectivity through an optimal choice of catalysts and operating conditions. Hydrogenation of citral, decarboxylation of fatty acids, direct synthesis of hydrogen peroxide, and hydrogenation of the sugar monomers glucose and arabinose were used as example systems. Although these systems had much in common, they also had unique features and therefore required tailored mathematical treatment. Citral hydrogenation was a system with a dominant main reaction producing citronellal and citronellol as the main products. The products are used as a lemon-scented component in perfumes, soaps and detergents, and as platform chemicals. Decarboxylation of stearic acid was a special case, for which a reaction route for producing long-chain hydrocarbons from fatty acids was sought.
A particularly high product selectivity was characteristic of this system. Process scale-up was also modelled for the decarboxylation reaction. The direct synthesis of hydrogen peroxide aimed at developing a simplified process for producing hydrogen peroxide by letting dissolved hydrogen and oxygen react directly in a suitable solvent on an active solid catalyst. In this system, three side reactions occur, yielding water as an undesired product. All of these reactions were modelled mathematically using dynamic mass balances. The goal of the hydrogenation of glucose and arabinose is to produce products with a high degree of refinement, namely sugar alcohols, through catalytic hydrogenation. For these two systems, the molar and energy balances were solved simultaneously in order to evaluate effects inside the porous catalyst particles. Momentum balances, which determine the flow conditions inside a chemical reactor, were in all modelling studies replaced by semi-empirical correlations for liquid holdup and pressure drop, and by the axial dispersion model for describing mixing effects. By adjusting the model parameters, the behaviour of the reactor could be described well. All experiments were carried out at laboratory scale. A large number of coupled effects coexisted: reaction kinetics including adsorption, catalyst deactivation, mass and heat transfer, and flow-related effects. Some of these effects could be studied separately (e.g. dispersion effects and side reactions). The influence of certain phenomena could sometimes be minimized through careful planning of the experiments. In this way, the simplifications in the models could be better justified. All the systems studied were industrially relevant. The development of new, simplified production technologies for existing or new chemical components is a gigantic undertaking. The studies presented here focused on one of the first stages of that techno-scientific expedition.
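As a small illustration of the axial dispersion model mentioned above, the classical closed-vessel (Danckwerts boundary condition) solution for a first-order reaction, as given for example by Levenspiel, shows how back-mixing lowers conversion relative to plug flow. The Damköhler and Péclet numbers used here are illustrative, not fitted to any of the five systems:

```python
import math

# Conversion of a first-order reaction under the closed-vessel axial
# dispersion model. Da = k*tau (Damkoehler number), Pe = u*L/D_ax
# (axial Peclet number). Analytical solution (Danckwerts / Levenspiel):
#   X = 1 - 4a e^{Pe/2} / [ (1+a)^2 e^{a Pe/2} - (1-a)^2 e^{-a Pe/2} ],
#   a = sqrt(1 + 4 Da / Pe).
def conversion_dispersion(Da, Pe):
    a = math.sqrt(1 + 4 * Da / Pe)
    num = 4 * a * math.exp(Pe / 2)
    den = ((1 + a) ** 2 * math.exp(a * Pe / 2)
           - (1 - a) ** 2 * math.exp(-a * Pe / 2))
    return 1 - num / den

Da = 2.0                          # illustrative
x_plug = 1 - math.exp(-Da)        # plug-flow limit (Pe -> infinity)
for Pe in (1, 10, 100):
    print(f"Pe = {Pe:>3}: X = {conversion_dispersion(Da, Pe):.3f}")
print(f"plug flow: X = {x_plug:.3f}")
```

As Pe increases the dispersion result approaches the plug-flow conversion from below; in the thesis the dispersion coefficient is one of the semi-empirical parameters adjusted against laboratory data.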
Abstract:
The problem of modelling the depth of anaesthesia is studied in this Master's thesis. EEG signals are analysed using nonlinear dynamics theory, and the obtained values are then classified. The main stages of this study are the following: data preprocessing; calculation of optimal embedding parameters for phase space reconstruction; obtaining reconstructed phase portraits of each EEG signal; formation of a feature set to characterise the obtained phase portraits; and classification of four different anaesthesia levels based on the previously estimated features. Classification was performed with linear and quadratic discriminant analysis, the k-nearest-neighbours method and online clustering. In addition, this work provides an overview of existing approaches to anaesthesia depth monitoring, a description of the basic concepts of nonlinear dynamics theory used in this thesis, and a comparative analysis of several classification methods.
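The core of the phase space reconstruction stage is a time-delay (Takens) embedding of the scalar EEG signal. The sketch below uses a synthetic test signal and fixed delay and dimension for illustration; in the thesis these embedding parameters are estimated from the data (e.g. by mutual-information and false-nearest-neighbour analysis):

```python
import numpy as np

# Time-delay embedding: map a scalar series x(t) to vectors
# [x(t), x(t + tau), ..., x(t + (m-1)*tau)] that trace out a
# reconstructed phase portrait.
def delay_embed(x, m, tau):
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Synthetic stand-in for an EEG channel: a noisy sinusoid.
t = np.arange(0, 20, 0.01)
rng = np.random.default_rng(0)
signal = np.sin(t) + 0.05 * rng.normal(size=len(t))

portrait = delay_embed(signal, m=3, tau=25)
print(portrait.shape)   # → (1950, 3)
```

Each row of `portrait` is one point of the reconstructed attractor; the feature set described in the abstract (and the subsequent discriminant-analysis or k-NN classification) is computed from such point clouds, one per EEG segment.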
Abstract:
A mathematical model is developed for gas-solids flows in circulating fluidized beds. An Eulerian formulation is followed, based on the two-fluid model approach, in which both the fluid and the particulate phases are treated as continua. The physical modelling is discussed, including the formulation of boundary conditions and a description of the numerical methodology. Results of numerical simulations are presented and discussed. The model is validated through comparison with experiment, and simulations are performed to investigate the effects of the solids viscosity on the flow hydrodynamics.
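For reference, the phase continuity equations of the standard two-fluid (Eulerian-Eulerian) formulation take the following textbook form; the thesis's exact closure terms and momentum equations may differ:

```latex
% Phase continuity equations of the two-fluid model; subscripts g and s
% denote the gas and solids phases, alpha the phase volume fractions.
\frac{\partial}{\partial t}\left(\alpha_g \rho_g\right)
  + \nabla \cdot \left(\alpha_g \rho_g \mathbf{u}_g\right) = 0,
\qquad
\frac{\partial}{\partial t}\left(\alpha_s \rho_s\right)
  + \nabla \cdot \left(\alpha_s \rho_s \mathbf{u}_s\right) = 0,
\qquad
\alpha_g + \alpha_s = 1 .
```

The solids viscosity investigated in the abstract enters through the stress tensor in the solids-phase momentum equation, which closes this system.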
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems, and the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting both the healthcare IT infrastructure and healthcare services. This thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding whether to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitters in a video. Furthermore, in a cloud computing environment such as Amazon EC2, the computing resources are more expensive as compared with the storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. To store all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them.
This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at group of pictures level.
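The computation-storage trade-off described above can be sketched as a simple break-even rule: keep a transcoded video cached as long as the expected cost of storing it until its next access is below the cost of re-transcoding it on a miss. All prices, sizes and access rates below are hypothetical, not Amazon's actual tariffs or the thesis's model:

```python
# Break-even sketch of the computation vs. storage trade-off.
# Assumption: accesses arrive at a steady rate, so the expected gap
# between accesses is 1 / accesses_per_month.
def keep_in_storage(size_gb, storage_price_gb_month, transcode_cost,
                    accesses_per_month):
    if accesses_per_month == 0:
        return False                       # never accessed again: drop it
    months_between_accesses = 1 / accesses_per_month
    storage_cost = size_gb * storage_price_gb_month * months_between_accesses
    return storage_cost < transcode_cost   # cheaper to store than to redo

# Popular video (100 accesses/month): storing until the next access
# costs far less than re-transcoding.
print(keep_in_storage(2.0, 0.023, 0.50, 100))   # True
# Cold video (one access every ~20 months): re-transcoding is cheaper.
print(keep_in_storage(2.0, 0.023, 0.50, 0.05))  # False
```

A production policy would also weight by predicted (not observed) popularity and by the decay of access rates over time, which is where the thesis's strategy goes beyond this sketch.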
Abstract:
Smartphones have become part and parcel of our lives, as mobility provides the freedom of not being bound by time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and operating systems, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis is an attempt to investigate whether cloud-based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud-based development platforms and tested their integrated cloud features. This research showed that cloud-based mobile development platforms may be able to reduce mobile fragmentation and make it possible to deliver a mobile application for different platforms from a single codebase.
Abstract:
The aim of the present set of longitudinal studies was to explore 3-7-year-old children's Spontaneous FOcusing on Numerosity (SFON) and its relation to early mathematical development. The specific goals were to capture in method and theory the distinct process by which children focus on numerosity as a part of their activities involving exact number recognition, and the individual differences in this process that may be informative in the development of more complex number skills. Over the course of conducting the five studies, fifteen novel tasks were progressively developed for the SFON assessments. In the tasks, the aim was to control for the confounding effects of insufficient number recognition, verbal comprehension, other procedural skills and working memory capacity. Furthermore, it was explored how children's individual differences in SFON are related to their development of number sequence, subitizing-based enumeration, object counting and basic arithmetic skills. The effect of social interaction on SFON was also tested. Study I covered the first phase of the 3-year longitudinal study with 39 children. It was investigated whether there were differences in 3-year-old children's tendency to focus on numerosity, and whether these differences were related to the children's development of cardinality recognition skills from the age of 3 to 4 years. It was found that the two groups of children, formed on the basis of the strength of their SFON tendency at the age of 3 years, differed in their development of recognising and producing small numbers. The children whose SFON tendency was very predominant developed faster in cardinality-related skills from the age of 3 to 4 years than the children whose SFON tendency was less predominant. Thus, children's development in cardinality recognition skills is related to their SFON tendency.
Studies II and III were conducted to investigate, firstly, children's individual differences in SFON and, secondly, whether children's SFON is related to their counting development. Altogether nine tasks were designed for the assessments of spontaneous and guided focusing on numerosity. The longitudinal data of 39 children in Study II, from the age of 3.5 to 6 years, showed individual differences in SFON at the ages of 4, 5 and 6 years, as well as stability in children's SFON across the tasks used at different ages. The counting skills were assessed at the ages of 3.5, 5 and 6 years. Path analyses indicated a reciprocal tendency in the relationship between SFON and counting development. In Study III, these results on the individual differences in SFON tendency, the stability of SFON across different tasks, and the relationship between SFON and mathematical skills were confirmed by a larger-scale cross-sectional study of 183 children aged 6.5 years on average (range 6;0-7;0 years). The significant amount of unique variance that SFON accounted for in number sequence elaboration, object counting and basic arithmetic skills remained statistically significant (partial correlations varying from .27 to .37) when the effects of non-verbal IQ and verbal comprehension were controlled. In addition, to confirm that the SFON tasks assess SFON tendency independently of enumeration skills, guided focusing tasks were used with children who had failed the SFON tasks. It was explored whether these children were able to proceed in tasks similar to the SFON tasks once they were guided to focus on number. The results showed that these children's poor performance in the SFON tasks was caused not by a deficiency in executing the tasks but by a lack of focusing on numerosity. The longitudinal Study IV of 39 children aimed at increasing knowledge of the associations between children's long-term SFON tendency, subitizing-based enumeration and verbal counting skills.
Children were tested twice at the age of 4-5 years on their SFON, and once at the age of 5 on their subitizing-based enumeration, number sequence production, and object counting skills. Results showed considerable stability in SFON tendency measured at different ages, and a positive direct association between SFON and number sequence production. The association between SFON and object counting skills was significantly mediated by subitizing-based enumeration. These results indicate that the associations between a child's SFON and the sub-skills of verbal counting may differ on the basis of how significant a role understanding the cardinal meanings of number words plays in learning these skills. The specific goal of Study V was to investigate whether it is possible to enhance 3-year-old children's SFON tendency, and thus start children's deliberate practice in early mathematical skills. Participants were 3-year-old children in Finnish day care. The SFON scores and cardinality-related skills of the experimental group of 17 children were compared to the corresponding results of the 17 children in the control group. The results show an experimental effect on SFON tendency and subsequent development in cardinality-related skills during the 6-month period from pretest to delayed posttest in the children with some initial SFON tendency in the experimental group. Social interaction thus has an effect on children's SFON tendency. The results of the five studies assert that within a child's existing mathematical competence, it is possible to distinguish a separate process, which refers to the child's tendency to spontaneously focus on numerosity. Moreover, there are significant individual differences in children's SFON at the age of 3-7 years. Moderate stability was found in this tendency across different tasks assessed both at the same age and at different ages. Furthermore, SFON tendency is related to the development of early mathematical skills.
The educational implications of the findings emphasise, first, the importance of regarding focusing on numerosity as a separate, essential process in the assessment of young children's mathematical skills. Second, the substantial individual differences in SFON tendency during the childhood years suggest that uncovering and modeling this kind of mathematically meaningful perceiving of the surroundings and tasks could be an efficient tool for promoting young children's mathematical development, and thus for preventing later failures in learning mathematical skills. It is proposed that focusing on numerosity be considered one potential sub-process of activities involving exact number recognition in future studies.
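The partial-correlation analysis mentioned above (relating SFON to counting skills while controlling for non-verbal IQ and verbal comprehension) can be sketched via the residual method: regress both variables on the controls and correlate the residuals. The data below are synthetic and the variable names merely echo the study's constructs:

```python
import numpy as np

# Partial correlation of x and y controlling for covariates Z,
# computed by correlating the residuals of least-squares regressions
# of x and y on Z (with an intercept).
def partial_corr(x, y, Z):
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic data: a control variable ("iq") inflates the raw
# SFON-skill correlation; partialling it out leaves the direct link.
rng = np.random.default_rng(1)
iq = rng.normal(size=200)
sfon = 0.5 * iq + rng.normal(size=200)
skill = 0.4 * sfon + 0.5 * iq + rng.normal(size=200)

raw = np.corrcoef(sfon, skill)[0, 1]
partial = partial_corr(sfon, skill, iq[:, None])
print(f"raw r = {raw:.2f}, partial r = {partial:.2f}")
```

The partial coefficient is smaller than the raw one because the shared dependence on the control variable has been removed, which is the logic behind the reported .27-.37 partial correlations.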
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent malaria infections, which are normally asymptomatic, and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control are developed that address the roles of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for the transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increased risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of the transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention.
Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In the development of this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data; model-based conclusions have to be drawn from uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty in model parameters. Finally, separate work modeling the within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells. The model hypothesizes that in severe malaria, due to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. An analysis of the model and of parameter identifiability using Markov chain Monte Carlo methods is presented.
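The kind of host-vector transmission model this thesis builds on can be illustrated with the minimal Ross-Macdonald system, far simpler than the thesis models (no drug resistance, age classes, immigration or controls). Parameter values are illustrative only:

```python
# Ross-Macdonald sketch: x = infected fraction of humans,
# z = infected fraction of mosquitoes.
#   dx/dt = m*a*b*z*(1 - x) - r*x
#   dz/dt = a*c*x*(1 - z) - g*z
# Basic reproduction number: R0 = m * a^2 * b * c / (r * g).
a, b, c = 0.3, 0.5, 0.5     # biting rate; mosquito->human, human->mosquito infectivity
m, r, g = 10.0, 0.01, 0.1   # mosquitoes per human, human recovery, mosquito death
R0 = m * a**2 * b * c / (r * g)

x, z, dt = 0.01, 0.0, 0.1   # small initial outbreak; forward-Euler step
for _ in range(50_000):
    dx = m * a * b * z * (1 - x) - r * x
    dz = a * c * x * (1 - z) - g * z
    x, z = x + dx * dt, z + dz * dt

print(f"R0 = {R0:.0f}, endemic human prevalence ~ {x:.2f}")
```

With R0 far above 1 the system settles at a high endemic equilibrium; the control measures analyzed in the thesis (nets, spraying, screening and treatment) act by pushing the effective reproduction number back toward and below 1.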