737 results for Soft computing
Abstract:
This thesis analyses the news articles of three websites from a linguistic perspective. The aim is to determine whether the news coverage of BBC, CNN and Fox News exhibits politically related preconceptions or bias, and how these manifest in practice in the language of the news articles. Building on critical discourse analysis, the thesis presents background on each news site (for example its structure and funding) as well as on media discourse and politics, so as to guarantee as thorough an implementation as possible of Norman Fairclough's three-stage method. The news sites are analysed by means of functional grammar, which suits critical discourse analysis, and other linguistic tools. The headlines of the entire corpus (404 articles) are analysed first, after which nine full articles on three different topics are analysed, so that one article per topic is analysed from each website. The primary analytical tools are those of the textual metafunction of systemic functional grammar (thematic structure). Tools of the ideational metafunction (transitivity), referential identity chains and lexical analysis are also employed. The underlying aim is to analyse the news sites comparatively, which makes the results of the analysis easier to observe and to justify. Based on previous research and general perceptions, the hypothesis is that CNN reports in a tone favourable to the Democratic Party and Fox News in a tone favourable to the Republican Party. The results ranged from findings that supported the hypothesis and findings that contradicted it to findings not sufficiently supported in either direction. The strongest results, however, support the hypothesis, and this thesis therefore concludes that news coverage is not impartial, at least in the case of these three websites. Moreover, for a few topics the coverage repeats a particular perspective so consistently that naturalisation of ideologies, in the sense of naturalisation theory, may be taking place. Based on the success of the methods used in this thesis, it is recommended that the analytical tools of the textual metafunction be used more widely. A meta-analysis is also recommended, in order to determine which analytical methods are best suited to which kinds of data.
Abstract:
As manufacturing technologies advance, ever more transistors can be fitted onto integrated circuits. More complex circuits make it possible to perform more computations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat generated by the circuit. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research focus is small devices that monitor, for example, the human body, buildings or bridges. Such devices must consume little energy so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the manufacturer originally designed them for. This slows down and impairs circuit operation. If, however, reduced computing performance and reliability can be tolerated, energy savings can be achieved. This master's thesis examines Near-Threshold Computing from several perspectives: first through earlier studies found in the literature, and then by investigating the application of the technique in two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a comprehensive picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology and model real products of the fabrication process by running numerous Monte Carlo simulations. This inexpensive technology, combined with Near-Threshold Computing, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing significantly reduces circuit energy consumption. On the other hand, circuit speed deteriorates, and the widely used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and upsizing the transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide guidance for low-energy IC design on whether to use the nominal supply voltage or to lower it, in which case the slower and less reliable behaviour of the circuit must be taken into account.
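To make the core trade-off concrete, here is a minimal first-order sketch in Python, assuming the classic E ∝ C·Vdd² dynamic-energy relation and an alpha-power-law delay model; the voltages, threshold voltage and exponent below are illustrative placeholders, not values from the thesis or its 130 nm case studies.

```python
# First-order sketch of the near-threshold energy/delay trade-off.
# E_dyn ~ C * Vdd^2 and delay ~ Vdd / (Vdd - Vth)^alpha (alpha-power law).
# All constants below are illustrative, not taken from the thesis.

def dynamic_energy(vdd, c_eff=1.0):
    """Dynamic energy per switching event, normalized (E = C * Vdd^2)."""
    return c_eff * vdd ** 2

def gate_delay(vdd, vth=0.35, alpha=1.3, k=1.0):
    """Alpha-power-law gate delay, normalized; grows sharply as Vdd -> Vth."""
    return k * vdd / (vdd - vth) ** alpha

nominal, near_threshold = 1.2, 0.5   # example supply voltages
for vdd in (nominal, near_threshold):
    print(f"Vdd={vdd:.2f} V  energy={dynamic_energy(vdd):5.2f}  "
          f"delay={gate_delay(vdd):6.2f}")

# Lowering Vdd from 1.2 V to 0.5 V cuts dynamic energy per operation by
# roughly (0.5/1.2)^2 ~ 0.17x, at the cost of a much longer gate delay.
```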
Abstract:
When machines are modeled in their natural working environment, collisions become a very important factor in simulation accuracy. Expanding the simulation to include the operating environment has made the need for a general collision model, able to handle a wide variety of cases, central to the development of simulation environments. The addition of the operating environment also changes the challenges for the collision modeling method: more simultaneous contacts with more objects occur in more complicated situations, which makes the real-time requirement harder to meet. Common problems in current collision modeling methods include dependency on geometry shape or mesh density, computational cost that grows exponentially with the number of contacts, the lack of a proper friction model, and failures in certain configurations such as closed kinematic loops. All of these problems mean that current modeling methods will fail in certain situations. A method that never fails in any situation is not realistic, but improvements can be made over the current methods.
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
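As a rough illustration of the agent-based run-time control described above, the sketch below shows the kind of per-region DVFS decision an on-chip agent might make; the voltage/frequency levels, utilization thresholds and power relation are assumptions for illustration, not the thesis's actual design.

```python
# Minimal sketch of a per-region DVFS decision made by a monitoring
# agent, assuming a discrete set of voltage/frequency pairs and the
# classic CMOS dynamic-power relation P_dyn ~ C * V^2 * f. The pairs
# and thresholds are illustrative placeholders.

VF_LEVELS = [(1.0, 1.0), (0.8, 0.7), (0.6, 0.4)]  # (V, f), normalized

def pick_level(utilization, high=0.8, low=0.3):
    """Choose a V/f level from recent core or link utilization."""
    if utilization > high:
        return VF_LEVELS[0]        # heavy load: full voltage and speed
    if utilization > low:
        return VF_LEVELS[1]        # moderate load: intermediate level
    return VF_LEVELS[2]            # light load: lowest level

def dynamic_power(v, f, c_eff=1.0):
    """Normalized dynamic power: P = C * V^2 * f."""
    return c_eff * v ** 2 * f

for util in (0.95, 0.5, 0.1):
    v, f = pick_level(util)
    print(f"util={util:.2f} -> V={v}, f={f}, P={dynamic_power(v, f):.3f}")
```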
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and on the impact of cloud computing technology on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting the healthcare IT infrastructure as well as improving healthcare services. This thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation, so transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus, IaaS clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular ones are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which keeps videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction transcoding. The proposed strategies are evaluated using a message-passing-interface-based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at the group-of-pictures level.
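As a rough illustration of the computation and storage trade-off described above, the sketch below keeps a transcoded video cached only while the expected cost of re-transcoding it exceeds the cost of storing it; the function name, prices, and popularity estimate are hypothetical, not taken from the thesis.

```python
# Sketch of the computation vs. storage trade-off: keep a transcoded
# video in the repository only while the expected cost of re-transcoding
# it for future requests exceeds the cost of storing it. All prices and
# the popularity estimate below are illustrative assumptions.

def keep_in_storage(size_gb, transcode_cost, expected_requests_per_month,
                    storage_price_gb_month=0.023):
    """True if storing for the next month is cheaper than re-transcoding."""
    storage_cost = size_gb * storage_price_gb_month
    retranscode_cost = transcode_cost * expected_requests_per_month
    return retranscode_cost >= storage_cost

# A popular video is worth keeping; a rarely watched one is not.
print(keep_in_storage(2.0, transcode_cost=0.10, expected_requests_per_month=5))    # True
print(keep_in_storage(2.0, transcode_cost=0.10, expected_requests_per_month=0.2))  # False
```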
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to one based on software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
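The sketch below illustrates the general shape of reactive VM provisioning combined with session admission control, in the spirit of the ARVUE and admission control contributions named above; the capacity figure, thresholds and function names are invented for the demonstration and are not the thesis's actual algorithms.

```python
# Minimal sketch of reactive VM provisioning with a simple admission
# check. VM capacity and utilization thresholds are assumed values.

import math

VM_CAPACITY = 100                  # concurrent sessions per VM (assumed)
SCALE_UP, SCALE_DOWN = 0.8, 0.3    # utilization thresholds (assumed)

def target_vm_count(sessions, current_vms):
    """Reactively grow or shrink the VM pool based on current load."""
    utilization = sessions / (current_vms * VM_CAPACITY)
    if utilization > SCALE_UP:
        # Provision enough VMs to bring utilization back under SCALE_UP.
        return math.ceil(sessions / (SCALE_UP * VM_CAPACITY))
    if utilization < SCALE_DOWN and current_vms > 1:
        # Consolidate: release VMs the current load does not need.
        return max(1, math.ceil(sessions / (SCALE_UP * VM_CAPACITY)))
    return current_vms

def admit(sessions, current_vms):
    """Admit a new session only if the pool is not already saturated."""
    return sessions + 1 <= current_vms * VM_CAPACITY

vms = 2
for load in (120, 260, 40):
    vms = target_vm_count(load, vms)
    print(f"load={load:3d} -> vms={vms}, admit next: {admit(load, vms)}")
```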
Abstract:
Smartphones have become part and parcel of our lives, with mobility providing the freedom of not being bound by time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and operating systems, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis investigates whether cloud-based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud-based development platforms and tested their integrated cloud features. The research showed that cloud-based mobile development platforms may be able to reduce mobile fragmentation and enable the use of a single codebase to deliver a mobile application for different platforms.
Abstract:
With a Sales and Operations Planning (S&OP) process, a company aims to manage demand and supply by planning and forecasting. The studied company uses an integrated S&OP process to improve its operations. The aim of this thesis is to develop this business process by finding the best possible way to manage soft information in S&OP, while also understanding the importance and types (assumptions, risks and opportunities) of soft information in S&OP. Soft information in S&OP helps to refine future S&OP planning by taking into account the uncertainties that affect the balance of long-term demand and supply (typically 12-18 months). A literature review was used to create a framework for a soft information management process in S&OP. No concrete way to manage soft information was found in the existing literature. Owing to this gap, the Knowledge Management literature, which deals with the same type of information management, was used as the basis for creating the framework. The framework defines a four-stage process for managing soft information in S&OP, including the required support systems. The first phase is collecting and acquiring soft information, which also includes categorization. Categorization is the cornerstone for identifying the different requirements that need to be taken into consideration when managing soft information in the S&OP process. The next phase focuses on storing the data; its purpose is to ensure that soft information is managed in a common support system in such a way that the following phase, sharing and application, can make it available to the S&OP users who need it. The goal of the last phase is to use the soft information to understand the assumptions and thoughts of users behind the numbers in S&OP plans. In this process the support system plays a key role: a support system such as an S&OP tool ensures that soft information is stored in the right places and kept up to date and relevant. The soft information management process in S&OP strives to improve the documentation of relevant soft information behind the S&OP plans in the S&OP support system. The process gives individuals the opportunity to review, comment on, and evaluate soft information in S&OP created by themselves or others. In the case company it was observed that soft information that was not properly documented and distributed caused mistrust towards the planning.
Abstract:
Smart home implementation in residential buildings promises to optimize energy usage and save a significant amount of energy simply through a better understanding of the user's energy usage profile. Apart from its energy optimisation prospects, this technology also aims to provide occupants with a significant amount of comfort and with remote control over home appliances, both at home and away. However, a smart home investment, just like any other kind of investment, requires adequate measurement and justification of the economic gains it could offer before it is realized. These economic gains may differ between occupants due to their inherent behaviours and tendencies. It is therefore pertinent to investigate the various behaviours and tendencies of occupants in different domains of interest, and to measure the value of the energy savings accrued by smart home implementations in those domains, in order to justify such economic gains. This thesis investigates two domains of interest (the rented apartment and the owned apartment) for primarily two behavioural tendencies (Finnish and German), obtained from observation and corroborated by interviews, to measure the payback time and Return on Investment (ROI) of their smart home implementations. Similar measures are also obtained for an identified Australian use case. The research findings reveal that building automation under the Finnish behavioural tendencies seems to offer a better ROI and payback time for smart home implementations.
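For readers unfamiliar with the two metrics, here is a worked sketch of simple payback time and lifetime ROI; the investment, savings and lifetime figures are made up for illustration and are not the thesis's measured values.

```python
# Worked sketch of the two metrics the thesis reports: simple payback
# time and ROI over the system lifetime. All figures are hypothetical.

def payback_years(investment, annual_savings):
    """Years until cumulative savings repay the initial investment."""
    return investment / annual_savings

def roi_percent(investment, annual_savings, lifetime_years):
    """Net gain over the lifetime, as a percentage of the investment."""
    net_gain = annual_savings * lifetime_years - investment
    return 100 * net_gain / investment

investment = 3000.0      # EUR, hypothetical smart home installation cost
annual_savings = 450.0   # EUR/year, hypothetical energy savings
lifetime = 15            # years

print(f"payback: {payback_years(investment, annual_savings):.1f} years")
print(f"ROI over {lifetime} y: "
      f"{roi_percent(investment, annual_savings, lifetime):.0f}%")
# payback ~6.7 years; ROI = (450*15 - 3000)/3000 = 125%
```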
Abstract:
Edible films based on gluten from four types of Brazilian wheat flour (two "semi-hard" and two "soft") were prepared, and their mechanical and barrier properties were compared with those of wheat gluten films made from vital gluten. Water vapor permeability, oxygen permeability, tensile strength, percent elongation at break, solubility in water and surface morphology were measured. The films from the "semi-hard" wheat flours showed water vapor permeability and solubility in water similar to films from vital gluten, and better tensile strength than the films from "soft" flours and vital gluten. The films from vital gluten had higher elongation at break and oxygen permeability, and lower solubility in water, than the films from the Brazilian "soft" wheat flours. Although vital gluten showed greater mechanical resistance, which is desirable for bakery products, Brazilian "semi-hard" wheat flours can be used instead of vital gluten for the purpose of developing gluten films, since they showed similar barrier and mechanical properties.
Abstract:
Power is still an issue in wearable computing applications today. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, in order to enable the future design of energy-efficient wireless sensors for context recognition in wearable computing applications. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices, each representative of its form factor. Power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; since some of its parameters are unknown, the measurements are adjusted using a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies between the wearable devices, meaning that other components, parameters or processes may also affect power consumption; further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) affect energy consumption in different ways. Finally, the paper gives recommendations, based on the energy model, for reducing energy consumption in healthcare wearable computing applications.
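A scenario-based energy model of the kind the paper develops can be sketched as a sum over active components of power draw times active duration; the component list and power figures below are placeholders, not the paper's measurements.

```python
# Sketch of a scenario-based energy model: the energy of one run of a
# usage scenario is E = sum_i P_i * t_i over the active components.
# The power figures and durations below are illustrative placeholders.

SCENARIO = {                 # component -> (power in mW, active seconds)
    "display": (400.0, 60),
    "cpu":     (250.0, 60),
    "wifi":    (300.0, 10),
}

def scenario_energy_joules(scenario):
    """E = sum_i P_i * t_i, converting mW * s to joules."""
    return sum(p_mw / 1000.0 * t_s for p_mw, t_s in scenario.values())

print(f"energy for one run: {scenario_energy_joules(SCENARIO):.1f} J")
# (0.4 * 60) + (0.25 * 60) + (0.3 * 10) = 24 + 15 + 3 = 42 J
```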
Abstract:
The demand for greater lean meat production in pigs has led to changes in the biochemical characteristics of the muscle, resulting in the colour abnormalities known as PSE and DFD meats. Because their functional properties are altered, these meats cause large economic losses. In this experiment, 946 samples of Longissimus dorsi m. were used: pork loins from the Dalland line, castrated males and females, 100 days of age, from a slaughterhouse located in the southern region of the country. The loins were classified on the basis of their L*24 h and pH24 h values. An incidence of 22.8% PSE, 1.0% DFD and 76.2% normal meat was found. These relatively high PSE figures show the need for controls in pre- and post-slaughter handling to maintain meat quality and to avoid the excessive economic losses that these abnormalities cause slaughterhouses.
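The L*24 h / pH24 h classification rule can be sketched as a pair of threshold tests; the cut-off values below are typical of the pork-quality literature, not necessarily the ones applied in this experiment.

```python
# Sketch of an L*(24 h) / pH(24 h) rule for sorting loins into PSE,
# DFD, and normal classes. Thresholds are common literature values,
# used here only for illustration.

def classify_pork(lightness_l24, ph24):
    if ph24 >= 6.0:                          # high ultimate pH: dark, firm, dry
        return "DFD"
    if lightness_l24 > 50 and ph24 < 5.8:    # pale and acidic: pale, soft, exudative
        return "PSE"
    return "normal"

print(classify_pork(55.0, 5.5))   # -> PSE
print(classify_pork(42.0, 6.2))   # -> DFD
print(classify_pork(46.0, 5.9))   # -> normal
```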
Abstract:
Foodborne disease caused by microorganisms is a public health problem. Minas soft cheese is a national product manufactured using simple technology; its high level of acceptance in the country makes its production an important economic activity. Many microorganisms may be present in foods, including the bacterium Escherichia coli (E. coli). Overall, E. coli is a harmless commensal bacterium; however, some strains may have pathogenic potential. Several outbreaks of foodborne disease associated with the consumption of contaminated cheese have been reported, and the presence of pathogenic strains of E. coli has increased. The objective of this study was to isolate pathogenic E. coli strains from Minas cheese sold in Rio de Janeiro, to evaluate their antimicrobial susceptibility, and to characterize them by Multiplex PCR. Thirty samples were analyzed and five strains of E. coli (EPEC) were identified. The assessment of antimicrobial susceptibility revealed that 40% of the isolates were resistant to ampicillin and 40% had intermediate resistance to the ampicillin-sulbactam combination. These findings are a warning signal to health authorities, since Minas cheese is a ready-to-eat food product and therefore should not pose health risks to the population.