973 results for Froude scaling
Abstract:
In recent decades, several alternatives have been proposed for the treatment of splenic trauma. The present study compared non-operative treatment with conservative surgery for splenic injury. The records of 136 patients with splenic trauma treated at the Emergency Unit of the Hospital das Clínicas da FMRP-USP (1986-1995) were analyzed retrospectively. The Injury Severity Score (ISS) and the Organ Injury Scaling (OIS) were used to define case severity. Patients were divided into two groups: group A (n=32), non-operative conservative treatment, and group B (n=104), conservative surgery. Mean ages, in years, were similar (A: 20.31 ± 12.43 vs. B: 25.02 ± 14.98; p>0.05). Males predominated in both groups. The two groups differed in etiology (p<0.01). Mean ISS did not differ significantly (A: 14.21 ± 8.67 vs. B: 19.44 ± 11.33; p>0.05). Complications occurred in 9.37% and 24.03% of groups A and B, respectively, but the difference was not significant (p>0.05). Mean hospital stay was 6.68 ± 5.65 and 9.24 ± 9.09 days for groups A and B, with no significant difference (p>0.05). We therefore conclude that non-operative treatment and conservative surgery for splenic trauma are equivalent approaches and are valid therapeutic options for less severe splenic injuries.
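The reported group comparison can be reproduced from the summary statistics alone. The sketch below applies a two-sided Welch t-test to the age data; the abstract does not state which test the authors used, so treating the variances as unequal is an assumption.

```python
from scipy.stats import ttest_ind_from_stats

# Reported age statistics: group A (non-operative, n=32) and
# group B (conservative surgery, n=104), mean ± SD in years.
t_stat, p_value = ttest_ind_from_stats(
    mean1=20.31, std1=12.43, nobs1=32,
    mean2=25.02, std2=14.98, nobs2=104,
    equal_var=False,  # Welch's t-test; an assumption, not stated in the abstract
)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05, consistent with the abstract
```

The resulting p-value is above 0.05, matching the abstract's conclusion that the mean ages were similar.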
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with a special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation based evaluation show that Honeycomb NoCs outperform their Mesh based counterparts in terms of network cost, system performance as well as energy efficiency.
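The leverage that DVFS and power gating offer follows from the standard first-order CMOS power model (a textbook relation, not a formula from this thesis):

```latex
P_{\mathrm{dyn}} = \alpha\, C_{\mathrm{eff}}\, V_{dd}^{2}\, f, \qquad
P_{\mathrm{leak}} = V_{dd}\, I_{\mathrm{leak}}
```

Lowering the supply voltage $V_{dd}$ reduces dynamic power quadratically (and frequency scaling adds a further linear factor), while power gating cuts the leakage term $V_{dd}\,I_{\mathrm{leak}}$ entirely for gated blocks.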
Abstract:
Book review
Abstract:
In the present work, liquid-solid flow at industrial scale is modeled using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent 14.5. There are few studies in the literature on liquid-solid flow at industrial scale, and no information is available for the particular case with modified geometry. The aim of this thesis is to describe the strengths and weaknesses of the multiphase models when a large-scale liquid-solid flow application is studied, including the boundary-layer characteristics. The results indicate that the selection of the most appropriate multiphase model depends on the flow regime; careful estimation of the flow regime is therefore recommended before modeling, and a computational tool was developed for this purpose during the thesis. The homogeneous multiphase model is valid only for homogeneous suspensions; the discrete phase model (DPM) is recommended for homogeneous and heterogeneous suspensions where the pipe Froude number is greater than 1.0; while the mixture and Eulerian models are also able to predict flow regimes where the pipe Froude number is smaller than 1.0 and particles tend to settle. With increasing material density ratio and decreasing pipe Froude number, the Eulerian model gives the most accurate results, because it does not include simplifications of the Navier-Stokes equations like the other models. In addition, the results indicate that the likely location of erosion in the pipe depends on the material density ratio. Sedimentation of particles can cause erosion and increase the pressure drop as well. In the pipe bend, secondary flows in particular, perpendicular to the main flow, affect the location of erosion.
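The model-selection criterion above can be sketched as a small helper. The Froude threshold of 1.0 and the per-regime model recommendations come from the abstract; the densimetric form of the pipe Froude number is one common definition, assumed here since the abstract does not give the exact formula, and the function names are illustrative.

```python
import math

def pipe_froude_number(velocity, diameter, rho_solid, rho_liquid, g=9.81):
    """Densimetric pipe Froude number (one common definition; an assumption,
    since the abstract does not state the exact formula used)."""
    return velocity / math.sqrt(g * diameter * (rho_solid / rho_liquid - 1.0))

def recommended_models(froude):
    """Multiphase-model choice following the criteria reported in the abstract."""
    if froude > 1.0:
        # Homogeneous/heterogeneous suspension: DPM applicable,
        # mixture and Eulerian models also valid.
        return ["DPM", "mixture", "Eulerian"]
    # Settling regime: only the mixture and Eulerian models capture it.
    return ["mixture", "Eulerian"]

# Example: sand-water slurry in a 0.2 m pipe at 3 m/s.
fr = pipe_froude_number(velocity=3.0, diameter=0.2,
                        rho_solid=2650.0, rho_liquid=1000.0)
print(fr, recommended_models(fr))
```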
Abstract:
In this work, superconducting YBa2Cu3O6+x (YBCO) thin films have been studied, with the experimental focus on the anisotropy of BaZrO3 (BZO) doped YBCO thin films and the theoretical focus on modelling flux pinning by numerically solving the Ginzburg-Landau equations. The structural properties of undoped YBCO thin films grown on NdGaO3 (NGO) and MgO substrates were also investigated. The thin-film samples were made by pulsed laser ablation on single-crystal substrates. The structural properties of the films were characterized by X-ray diffraction and atomic force microscopy. The superconducting properties were investigated with a magnetometer and with transport measurements in pulsed magnetic fields up to 30 T. Flux pinning was modelled by restricting the value of the order parameter inside the columnar pinning sites and then solving the Ginzburg-Landau equations numerically with these restrictions in place. The computations were done with a parallel code on a supercomputer. The YBCO thin films were found to develop microcracks when grown on NGO or MgO substrates; in both cases, microcrack formation was connected to the structure of the films, and the microcracks can be avoided by careful optimization of the deposition parameters and the film thickness. BZO doping of the YBCO thin films was found to decrease the effective electron mass anisotropy, as seen by fitting the Blatter scaling to the angle dependence of the upper critical field. The Ginzburg-Landau simulations were able to reproduce the measured magnetic field dependence of the critical current density for BZO-doped and undoped YBCO. The simulations showed that, in addition to the large density, the large size of the BZO nanorods is a key factor behind the change in the power-law behaviour between BZO-doped and undoped YBCO.
Additionally, the Ginzburg-Landau equations were solved for type I thin films where giant vortices were seen to appear depending on the film thickness. The simulations predicted that singly quantized vortices are stable in type I films up to quite large thicknesses and that the size of the vortices increases with decreasing film thickness, in a way that is similar to the behaviour of the interaction length of Pearl vortices.
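The Blatter scaling used above to extract the mass anisotropy rescales angle-dependent quantities with the standard anisotropic Ginzburg-Landau factor (the standard form is given here, assuming the angle $\theta$ is measured from the c-axis and $\gamma^{2} = m_{c}/m_{ab}$; the thesis may use a different convention):

```latex
\epsilon(\theta) = \sqrt{\cos^{2}\theta + \gamma^{-2}\sin^{2}\theta}, \qquad
B_{c2}(\theta) = \frac{B_{c2}^{\parallel c}}{\epsilon(\theta)}
```

Fitting this angular dependence to the measured upper critical field yields the effective anisotropy parameter $\gamma$, whose decrease under BZO doping is the result reported above.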
Abstract:
Advancements in IC processing technology have led to the innovation and growth happening in the consumer electronics sector and the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. Technology scaling and the increase in power density have a direct effect on the rise in on-chip temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which employs noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time are essential. In this regard, the spatial temperature profile of the global Cu nanowire for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented.
Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC is presented, and it is shown that the proposed mapping algorithm reduces the effective area subjected to high temperatures when compared to the state of the art.
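The flavour of thermal-aware mapping can be sketched with a simple greedy heuristic: place the most power-hungry tasks on tiles with the best heat-removal prospects, here approximated by distance to the chip edge. This heuristic, and all names in it, are illustrative assumptions; it is not the dissertation's actual mapping algorithm.

```python
def thermal_aware_map(task_power, mesh_w, mesh_h):
    """Greedy sketch: highest-power tasks go to the tiles closest to the
    chip edge (a crude proxy for better lateral heat spreading).
    Illustrative only, not the dissertation's algorithm."""
    tiles = [(x, y) for x in range(mesh_w) for y in range(mesh_h)]
    # Rank tiles by distance to the nearest edge (smaller = preferred).
    tiles.sort(key=lambda t: min(t[0], t[1], mesh_w - 1 - t[0], mesh_h - 1 - t[1]))
    # Rank tasks by power, descending.
    tasks = sorted(range(len(task_power)), key=lambda i: -task_power[i])
    return {task: tile for task, tile in zip(tasks, tiles)}

# Four tasks on a 3x3 mesh: the hot tasks land on edge tiles,
# leaving the thermally worst centre tile (1, 1) unused.
mapping = thermal_aware_map([0.4, 1.2, 0.9, 0.1], mesh_w=3, mesh_h=3)
```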
Abstract:
Papermaking is often disturbed by unwanted compounds that can form deposits on process surfaces, which in turn can disrupt paper production and degrade paper quality. In addition to wood-resin deposits, stone-like scales of sparingly soluble salts are common. In everyday life, lime scale in coffee makers and kettles is an example of a similar problem. In the pulp and paper industry, one of the most problematic compounds is calcium oxalate; this salt is almost insoluble in water and its deposits are very difficult to remove. Calcium oxalate is also known as one of the causes of kidney stones in humans. Wood, and especially bark, always contains a certain amount of oxalate, but a larger source is the oxalic acid formed when pulp is bleached with oxidizing chemicals, e.g. hydrogen peroxide. Calcium oxalate forms when the oxalic acid reacts with calcium entering the process with the raw water, the wood, or various additives. In this thesis, factors affecting the formation of oxalic acid and the precipitation of calcium oxalate were investigated by means of bleaching and precipitation experiments. The research focused especially on different ways of preventing deposit formation in the manufacture of wood-containing paper. The results of this thesis show that the formation of oxalic acid and the precipitation of calcium oxalate can be influenced by process-engineering and wet-end chemistry methods. Careful debarking of the wood, controlled conditions during alkaline peroxide bleaching, careful management and control of other dissolved and colloidal substances, and the use of tailored deposit-control chemistry are the key factors. The results can be applied when designing bleaching sequences for different pulps and when solving problems caused by calcium oxalate. The research methods used in the precipitation studies and in the evaluation of additives can also be applied in other fields, e.g. the brewing and sugar industries, where calcium oxalate problems are common.
Abstract:
Adapting and scaling up agile concepts, which are characterized by iterative, self-directed, customer-value-focused methods, may not be a simple endeavor. This thesis concentrates on studying the challenges in a large-scale agile software development transformation in order to enhance understanding of, and bring insight into, the underlying factors behind such emerging challenges. The topic is approached through the concepts of agility and of different methods compared to traditional plan-driven processes, complex adaptive systems theory, and the impact of organizational culture on agile transformation efforts. The empirical part was conducted as a qualitative case study. The internationally operating software development case organization had a year of experience with an agile transformation effort, during which it had also undergone organizational realignment. The primary data collection was conducted through semi-structured interviews supported by participatory observation. As a result, the identified challenges were categorized under four broad themes: organizational, management, team dynamics, and process related. The identified challenges indicate that agility is a multifaceted concept. Agile practices may bring visibility to issues, many of which are embedded in the organizational culture or in the management style. Viewing software development as a complex adaptive system could facilitate understanding of the underpinning philosophy and eventually help solve the issues: interactions are more important than processes, and solving a complex problem, such as novel software development, requires constant feedback and adaptation to changing requirements. Furthermore, an agile implementation seems to be unique in nature, and the agents engaged in the interaction are a pivotal part of successfully achieving agility.
If agility is not a strategic choice for the whole organization, additional issues may arise due to different ways of working in different parts of the organization. Lastly, detailed suggestions to mitigate the challenges of the case organization are provided.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding, which is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications.
The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
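The reactive provisioning loop with admission control described above can be sketched as follows. The thresholds, capacities, and function names are illustrative assumptions; they are not the actual parameters of ARVUE or the thesis' admission controllers.

```python
def provision(vms, per_vm_capacity, sessions, scale_up=0.8, scale_down=0.3):
    """Reactive sketch: scale out when utilization is high, consolidate
    when it is low. Thresholds are illustrative assumptions."""
    utilization = sessions / (vms * per_vm_capacity)
    if utilization > scale_up:
        vms += 1            # over-utilized: provision another VM
    elif utilization < scale_down and vms > 1:
        vms -= 1            # under-utilized: consolidate onto fewer VMs
    return vms

def admit(sessions, vms, per_vm_capacity):
    """Session-based admission control sketch: reject new sessions once
    the provisioned VMs are at capacity."""
    return sessions < vms * per_vm_capacity

# With 2 VMs of capacity 100 and 190 active sessions, the controller
# scales out before the admission controller starts rejecting sessions.
new_vms = provision(vms=2, per_vm_capacity=100, sessions=190)
```

The two mechanisms are complementary: provisioning reacts to sustained load changes, while admission control protects already-provisioned VMs from transient overload.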
Abstract:
The present study examined the floristic composition of three fragments of Araucaria Forest (AF) in the Planalto Catarinense region of southern Brazil, as well as the floristic contextualization of these areas in relation to other remnant AF sites. Three AF fragments at different altitudes were analyzed in the municipalities of Campos Novos, Lages, and Painel. Fifty 200 m² plots were examined in each fragment and all trees with CBH (circumference at breast height) > 15.7 cm were identified. In order to floristically contextualize the study fragments, comparisons were made with other remnant AF sites by way of dendrograms and NMDS (non-metric multidimensional scaling). Environmental and spatial variables were plotted on the diagram produced by the NMDS to evaluate their influence on the floristic patterns encountered. The forest fragments studied demonstrated high floristic heterogeneity, indicating that AFs cannot be considered homogeneous formations, and they could be classified into three phytogeographical categories: i) high-altitude areas influenced by cloud cover/fog, including the Painel region; ii) areas of lower altitude and higher mean annual temperatures situated in the Paraná River basin; and iii) areas situated in the Paraná and Upper-Uruguay river basins and the smaller basins draining directly into the southern Atlantic, near Campos Novos and Lages. The environmental variables most highly correlated with species substitutions among the sites were altitude, mean annual temperature, and the mean temperature of the wettest quarter.
Abstract:
In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to worsen, because the process parameters are optimized for digital transistors and the scaling involves the reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, reuse of IP, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether it is possible to work around these obstacles by using floating-gate transistors (FGTs), and the problems associated with practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a charge-based built-in non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide thickness in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors intended for devices that operate with higher supply voltages than general-purpose devices. However, in practice the technology scaling poses several challenges, which are addressed in this thesis.
To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected (256 FGT synapses) two-layer spiking neural network in which the adaptive properties of the FGT are taken advantage of. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
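For reference, the standard pair-based exponential STDP rule (which compact analog realizations like the one above typically approximate) can be sketched as follows; the time constants and amplitudes are illustrative assumptions, not values from the thesis.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight update: potentiate when the presynaptic spike
    precedes the postsynaptic one (dt = t_post - t_pre > 0, in ms),
    depress otherwise. Constants are illustrative, not from the thesis."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

The exponential decay means that closely timed spike pairs dominate learning, which is what makes a compact analog (FGT-based) realization attractive: the decay can be implemented directly by device physics rather than by digital arithmetic.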
Abstract:
Performance measurement produces information about the operation of business processes. On the basis of this information, the performance of the company can be monitored and improved. A balanced performance measurement system can monitor performance from several perspectives, and business processes can be led according to the company strategy. A major part of a company's costs originates from purchased goods and services, which are the output of the buying process, emphasising the importance of reliable performance measurement of the purchasing process. In this study, the theory of balanced performance measurement is reviewed and a framework for a purchasing process performance measurement system is designed. The designed balanced performance measurement system for the purchasing process is tested in a case company, paying attention to the available data and to other environmental enablers. The system is tested and improved during the test period, with particular attention paid to the definition and scaling of objectives; the development initiatives found are carried out especially in the scaling of indicators. Finally, the results of the study are evaluated, and conclusions and additional research areas are proposed.
Abstract:
This thesis presented Android as a hardware and software platform and described how the user interface of an Android game application can be kept consistent across different displays by means of scaling factors and anchoring. The second part of the work covered simple ways to improve the performance of game applications. Of these, a low-resolution drawing buffer and the culling of off-screen objects were selected for more detailed measurements. The selected methods affected the performance of the demo application considerably. The work was limited to Android programming in Java without external libraries, so the results can easily be applied to as many use cases as possible.
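The scaling-factor-and-anchoring idea can be sketched in a few lines: a uniform scale factor derived from a reference resolution, plus element positions expressed as a screen-fraction anchor with an offset in scaled reference pixels. The reference resolution and all names here are illustrative assumptions (the thesis implements this in Java on Android).

```python
def ui_transform(ref_w, ref_h, screen_w, screen_h):
    """Uniform scale factor plus anchored placement (illustrative sketch)."""
    # Uniform scale preserves the aspect ratio of UI elements.
    scale = min(screen_w / ref_w, screen_h / ref_h)

    def place(anchor_x_frac, anchor_y_frac, offset_x, offset_y):
        # Anchor as a fraction of the screen, offset in scaled reference pixels.
        return (screen_w * anchor_x_frac + offset_x * scale,
                screen_h * anchor_y_frac + offset_y * scale)

    return scale, place

# A 1280x720 reference layout rendered on a 1920x1080 display:
scale, place = ui_transform(1280, 720, 1920, 1080)  # scale == 1.5
# A button anchored 100x50 reference pixels inside the bottom-right corner:
x, y = place(1.0, 1.0, -100, -50)
```

Because positions are stored as anchor fractions plus scaled offsets, the same layout lands in visually equivalent places on any display size.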
Abstract:
This thesis presents point-contact measurements between superconductors (Nb, Ta, Sn, Al, Zn) and ferromagnets (Co, Fe, Ni) as well as non-magnetic metals (Ag, Au, Cu, Pt). The point contacts were fabricated using the shear method. The differential resistance of the contacts was measured either in liquid He at 4.2 K or in vacuum in a dilution refrigerator at varying temperatures down to 0.1 K. The contact properties were investigated as a function of size and temperature. The measured Andreev-reflection spectra were analysed in the framework of the BTK model, a three-parameter model that describes current transport across a superconductor - normal conductor interface. The original BTK model was modified to include the effects of spin polarization or the finite lifetime of the Cooper pairs. Our polarization values for the ferromagnets at 4.2 K agree with the literature data, but the analysis was ambiguous because the experimental spectra, both with ferromagnets and with non-magnets, could be described equally well either with spin polarization or with finite-lifetime effects in the BTK model. With the polarization model the Z parameter varies from almost 0 to 0.8, while the lifetime model produces Z values close to 0.5. Measurements at lower temperatures partly lift this ambiguity, because the magnitude of thermal broadening is small enough to separate lifetime broadening from polarization. The reduced magnitude of the superconducting anomalies for Zn-Fe contacts required an additional modification of the BTK model, which was implemented as a scaling factor. Adding this parameter led to reduced polarization values. However, reliable data are difficult to obtain because different parameter sets produce almost identical spectra.