995 results for Scaling process


Relevance: 70.00%

Abstract:

A Software-as-a-Service (SaaS) can be delivered in composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled (replicated or deleted) to accommodate the user load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. Conversely, when the load is low, some instances may need to be deleted to avoid resource underutilisation. It is therefore important to determine which components to scale such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. A hybrid genetic algorithm is therefore proposed that exploits problem-specific knowledge and searches for the best combination of scaling actions for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
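
As a rough illustration of the scaling-plan search described above, the sketch below evolves replica counts per component with a fitness that penalizes unmet load and resource cost. It is a minimal genetic algorithm, not the paper's hybrid method, and all names and numbers (CAPACITY, COST, LOAD, the penalty weight) are invented for the sketch:

```python
# Minimal GA sketch for a composite-SaaS scaling plan (illustrative only).
import random

CAPACITY = [100, 80, 120]   # requests/s one instance of each component handles
COST     = [1.0, 1.5, 2.0]  # relative resource cost per instance
LOAD     = [250, 90, 400]   # current demand per component

def fitness(plan):
    # Penalise unmet load heavily and resource cost lightly.
    score = 0.0
    for n, cap, cost, load in zip(plan, CAPACITY, COST, LOAD):
        score -= cost * n                        # resource cost
        score -= 10.0 * max(0, load - n * cap)   # unmet load penalty
    return score

def mutate(plan):
    i = random.randrange(len(plan))
    child = list(plan)
    child[i] = max(1, child[i] + random.choice([-1, 1]))  # replicate or delete
    return child

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(1, 6) for _ in CAPACITY] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
print("instances per component:", max(pop, key=fitness))
```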

Relevance: 60.00%

Abstract:

Power dissipation and tolerance to process variations pose conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for process tolerance can be detrimental to power dissipation. However, for certain signal processing systems, such as those used in color image processing, we noted that effective trade-offs can be achieved between Vdd scaling, process tolerance and output quality. In this paper we demonstrate how these trade-offs can be effectively utilized in the development of novel low-power, variation-tolerant architectures for color interpolation. The proposed architecture supports a graceful degradation in the PSNR (Peak Signal to Noise Ratio) under aggressive voltage scaling as well as extreme process variations in sub-70nm technologies. This is achieved by exploiting the fact that some computations are more important and contribute more to the PSNR improvement than others. The computations are mapped to the hardware in such a way that only the less important computations are affected by Vdd scaling and process variations. Simulation results show that even at a scaled voltage of 60% of the nominal Vdd value, our design provides reasonable image PSNR with 69% power savings.
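
For reference, PSNR is the quality metric in which the trade-offs above are expressed. A minimal computation, assuming 8-bit images (the random test image is purely illustrative), looks like this:

```python
# PSNR between a reference image and a degraded reconstruction.
import numpy as np

def psnr(reference, degraded, peak=255.0):
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```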

Relevance: 60.00%

Abstract:

We investigate intensity variations and energy deposition in five coronal loops in active region cores, selected for their strong variability in the AIA/SDO 94 Å intensity channel. We isolate the hot Fe XVIII and Fe XXI components of the 94 Å and 131 Å channels by modeling and subtracting the "warm" contributions to the emission. HMI/SDO data allow us to focus on "inter-moss" regions in the loops. The detailed evolution of the inter-moss intensity time series reveals loops that are impulsively heated in a mode compatible with a nanoflare storm, with a spike in the hot 131 Å signal leading and the other five EUV emission channels following in progressive cooling order. A sharp increase in electron temperature tends to follow closely after the hot 131 Å signal, confirming the impulsive nature of the process; a cooler process of growing emission measure follows more slowly. The Fourier power spectra of the hot 131 Å signals, when averaged over the five loops, present three scaling regimes with break frequencies near 0.1 min⁻¹ and 0.7 min⁻¹. The low-frequency regime corresponds to 1/f noise, the intermediate regime indicates a persistent scaling process and the high frequencies show white noise. Very similar results are found for the energy dissipation in a 2D "hybrid" shell model of loop magneto-turbulence, based on reduced magnetohydrodynamics, that is compatible with nanoflare statistics. We suggest that such turbulent dissipation is the energy source for our loops.
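
A minimal sketch of the kind of spectral analysis described: split the periodogram at the reported break frequencies and fit a power-law slope per band. The cadence, duration and synthetic signal below are invented stand-ins, not the AIA data:

```python
# Estimate spectral scaling exponents in three frequency bands
# (breaks near 0.1 and 0.7 min^-1, as reported above).
import numpy as np

dt = 0.2                                      # cadence in minutes (assumed)
t = np.arange(0, 400, dt)
signal = np.cumsum(np.random.randn(t.size))   # placeholder red-noise series

freq = np.fft.rfftfreq(t.size, d=dt)[1:]      # cycles per minute, drop f = 0
power = np.abs(np.fft.rfft(signal - signal.mean()))[1:] ** 2

for lo, hi in [(freq.min(), 0.1), (0.1, 0.7), (0.7, freq.max())]:
    band = (freq >= lo) & (freq < hi)
    slope = np.polyfit(np.log10(freq[band]), np.log10(power[band]), 1)[0]
    print(f"{lo:.2g}-{hi:.2g} min^-1: power-law slope {slope:.2f}")
```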

Relevance: 60.00%

Abstract:

Facing an aging society, in which there is a large gap between generations and the elderly are often neglected, the Portuguese Red Cross (Delegation of Vila Nova de Gaia) created the project A+: grandparents at school, which encourages intergenerational work. After observing the positive results of the pilot project, the A+ team decided that the project has the potential to be scaled to a national level, enabling it to contribute to the integration of older people into society, as well as to positive changes in individuals' attitudes toward the elderly. This work project proposes a method to determine whether A+ can be scaled and what the most efficient way to do so is, establishing a scaling process that can be used by any non-profit organization.

Relevance: 60.00%

Abstract:

To continuously improve the performance of metal-oxide-semiconductor field-effect-transistors (MOSFETs), innovative device architectures, gate stack engineering and mobility enhancement techniques are under investigation. In this framework, new physics-based models for Technology Computer-Aided Design (TCAD) simulation tools are needed to accurately predict the performance of upcoming nanoscale devices and to provide guidelines for their optimization. In this thesis, advanced physically-based mobility models for ultrathin body (UTB) devices with either planar or vertical architectures, such as single-gate silicon-on-insulator (SOI) field-effect transistors (FETs), double-gate FETs, FinFETs and silicon nanowire FETs, integrating strain technology and high-κ gate stacks, are presented. The effective mobility of the two-dimensional electron/hole gas in a UTB FET channel is calculated taking into account its tensorial nature and the quantization effects. All the scattering events relevant for thin silicon films and for high-κ dielectrics and metal gates have been addressed and modeled for UTB FETs on differently oriented substrates. The effects of mechanical stress on (100) and (110) silicon band structures have been modeled for a generic stress configuration.

Performance will also derive from heterogeneity, coming from the increasing diversity of functions integrated on complementary metal-oxide-semiconductor (CMOS) platforms. For example, new architectural concepts are of interest not only to extend the FET scaling process, but also to develop innovative sensor applications. Benefiting from properties such as a large surface-to-volume ratio and extreme sensitivity to surface modifications, silicon-nanowire-based sensors are gaining special attention in research. In this thesis, a comprehensive analysis of the physical effects playing a role in the detection of gas molecules is carried out by TCAD simulations combined with interface characterization techniques. The complex interaction of charge transport in silicon nanowires of different dimensions with interface trap states and remote charges is addressed to correctly reproduce experimental results of recently fabricated gas nanosensors.

Relevance: 60.00%

Abstract:

In this dissertation, two different problems were addressed. First, within the priority program "Kolloidverfahrenstechnik" (colloid process engineering) and in cooperation with the group of Prof. Dr. Heike Schuchmann at KIT in Karlsruhe, the encapsulation of silica nanoparticles in a PMMA shell by miniemulsion polymerization was developed and the scale-up process using high-pressure homogenizers was advanced. Second, various fluorinated nanoparticles were generated by the miniemulsion process and their behaviour in cells was investigated.

Silica particles were successfully encapsulated by miniemulsion polymerization via two different processes. In the first method, modified silica particles were dispersed in an MMA monomer phase and silica-laden droplets were then generated by the standard miniemulsion process; these were polymerized into composite particles. In encapsulation via the fission/fusion process, the hydrophobized silica particles were introduced into pre-existing monomer droplets through fission and fusion events, and the droplets were subsequently polymerized. To disperse hydrophilic silica in a hydrophobic monomer, the silica particles first had to be modified. This was done, among other approaches, by chemically grafting 3-methacryloxypropyltrimethoxysilane onto the surface of the silica particles. The hydrophilic silica particles were also physically modified by adsorption of CTMA-Cl. By varying, among other things, the encapsulation process, the amount of silica, the type and amount of surfactant and the comonomers, composite particles with different morphologies, sizes and degrees of filling were obtained.

Fluorinated nanoparticles were successfully synthesized via miniemulsion polymerization. Fluorinated acrylates, fluorinated methacrylates and fluorinated styrene served as monomers, and fluorinated nanoparticles could be produced from each of these three groups. For more detailed investigations, 2,3,4,5,6-pentafluorostyrene, 3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,10-heptadecafluorodecyl methacrylate and 1H,1H,2H,2H-perfluorodecyl acrylate were selected as monomers. Perfluoromethyldecalin was used as the hydrophobe to suppress Ostwald ripening. The most stable miniemulsions were again generated with the ionic surfactant SDS; with increasing SDS content dissolved in the continuous phase, a decrease in particle size was observed. In addition to the homopolymer particles, copolymer particles with acrylic acid were also successfully synthesized. The behaviour of the fluorinated particles in cells was also examined: the fluorinated particles showed no toxic behaviour. The adsorption of proteins from human serum was investigated by ITC measurements.

It was thus shown that miniemulsion polymerization is a versatile and effective technique for generating hybrid nanoparticles with different morphologies as well as surface-functionalized nanoparticles.

Relevance: 60.00%

Abstract:

In this work, the renewable raw material wheat straw was used to produce the biopolymer polyhydroxybutyrate (PHB). As a lignocellulose, wheat straw contains a high proportion of glucose and xylose in the form of cellulose and hemicellulose. Owing to the complex structure, with lignin as the third main component, these sugars can only be recovered by means of a pretreatment. To this end, a thermochemical pretreatment process using dilute nitric acid (up to 1% v/v) was established at semi-technical (125 l reactor) and technical (425 l reactor) scale and optimized with respect to various process parameters (treatment temperature, acid concentration, etc.). No mechanical pretreatment was used. The pretreated biomass was then hydrolysed enzymatically, and the PHB producer Cupriavidus necator DSM 545 was employed to synthesize PHB from the released sugars.

By optimizing the pretreatment, up to 90% of the glucose and 82% of the xylose could be released from the straw as monomers and oligomers after enzymatic hydrolysis. A successful transfer of the pretreatment process to the 425 l reactor was also demonstrated. High cell densities and PHB contents of up to 38% were achieved in the resulting sugar hydrolysates; a prior, cost-intensive purification of the hydrolysates was not necessary. In addition, it was shown that the residues remaining after enzymatic hydrolysis, cell culture and PHB extraction have sufficient potential for biogas production.

Relevance: 40.00%

Abstract:

Inference for latent feature models is inherently difficult as the inference space grows exponentially with the size of the input data and number of latent features. In this work, we use Kurihara & Welling (2008)'s maximization-expectation framework to perform approximate MAP inference for linear-Gaussian latent feature models with an Indian Buffet Process (IBP) prior. This formulation yields a submodular function of the features that corresponds to a lower bound on the model evidence. By adding a constant to this function, we obtain a nonnegative submodular function that can be maximized via a greedy algorithm that obtains at least a one-third approximation to the optimal solution. Our inference method scales linearly with the size of the input data, and we show the efficacy of our method on the largest datasets currently analyzed using an IBP model.
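
One algorithm compatible with the one-third guarantee mentioned above is the deterministic "double greedy" pass for unconstrained nonnegative submodular maximization (Buchbinder et al.); whether this is the exact routine used is an assumption. The toy coverage objective below stands in for the paper's evidence lower bound:

```python
# Deterministic double greedy: one pass over the ground set,
# growing X from the empty set and shrinking Y from the full set.
def double_greedy(f, ground_set):
    X, Y = set(), set(ground_set)
    for e in ground_set:
        gain_add = f(X | {e}) - f(X)    # marginal gain of adding e to X
        gain_del = f(Y - {e}) - f(Y)    # marginal gain of removing e from Y
        if gain_add >= gain_del:
            X.add(e)
        else:
            Y.remove(e)
    return X  # X == Y at termination

# Toy nonnegative submodular function: a coverage-style objective.
universe = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*(universe[i] for i in S))) if S else 0
print(double_greedy(f, list(universe)))
```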

Relevance: 40.00%

Abstract:

The year is 2015, and the startup and tech business ecosphere has never seen more activity. In New York City alone, the tech startup industry is on track to amass $8 billion in total funding, the highest in 7 years (CB Insights, 2015). According to the Kauffman Index of Entrepreneurship (2015), this figure represents just 20% of the total funding in the United States. Thanks to platforms that link entrepreneurs with investors, there are simply more funding opportunities than ever, and funding can be initiated in a variety of ways (angel investors, venture capital firms, crowdfunding). And yet, in spite of all this, according to Forbes Magazine (2015), nine of ten startups will fail. Because of the unpredictable nature of the modern tech industry, it is difficult to pinpoint exactly why 90% of startups fail, but the general consensus amongst top tech executives is that "startups make products that no one wants" (Fortune, 2014). In 2011, author Eric Ries wrote a book called The Lean Startup in an attempt to solve this all-too-familiar problem. It was in this book that he developed the framework for the Hypothesis-Driven Entrepreneurship Process, an iterative process that aims to prove a market before actually launching a product. Ries discusses concepts such as the Minimum Viable Product, the smallest set of activities necessary to disprove a hypothesis (or business model characteristic), and encourages acting quickly and often: if you are to fail, then fail fast. In today's fast-moving economy, an entrepreneur cannot afford to waste his own time, nor his customer's time. The purpose of this thesis is to conduct an in-depth analysis of the Hypothesis-Driven Entrepreneurship Process in order to test the market viability of a real-life startup idea, ShowMeAround. The analysis follows the scientific Lean Startup approach, with the purpose of developing a functional business model and business plan. The objective is to conclude with an investment-ready startup idea, backed by rigorous entrepreneurial study.

Relevance: 40.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics is described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia, and comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
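
As a point of reference for the methodology of Part I, the sketch below implements plain DFA, the q = 2 special case of MF-DFA; MF-DFA generalizes it by replacing the variance of the detrended segments with q-th-order moments. The scales and test series are illustrative:

```python
# Minimal DFA: the slope of log F(s) vs log s estimates the scaling exponent.
import numpy as np

def dfa(series, scales, order=1):
    profile = np.cumsum(series - np.mean(series))   # integrated series
    fluctuations = []
    for s in scales:
        f2 = []
        for i in range(len(profile) // s):
            seg = profile[i * s:(i + 1) * s]
            x = np.arange(s)
            trend = np.polyval(np.polyfit(x, seg, order), x)  # local detrending
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(scales), np.log(fluctuations), 1)[0]

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
print("white noise, expected exponent ~0.5:",
      round(dfa(noise, [16, 32, 64, 128, 256]), 2))
```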

Relevance: 30.00%

Abstract:

Unsteady natural convection inside a triangular cavity subject to non-instantaneous heating of the inclined walls, in the form of an imposed temperature which increases linearly up to a prescribed steady value over a prescribed time, is reported. The development of the flow from start-up to a steady state has been described based on scaling analyses and direct numerical simulations. The ramp temperature has been chosen in such a way that the boundary layer reaches a quasi-steady mode before the growth of the temperature is completed. In this mode the thermal boundary layer at first grows in thickness, then contracts with increasing time. However, if the imposed wall-temperature growth period is sufficiently short, the boundary layer develops differently. Many houses have roofs of isosceles triangular cross-section, so the heat transfer process through the roof of such an attic-shaped space should be well understood: in building energy terms, one of the most important objectives for the design and construction of houses is to provide thermal comfort for occupants, and in the present energy-conscious society houses must also be energy efficient, i.e. the energy consumption for heating or air-conditioning them must be minimized.

Relevance: 30.00%

Abstract:

The unsteady natural convection boundary layer adjacent to an instantaneously heated inclined plate is investigated using an improved scaling analysis and direct numerical simulations. The development of the unsteady natural convection boundary layer following instantaneous heating may be classified into three distinct stages, a start-up stage, a transitional stage and a steady-state stage, which can be clearly identified in the analytical and numerical results. Major scaling relations for the velocity, the thicknesses and the development time of the natural convection boundary layer are obtained using triple-layer integral solutions and verified by direct numerical simulations over a wide range of flow parameters.

Relevance: 30.00%

Abstract:

We describe a scaling method for templating digital radiographs using conventional acetate templates, independent of template magnification and without the need for a calibration marker. The mean magnification factor for the radiology department was determined (119.8%, range 117%-123.4%), and this fixed magnification factor was used to scale the radiographs by the method described. 32 femoral heads on postoperative THR radiographs were then measured and compared to their actual size. The mean absolute accuracy was within 0.5% of actual head size (range 0 to 3%), with a mean absolute difference of 0.16 mm (range 0-1 mm, SD 0.26 mm). The Intraclass Correlation Coefficient (ICC) showed excellent reliability for both inter- and intraobserver measurements, with ICC scores of 0.993 (95% CI 0.988-0.996) for interobserver measurements and intraobserver scores ranging between 0.990-0.993 (95% CI 0.980-0.997).
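
The arithmetic implied by the method is a single division by the department's mean magnification factor. A minimal sketch (the 33.5 mm measurement below is invented for illustration):

```python
# Correct an on-screen radiograph measurement for known magnification.
MEAN_MAGNIFICATION = 1.198   # the 119.8% departmental mean reported above

def true_size(measured_mm):
    return measured_mm / MEAN_MAGNIFICATION

print(f"{true_size(33.5):.1f} mm")  # e.g. a 33.5 mm on-screen femoral head
```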

Relevance: 30.00%

Abstract:

A scaling analysis for the natural convection boundary layer adjacent to an inclined semi-infinite plate subject to non-instantaneous heating, in the form of an imposed wall temperature which increases linearly up to a prescribed steady value over a prescribed time, is reported. The development of the boundary layer flow from start-up to a steady state has been described based on scaling analyses and verified by numerical simulations. The analysis reveals that, if the period of temperature growth on the wall is sufficiently long, the boundary layer reaches a quasi-steady mode before the growth of the temperature is completed. In this mode the thermal boundary layer at first grows in thickness and then contracts with increasing time. However, if the imposed wall-temperature growth period is sufficiently short, the boundary layer develops differently, but after the wall-temperature growth is completed, the boundary layer develops as though the start-up had been instantaneous. The steady-state values of the boundary layer for both cases are ultimately the same.