828 results for Distributed parameter


Relevance:

20.00%

Publisher:

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation; therefore, transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus, IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grain parallel processing approach where the video is segmented at the group-of-pictures level.
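
A minimal sketch of the kind of computation-versus-storage trade-off decision described above: keep a transcoded video in the repository only while the expected cost of re-transcoding it exceeds the cost of storing it. The function name, rates and prices are hypothetical placeholders, not the thesis' actual cost model.

    # Illustrative sketch (not the thesis' algorithm): keep a transcoded video
    # cached while the expected cost of re-transcoding it exceeds the cost of
    # storing it. All rates and prices below are hypothetical placeholders.

    def keep_in_storage(expected_requests_per_hour: float,
                        transcode_cost_per_request: float,
                        storage_cost_per_gb_hour: float,
                        size_gb: float) -> bool:
        """Return True if storing the video for the next hour is cheaper than
        re-transcoding it on demand for the requests expected in that hour."""
        recompute_cost = expected_requests_per_hour * transcode_cost_per_request
        storage_cost = size_gb * storage_cost_per_gb_hour
        return recompute_cost >= storage_cost

    # Example: a popular video is kept, an unpopular one is evicted.
    print(keep_in_storage(12.0, 0.05, 0.01, 4.0))   # True  -> keep stored
    print(keep_in_storage(0.1, 0.05, 0.01, 4.0))    # False -> re-transcode on demand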

Relevance:

20.00%

Publisher:

Abstract:

This bachelor's thesis, written for Lappeenranta University of Technology and implemented in a medium-sized enterprise (SME), examines a distributed document migration system. The system was created to migrate a large number of electronic documents, along with their metadata, from one document management system to another, so as to enable a rapid switchover of an enterprise resource planning system inside the company. The thesis examines, through theoretical analysis, messaging as a possible enabler for distributing applications and how it naturally fits an event-based model, in which system transitions and states are expressed through recorded behaviours. This is put into practice by analysing the implemented migration system and how its core components, MassTransit, RabbitMQ and MongoDB, were orchestrated together to realize such a system. As a result, the thesis presents an architecture for a scalable and distributed system that could migrate hundreds of thousands of documents over a weekend, serving its goal of enabling a rapid system switchover.
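
A minimal single-process sketch of the message-driven migration pattern the abstract refers to, with a standard-library queue standing in for the RabbitMQ broker and a dict standing in for the target system; the MassTransit and MongoDB specifics are omitted, and all names (MigrateDocument, target_store) are hypothetical.

    # Toy illustration of message-driven document migration: producers publish
    # one message per document, a pool of consumers writes them to the target.
    import queue
    import threading

    class MigrateDocument:
        def __init__(self, doc_id, metadata, payload):
            self.doc_id, self.metadata, self.payload = doc_id, metadata, payload

    broker = queue.Queue()          # stands in for a RabbitMQ queue
    target_store = {}               # stands in for the destination system

    def worker():
        while True:
            msg = broker.get()
            if msg is None:          # poison pill: shut this consumer down
                break
            target_store[msg.doc_id] = (msg.metadata, msg.payload)
            broker.task_done()

    consumers = [threading.Thread(target=worker) for _ in range(4)]
    for c in consumers:
        c.start()

    for i in range(1000):            # publish one message per document
        broker.put(MigrateDocument(i, {"title": f"doc-{i}"}, b"..."))

    broker.join()                    # wait until every document is migrated
    for _ in consumers:
        broker.put(None)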

Relevance:

20.00%

Publisher:

Abstract:

The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power levels without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and in other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small current path impedance of the harmonic components. However, multiphase machines provide special characteristics compared with their three-phase counterparts: they have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on the inductance matrix diagonalization. The double-star machine is a special type of multiphase machine. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is considered a parameter. The diagonalization of the inductance matrix results in a simplified model structure, in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods to determine the machine inductances by finite-element analysis and on-site by voltage-source inverters. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation, and consequently the decoupled d–q machine model, are shown to model the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
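
To illustrate the diagonalization idea behind the decoupled model, the sketch below builds a symmetric 6x6 inductance matrix with mutual couplings inside and between two three-phase winding sets and shows that the orthogonal transformation formed by its eigenvectors removes the couplings. The inductance values are arbitrary examples, not the machine data of the thesis, and the plain eigenvector transformation is only a stand-in for the specific decoupling transformation derived in the work.

    # Sketch of inductance-matrix diagonalization: an orthogonal transformation
    # built from the eigenvectors of the symmetric inductance matrix turns the
    # mutually coupled phase inductances into a diagonal matrix.
    import numpy as np

    L_self, M_set, M_cross = 10e-3, 2e-3, 1e-3   # hypothetical inductances (H)
    L = np.full((6, 6), M_cross)                 # coupling between the two sets
    L[:3, :3] = M_set                            # coupling inside set 1
    L[3:, 3:] = M_set                            # coupling inside set 2
    np.fill_diagonal(L, L_self)                  # self-inductances

    eigvals, T = np.linalg.eigh(L)   # T is orthogonal because L is symmetric
    L_dec = T.T @ L @ T              # decoupled (diagonal) inductance matrix

    print(np.round(L_dec, 6))        # off-diagonal terms vanish (numerically ~0)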

Relevance:

20.00%

Publisher:

Abstract:

The capabilities, and thus the design complexity, of VLSI-based embedded systems have increased tremendously in recent years, riding the wave of Moore's law. Time-to-market requirements are also shrinking, imposing challenges on designers, who in turn seek to adopt new design methods to increase their productivity. As an answer to these new pressures, modern-day systems have moved towards on-chip multiprocessing technologies. New architectures have emerged in on-chip multiprocessing in order to utilize the tremendous advances of fabrication technology. Platform-based design is a possible solution for addressing these challenges. The principle behind the approach is to separate the functionality of an application from the organization and communication architecture of the hardware platform at several levels of abstraction. The existing design methodologies pertaining to the platform-based design approach do not provide full automation at every level of the design process, and sometimes the co-design of platform-based systems leads to sub-optimal systems. In addition, the design productivity gap in multiprocessor systems remains a key challenge with existing design methodologies. This thesis addresses the aforementioned challenges and discusses the creation of a development framework for platform-based system design, in the context of the SegBus platform, a distributed communication architecture. This research aims to provide automated procedures for platform design and application mapping. Structural verification support is also featured, thus ensuring correct-by-design platforms. The solution is based on a model-based process. Both the platform and the application are modeled using the Unified Modeling Language (UML). This thesis develops a Domain Specific Language to support platform modeling based on a corresponding UML profile. Object Constraint Language (OCL) constraints are used to support structurally correct platform construction. An emulator is introduced to allow performance estimation of the solution that is as accurate as possible at high abstraction levels. VHDL code is automatically generated, in the form of "snippets", to be employed in the arbiter modules of the platform, as required by the application. The resulting framework is applied in building an actual design solution for an MP3 stereo audio decoder application.
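
As a hypothetical illustration of the generated VHDL "snippets" mentioned above, the sketch below shows a tiny template-based generator that emits an arbiter priority constant for one platform segment from an application mapping. The template text, names and VHDL fragment are invented for illustration; they are not the actual templates or code produced by the framework.

    # Toy template-based code generation: fill a VHDL fragment from a mapping.
    from string import Template

    ARBITER_SNIPPET = Template(
        "-- generated for segment $segment\n"
        "constant PRIORITY_$segment : priority_array :=\n"
        "  ($priorities);\n"
    )

    def generate_snippet(segment: int, device_priorities: list) -> str:
        """Render one arbiter priority snippet for a platform segment."""
        return ARBITER_SNIPPET.substitute(
            segment=segment,
            priorities=", ".join(str(p) for p in device_priorities),
        )

    print(generate_snippet(0, [3, 1, 2, 0]))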

Relevance:

20.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with methods for finding bounds on the asymptotic behaviour of the expected exit time from a region around a fixed point for processes with normally distributed noise. Mainly, different types of autoregressive processes are treated. Four different methods are used. A method based on the principle of large deviations and a method that compares the exit time with a return time give upper bounds for the expected exit time. A martingale method and a method for normally distributed random variables give lower bounds. All the methods have both merits and drawbacks. The best results are obtained by combining the different methods. We obtain the limit of the asymptotic behaviour of the exit time for the multivariate autoregressive process, as well as the corresponding limit for the univariate autoregressive process of order n.
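
Since the abstract (translated from Swedish above) concerns the expected exit time of autoregressive processes with Gaussian noise, the sketch below estimates that quantity by plain Monte Carlo for a univariate AR(1) process leaving an interval around its fixed point. This is only a numerical illustration of the object being bounded, not one of the four analytical methods used in the thesis; all parameter values are arbitrary.

    # Monte Carlo estimate of the expected exit time of x_{t+1} = a*x_t + eps_t,
    # eps_t ~ N(0, sigma^2), from the interval [-r, r] around the fixed point 0.
    import numpy as np

    def mean_exit_time(a=0.8, sigma=0.5, r=1.0, n_paths=1000, max_steps=10**6, seed=0):
        rng = np.random.default_rng(seed)
        total = 0
        for _ in range(n_paths):
            x, t = 0.0, 0
            while abs(x) <= r and t < max_steps:
                x = a * x + rng.normal(0.0, sigma)   # AR(1) step with Gaussian noise
                t += 1
            total += t
        return total / n_paths

    print(mean_exit_time())                 # expected exit time from [-r, r]
    print(mean_exit_time(sigma=0.4))        # grows quickly as the noise level drops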

Relevance:

20.00%

Publisher:

Abstract:

Distributed storage systems are studied. Interest in such systems has become relatively wide due to the increasing amount of information that needs to be stored in data centers or different kinds of cloud systems. There are many kinds of solutions for storing the information on distributed devices, depending on the needs of the system designer. This thesis studies the questions of designing such storage systems and also the fundamental limits of such systems. Namely, the subjects of interest of this thesis include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions for exact-regenerating codes between the minimum bandwidth regenerating (MBR) and minimum storage regenerating (MSR) points are given. These codes exceed the time-sharing line of the extremal points in many cases. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always smaller than or equal to the capacity of a homogeneous system with symmetric repair with average node size and average repair bandwidth. A randomized construction for a locally repairable code with good minimum distance is given. It is shown that a random linear code of a certain natural type has a good minimum distance with high probability. Other properties of locally repairable codes are also studied.
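
For context on the MBR and MSR points named above, the sketch below evaluates the standard functional-repair trade-off: a file of M symbols is stored on n nodes holding alpha symbols each, any k nodes suffice to reconstruct it, and a failed node downloads beta symbols from each of d helpers; the classical capacity expression is the sum over i = 0..k-1 of min{alpha, (d-i)*beta}. The parameter values are illustrative, and these are the well-known functional-repair formulas, not the new constructions of the thesis.

    # Standard functional-repair trade-off: capacity, MSR point, MBR point.
    def capacity(k, d, alpha, beta):
        # classical cut-set bound / capacity for functional repair
        return sum(min(alpha, (d - i) * beta) for i in range(k))

    def msr_point(M, k, d):
        # (alpha, beta) minimizing per-node storage
        return M / k, M / (k * (d - k + 1))

    def mbr_point(M, k, d):
        # (alpha, beta) minimizing repair bandwidth
        beta = 2 * M / (k * (2 * d - k + 1))
        return d * beta, beta

    M, k, d = 12, 3, 4
    for name, (a, b) in (("MSR", msr_point(M, k, d)), ("MBR", mbr_point(M, k, d))):
        print(name, a, b, capacity(k, d, a, b))   # both points achieve capacity M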

Relevance:

20.00%

Publisher:

Abstract:

We present a critical analysis of the generalized use of the "impact factor". By means of the Kruskal-Wallis test, we show that it is not possible to compare distinct disciplines using the impact factor without adjustment. After assigning the median journal the value of one (1.000), the impact factor of each journal was recalculated by simple proportion (the rule of three). The adjusted values were homogeneous, thus permitting comparison among distinct disciplines.
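
A minimal sketch of the adjustment described above: within a set of journals, the impact factor of each journal is divided by that of the median journal, so the median journal takes the value 1.000 and the rescaled values become comparable across disciplines. The numbers are made-up examples, not data from the study.

    # Proportional rescaling of impact factors so the median journal equals 1.000.
    from statistics import median

    def adjust(impact_factors):
        m = median(impact_factors)
        return [round(x / m, 3) for x in impact_factors]

    ecology = [0.8, 1.5, 2.3, 3.1, 6.0]          # made-up impact factors
    cell_biology = [2.0, 4.5, 7.2, 10.1, 32.0]   # made-up impact factors
    print(adjust(ecology))        # median journal -> 1.0 in both disciplines,
    print(adjust(cell_biology))   # making the adjusted values comparable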

Relevance:

20.00%

Publisher:

Abstract:

Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state-space. The classical approach forms the basis for the analysis; the Box-Jenkins approach improves on the classical approach and deals with stationary time series. The state-space approach allows time-varying factors and covers a broader range of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as least squares (LSQ), Yule-Walker and maximum likelihood estimation (MLE), which are used in the above time series analysis approaches. In addition, the Kalman filter and smoothing techniques are integrated with the state-space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out repeatedly and integrated with MCMC to inspect how well the different estimation methods can identify the optimal model parameters. Identifiability is examined in both a probabilistic and a general sense, and the results are compared in order to study and present identifiability in a more informative way.
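
As a concrete example of one of the estimation methods named above, the sketch below implements Yule-Walker estimation of AR(p) coefficients from sample autocovariances and checks it on data simulated from a known AR(2) model. The simulated model and all values are illustrative, not the data or the exact estimator settings used in the thesis.

    # Yule-Walker estimation of AR(p) coefficients from sample autocovariances.
    import numpy as np

    def yule_walker(x, p):
        x = np.asarray(x) - np.mean(x)
        n = len(x)
        # biased sample autocovariances gamma(0..p)
        gamma = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
        R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
        return np.linalg.solve(R, gamma[1:])     # AR coefficients phi_1..phi_p

    rng = np.random.default_rng(1)
    phi_true = [0.6, -0.3]
    x = np.zeros(5000)
    for t in range(2, len(x)):
        x[t] = phi_true[0] * x[t - 1] + phi_true[1] * x[t - 2] + rng.normal()

    print(yule_walker(x, 2))        # close to [0.6, -0.3]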

Relevance:

20.00%

Publisher:

Abstract:

The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery perspective, the renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similar to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, ligno-cellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is separation and isolation of the different compounds the biomass is comprised of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economic and an environmental point of view. The separation process should therefore be optimized. Hemicelluloses can partly be extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a "black box". Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; therefore, extensive effort was put into developing a reliable calibration procedure. Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump. With this addition it was possible to control the extraction to any desired pH set point. When the pH dropped below the set point, the controller started pumping in alkali, and thereby the desired set point was maintained very accurately. Analyses of the extracted hemicelluloses showed that less hemicellulose was extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point would speed up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses from the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although this did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the economic cost. Since bark comprises 10-15 % of a tree, it is important to also consider it in a biorefinery concept.

Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses. A chemical characterization, done separately on inner and outer bark, showed that inner bark contained over 10 % stilbene glucosides, which could be extracted already at 100 °C with aqueous acetone.
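
A highly simplified sketch of the pH control loop described above: whenever the measured high-temperature pH falls below the set point, the controller commands the pump to dose alkali for the next control interval. The on/off control law, dose size and the toy process response are invented placeholders, not the actual controller or dosing behaviour used in the work.

    # Toy on/off pH controller: dose alkali whenever the reading drops below
    # the set point, counteracting the acidification caused by the extraction.
    def control_step(measured_ph: float, set_point: float, dose_ml: float = 1.0) -> float:
        """Return the volume of alkali (ml) to pump for one control interval."""
        return dose_ml if measured_ph < set_point else 0.0

    # toy simulation: the extraction slowly acidifies, the controller counteracts
    ph, set_point = 4.5, 4.0
    for minute in range(10):
        ph -= 0.12                                  # acids released (made-up rate)
        ph += 0.15 * control_step(ph, set_point)    # effect of one dose (made up)
        print(f"t={minute:2d} min  pH={ph:.2f}")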

Relevance:

20.00%

Publisher:

Abstract:

To study the dendritic morphology of retinal ganglion cells in wild-type mice, we intracellularly injected these cells with Lucifer yellow in an in vitro preparation of the retina. Subsequently, quantified values of dendritic thickness, number of branching points and level of stratification of 73 Lucifer yellow-filled ganglion cells were analyzed by statistical methods, resulting in a classification into 9 groups. The variables dendritic thickness, number of branching points per cell and level of stratification were independent of each other. Number of branching points and level of stratification were independent of eccentricity, whereas dendritic thickness was positively dependent (r = 0.37) on it. The frequency distribution of dendritic thickness tended to be multimodal, indicating the presence of at least two cell populations composed of neurons with dendritic diameters either smaller or larger than 1.8 µm ("thin" or "thick" dendrites, respectively). Three cells (4.5%) were bistratified, having thick dendrites, and the others (95.5%) were monostratified. Using k-means cluster analysis, monostratified cells with either thin or thick dendrites were further subdivided according to level of stratification and number of branching points: cells with thin dendrites were divided into 2 groups with outer stratification (0-40%) and 2 groups with inner (50-100%) stratification, whereas cells with thick dendrites were divided into one group with outer and 3 groups with inner stratification. We postulate that one group of cells with thin dendrites resembles cat β-cells, whereas one group of cells with thick dendrites includes cells that resemble cat α-cells.
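
The sketch below shows the kind of k-means grouping used above, applied to synthetic values of the three measured variables (dendritic thickness, number of branching points, stratification level). The synthetic populations, cluster count and library call are illustrative; they are not the 73 recorded cells or the exact clustering configuration of the study.

    # k-means clustering on synthetic morphological measurements.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    thin  = np.column_stack([rng.normal(1.2, 0.2, 30),   # thickness (um)
                             rng.normal(25, 5, 30),      # branching points
                             rng.uniform(0, 40, 30)])    # stratification (%)
    thick = np.column_stack([rng.normal(2.4, 0.3, 30),
                             rng.normal(40, 8, 30),
                             rng.uniform(50, 100, 30)])
    cells = np.vstack([thin, thick])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
    print(labels)   # the two synthetic populations separate into two clusters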

Relevance:

20.00%

Publisher:

Abstract:

Circadian timing is structured in such a way as to receive information from the external and internal environments, and its function is the timing organization of physiological and behavioral processes in a circadian pattern. In mammals, the circadian timing system consists of a group of structures, which includes the suprachiasmatic nucleus (SCN), the intergeniculate leaflet and the pineal gland. Neuron groups working as a biological pacemaker are found in the SCN, forming a biological master clock. We present here a simple model for the circadian timing system of mammals, which is able to reproduce two fundamental characteristics of biological rhythms: the endogenous generation of pulses and synchronization with the light-dark cycle. In this model, the biological pacemaker of the SCN was modeled as a set of 1000 homogeneously distributed coupled oscillators with long-range coupling forming a spherical lattice. The characteristics of the oscillator set were defined taking into account the Kuramoto oscillator dynamics, but we used a new method for estimating the equilibrium order parameter. Simultaneous activities of the excitatory and inhibitory synapses on the elements of the circadian timing circuit at each instant were modeled by specific equations for synaptic events. All simulation programs were written in Fortran 77, compiled and run on PC DOS computers. Our model exhibited responses in agreement with physiological patterns. The output frequencies of the oscillator system (maximal value of 3.9 Hz) were of the same order of magnitude as the firing frequencies recorded in suprachiasmatic neurons of rodents in vivo and in vitro (from 1.8 to 5.4 Hz).
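
As background for the oscillator set mentioned above, the sketch below simulates a small mean-field Kuramoto model and computes the usual order parameter r. The thesis' model instead uses 1000 oscillators on a spherical lattice with long-range coupling and a new estimate of the equilibrium order parameter; the mean-field version and all numbers here are only a toy illustration.

    # Mean-field Kuramoto model with the standard synchronization order parameter.
    import numpy as np

    rng = np.random.default_rng(42)
    N, K, dt, steps = 200, 1.5, 0.05, 1000
    omega = rng.normal(0.0, 0.5, N)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)     # initial phases

    for _ in range(steps):
        # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + K / N * coupling)

    r = abs(np.exp(1j * theta).mean())       # order parameter, r -> 1 when synchronized
    print(f"order parameter r = {r:.2f}")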

Relevance:

20.00%

Publisher:

Abstract:

With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Nowadays mobile smart devices (phones, tablets) have become ubiquitous, with everyone having access to at least one device. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing and processing capabilities. In this thesis, we propose, develop, implement and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application. We also present a model of the system, which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to IoT scenarios involving smart devices, reducing both the traffic sent to the Cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced, P2P smartphone-based applications.
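
A toy sketch of the peer-to-peer load-balancing idea: each incoming task is handed to the peer that currently reports the lowest accumulated load (a simple least-loaded policy). Peer names, task costs and the policy itself are made-up placeholders; LoadIoT's actual scheme and model are those described in the thesis.

    # Least-loaded task assignment among a set of peer devices.
    import heapq

    def balance(tasks, peers):
        """Assign each (task, cost) pair to the currently least-loaded peer."""
        heap = [(0.0, name) for name in peers]       # (current load, peer)
        heapq.heapify(heap)
        assignment = {name: [] for name in peers}
        for task, cost in tasks:
            load, name = heapq.heappop(heap)
            assignment[name].append(task)
            heapq.heappush(heap, (load + cost, name))
        return assignment

    tasks = [("sense", 1.0), ("compress", 3.0), ("upload", 2.0), ("classify", 4.0)]
    print(balance(tasks, ["phone-A", "phone-B", "tablet-C"]))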