977 results for Computation


Relevance: 10.00%

Publisher:

Abstract:

The subject of this Master's thesis is the development of a service for determining the current location of a mobile device. The service uses data obtained from the device's own GPS receiver when available, as well as data from other mobile devices present in the current environment, in order to acquire a more precise position. The computation environment is based on the context of the mobile device. The service is implemented as an application for the Nokia N8XX communicator series. The thesis presents the theoretical concept of the method and its practical implementation, describes the architecture and requirements of the application and how it functions, presents the user's work with the application, and makes recommendations for possible future improvements.
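The abstract does not spell out how the fixes from the different sources are combined. One common, simple fusion rule is inverse-variance weighting of the available position estimates; the coordinates, accuracies, and the rule below are illustrative assumptions, not the thesis's actual algorithm.

```python
# Hypothetical sketch: fusing a coarse fix from the device's own GPS
# receiver with finer fixes relayed by nearby devices, using
# inverse-variance weighting. All numbers are invented for illustration.

def fuse_positions(estimates):
    """Combine (lat, lon, variance) estimates into one weighted position."""
    total_w = sum(1.0 / var for _, _, var in estimates)
    lat = sum(la / var for la, _, var in estimates) / total_w
    lon = sum(lo / var for _, lo, var in estimates) / total_w
    return lat, lon

fixes = [
    (60.4510, 22.2680, 100.0),  # own receiver, variance 100 m^2
    (60.4506, 22.2672, 25.0),   # nearby device A, more precise
    (60.4504, 22.2674, 25.0),   # nearby device B, more precise
]
lat, lon = fuse_positions(fixes)
```

The fused position is pulled toward the more precise fixes, which is the point of combining sources.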

Relevance: 10.00%

Publisher:

Abstract:

The main objective of the work was to develop the target company's cost accounting, for which the actual costs of the company's activities were determined and a new spreadsheet-based pricing model was built. The actual costs were determined using activity-based costing. The company's previous cost accounting was based on traditional absorption costing. The work was divided into two phases: an assessment of the current state of the company's cost accounting, and the implementation of activity-based costing. The theoretical part of the first phase presented the methods of traditional cost accounting and activity-based costing and compared them with each other. The empirical part covered the company's cost structure, product cost accounting, pricing process, and the different pricing objects. Based on the current-state analysis, a list of things to improve in the current cost accounting and pricing was drawn up, and it was decided to carry out the development using activity-based costing. The second phase presented the theory related to implementing and adopting activity-based costing, after which the activity costs were calculated and the new pricing model was built. The pricing model gained speed from a new way of calculating materials. As results of the work, it was found that for many activities the actual costs differed from the costs calculated with absorption costing, and this had distorted product pricing. With the adoption of activity-based costing, the company's cost accounting and product pricing were brought in line with actual costs, and the faster pricing yielded significant cost savings.
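The core mechanics of activity-based costing are simple: divide each activity's cost pool by its cost-driver volume to get a rate, then cost a product by the driver quantities it consumes. The activities, cost pools, and figures below are invented for illustration; the company's actual data and activity structure are not public.

```python
# Minimal activity-based costing sketch with invented figures.

activity_costs = {"machining": 90000.0, "setup": 20000.0, "packing": 10000.0}
driver_volumes = {"machining": 4500.0,   # machine hours per year
                  "setup": 200.0,        # number of setups per year
                  "packing": 10000.0}    # units packed per year

# Cost rate per unit of each cost driver.
rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

def product_cost(consumption, direct_materials):
    """Direct materials plus the activity costs driven by the product."""
    return direct_materials + sum(rates[a] * q for a, q in consumption.items())

# A batch consuming 2 machine hours, 1 setup, and 50 packed units.
cost = product_cost({"machining": 2.0, "setup": 1.0, "packing": 50.0},
                    direct_materials=120.0)
```

Unlike a single plant-wide overhead rate, each activity's rate reflects what actually drives its cost, which is why the thesis found the absorption-costed figures distorted.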

Relevance: 10.00%

Publisher:

Abstract:

Cellular automata are models of massively parallel computation. A cellular automaton consists of cells arranged in some kind of regular lattice, together with a local update rule which updates the state of each cell according to the states of the cell's neighbors on each step of the computation. This work focuses on reversible one-dimensional cellular automata, in which the cells are arranged in a two-way infinite line and the computation is reversible, that is, the previous states of the cells can be derived from the current ones. In this work it is shown that several properties of reversible one-dimensional cellular automata are algorithmically undecidable, that is, there exists no algorithm that would tell whether a given cellular automaton has the property or not. It is shown that the tiling problem of Wang tiles remains undecidable even in some very restricted special cases. It follows that it is undecidable whether some given states will always appear in computations by a given cellular automaton. It also follows that a weaker form of expansivity, a concept from dynamical systems, is an undecidable property for reversible one-dimensional cellular automata. Several further properties of dynamical systems are shown to be undecidable for reversible one-dimensional cellular automata: sensitivity to initial conditions and topological mixing are undecidable properties, and non-sensitive and mixing cellular automata are recursively inseparable. It follows that chaotic behavior is also an undecidable property for reversible one-dimensional cellular automata.
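The notion of reversibility can be made concrete with a toy example. The sketch below applies a radius-1 local rule to every cell and uses the left-shift rule, whose inverse is the right shift; for simplicity it works on a finite cyclic configuration, whereas the thesis concerns two-way infinite ones.

```python
# Illustration of a reversible one-dimensional cellular automaton:
# the left-shift rule applied to a cyclic configuration, then undone
# by its inverse rule (the right shift).

def step(config, rule):
    """Apply a radius-1 local rule to every cell of a cyclic configuration."""
    n = len(config)
    return [rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

def left_shift(left, centre, right):
    return right       # each cell copies its right neighbor

def right_shift(left, centre, right):
    return left        # inverse rule: copy the left neighbor

config = [0, 1, 1, 0, 1, 0, 0, 1]
forward = step(config, left_shift)
recovered = step(forward, right_shift)   # previous states derived back
```

For a reversible automaton every configuration has exactly one predecessor, which is what `recovered == config` demonstrates here; deciding whether an arbitrary rule has this property in general is the kind of question the thesis studies.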

Relevance: 10.00%

Publisher:

Abstract:

Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, etc. Current inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to derive estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs, or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as far as possible, and the methods need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In large study areas with dense forests, field work is expensive and should therefore be minimized. To make inventories cost-efficient, field work could be partly replaced with information from formerly measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of a new area.
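One simple, automatable variable-selection rule of the kind such inventories need can be sketched as follows: rank candidate remote-sensing features by absolute correlation with the field-measured response, and skip features nearly collinear with an already-selected one. The data, feature names, and thresholds are invented; the thesis's actual selection method may well differ.

```python
# Hedged sketch of correlation-ranked variable selection with a
# collinearity filter. All data and thresholds are illustrative.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sqrt(sum((a - mx) ** 2 for a in x))
    vy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def select(features, response, collinear_limit=0.95):
    """Greedily keep features in order of relevance, skipping near-duplicates."""
    ranked = sorted(features, key=lambda f: -abs(pearson(features[f], response)))
    chosen = []
    for f in ranked:
        if all(abs(pearson(features[f], features[g])) < collinear_limit
               for g in chosen):
            chosen.append(f)
    return chosen

response = [10.0, 14.0, 13.0, 20.0, 18.0]            # e.g. stem volume per plot
features = {
    "height_p90": [8.0, 11.0, 10.0, 16.0, 14.0],     # strongly related
    "height_p95": [8.1, 11.1, 10.1, 16.2, 14.1],     # collinear duplicate
    "texture": [3.0, 1.0, 4.0, 2.0, 5.0],            # independent feature
}
chosen = select(features, response)
```

The collinear `height_p95` is dropped even though it correlates well with the response, which is exactly the over-fitting risk the abstract mentions.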

Relevance: 10.00%

Publisher:

Abstract:

The background and inspiration for this study is earlier research on applications of boundary identification in the metal industry. Efficient boundary identification allows smaller safety margins and longer service intervals for the equipment in industrial high-temperature processes, without increased risk of equipment failure. Ideally, a boundary identification method would be based on monitoring some indirect variable that can be measured routinely or at low cost. One such variable for smelting furnaces is the temperature at various positions in the wall. This can be used as the input to a boundary identification method for monitoring the wall thickness of the furnace. We give a background and motivation for the choice of the geometrically one-dimensional dynamic model for boundary identification, discussed in the later part of the work, over a multi-dimensional geometric description. In the industrial applications in question, the dynamics and the advantages of a simple model structure are more important than an exact geometric description. Solution methods for the so-called sideways heat equation have much in common with boundary identification. We therefore study properties of the solutions to this equation, the influence of measurement errors and what is usually called contamination by measurement noise, regularization, and more general consequences of the ill-posedness of the sideways heat equation. We study a set of three different methods for boundary identification, of which the first two were developed from a strictly mathematical starting point and the third from a more applied one. The methods have different properties, with specific advantages and disadvantages. The purely mathematically based methods are characterized by good accuracy and low numerical cost, at the price of low flexibility in the formulation of the partial differential equation describing the model. The third, more applied, method is characterized by poorer accuracy caused by a higher degree of ill-posedness of the more flexible model. For this method an error estimate was also attempted, which was later observed to agree with practical computations using the method. The study can be regarded as a good starting point and mathematical basis for the development of industrial applications of boundary identification, especially toward handling nonlinear and discontinuous material properties and sudden changes caused by wall material falling off. With the methods treated, it appears possible to achieve a robust, fast, and sufficiently accurate boundary identification method of limited complexity.
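The forward version of the one-dimensional wall model underlying the boundary identification problem is ordinary heat conduction through a slab, which can be sketched with an explicit finite-difference scheme. The inverse (sideways) problem of inferring the inner wall surface from sensor temperatures is the ill-posed one that needs regularization; this forward model and its parameters are illustrative only.

```python
# Forward sketch: explicit (FTCS) finite differences for u_t = alpha * u_xx
# through a furnace wall slab, hot face and cool face held fixed.
# All material and grid parameters are invented for illustration.

def heat_step(T, alpha, dx, dt):
    """One explicit step; ends are Dirichlet boundaries."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    return [T[0]] + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
                     for i in range(1, len(T) - 1)] + [T[-1]]

# Wall initially at 20 C except the hot face, held at 1200 C.
T = [1200.0] + [20.0] * 9
for _ in range(2000):
    T = heat_step(T, alpha=1e-6, dx=0.02, dt=100.0)
```

After enough steps the profile approaches the steady linear gradient through the wall; the boundary identification task runs the other way, estimating where the hot face (or the remaining wall thickness) is from temperatures measured inside the wall.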

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this study is to examine the investment activity of Finnish companies and how the 2009 financial crisis affected it. The study also examines the factors affecting companies' investment activity, the investment process, and the investment appraisal and risk assessment methods used by companies. The theoretical framework of the study is built on the investment literature of management accounting. The empirical part consists of interviews with managers of electricity transmission and auditing companies. The study showed that companies making large investments continue to invest actively during a recession, unlike companies making small investments. The study also found that the most commonly used investment appraisal methods in the target companies were the payback period and comparative calculations of alternatives. However, the consideration of investment risk proved to be limited.

Relevance: 10.00%

Publisher:

Abstract:

Cloud computing enables on-demand network access to shared resources (e.g., computation, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort. Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and system software in the data centers. Software as a service (SaaS) is one of the cloud service models: software deployed as a hosted service and accessed over the Internet. In SaaS, the consumer uses the provider's applications running in the cloud. SaaS separates the possession and ownership of software from its use, and the applications can be accessed from any device through a thin client interface. A typical SaaS application is used with a web browser and priced monthly. In this thesis, the characteristics of cloud computing and SaaS are presented, and a few implementation platforms for SaaS are discussed. Four different SaaS implementation cases and one transformation case are then examined. The pros and cons of SaaS are studied on the basis of literature references and an analysis of the SaaS implementations and the transformation case, from both the customer's and the service provider's point of view. In addition, the pros and cons of on-premises software are listed. The purpose of this thesis is to find out when SaaS should be utilized and when it is better to choose traditional on-premises software. The qualities of SaaS bring many benefits to both the customer and the provider. A customer should utilize SaaS when it provides cost savings, ease of use, and scalability over on-premises software. SaaS is reasonable when the customer does not need tailoring but only a simple, general-purpose service, and the application supports the customer's core business. A provider should utilize SaaS when it offers cost savings, scalability, faster development, and a wider customer base over on-premises software. It is wise to choose SaaS when the application is cheap, is aimed at a mass market, needs frequent updating, needs high-performance computing, needs to store large amounts of data, or gains some other direct value from the cloud infrastructure.

Relevance: 10.00%

Publisher:

Abstract:

As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have emerged as a viable candidate for achieving better performance and package density than traditional 2D ICs, and combining the benefits of 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and the power dissipation on each layer with a small performance penalty.
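The congestion-aware output selection described above can be sketched in a few lines: among the output ports that move a packet closer to its destination (minimal routing), pick the one whose downstream router currently reports the lowest congestion. The congestion metric and tie-breaking below are illustrative assumptions, not the thesis's exact algorithms.

```python
# Hedged sketch of congestion-aware minimal adaptive routing on a 2D mesh.
# Congestion values (e.g. buffer occupancy) per router are assumed given.

def productive_dirs(cur, dst):
    """Output directions that reduce the distance to the destination."""
    (x, y), (dx, dy) = cur, dst
    dirs = []
    if dx > x: dirs.append(("E", (x + 1, y)))
    if dx < x: dirs.append(("W", (x - 1, y)))
    if dy > y: dirs.append(("N", (x, y + 1)))
    if dy < y: dirs.append(("S", (x, y - 1)))
    return dirs

def route_step(cur, dst, congestion):
    """Pick the least-congested productive neighbor (adaptive output selection)."""
    options = productive_dirs(cur, dst)
    return min(options, key=lambda d: congestion.get(d[1], 0))

# Packet at (1, 1) heading to (3, 3); the east neighbor is congested,
# so the router deflects the packet north while staying on a minimal path.
congestion = {(2, 1): 9, (1, 2): 2}
direction, next_hop = route_step((1, 1), (3, 3), congestion)
```

Restricting the choice to productive directions keeps every route minimal, while the congestion comparison provides the adaptivity.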

Relevance: 10.00%

Publisher:

Abstract:

In this study, feature selection in classification problems is highlighted. The role of feature selection methods is to select important features by discarding redundant and irrelevant features in the data set; we investigated this using fuzzy entropy measures. We developed a fuzzy entropy based feature selection method using Yu's similarity and tested it with a similarity classifier, also based on Yu's similarity, on a real-world dermatology data set. Performing feature selection based on fuzzy entropy measures before classification gave very promising empirical results: the highest classification accuracy achieved on the data set was 98.83%. The results were then compared with results previously obtained using other similarity classifiers, and they show better accuracy than those achieved before. The methods used helped to reduce the dimensionality of the data set and to speed up the computation time of the learning algorithm, thereby simplifying the classification task.
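The fuzzy-entropy idea can be illustrated with the classic De Luca-Termini measure: treat scaled feature values as membership degrees and score each feature, keeping the low-entropy (more "crisp", hence more informative) ones. This is a simplified stand-in; the thesis builds memberships from Yu's similarity to class ideal vectors, which is not reproduced here.

```python
# Sketch of fuzzy-entropy-based feature ranking with the De Luca-Termini
# measure. Feature values, already scaled to [0, 1], are treated directly
# as membership degrees; this simplification is an assumption.
from math import log

def fuzzy_entropy(memberships):
    """De Luca-Termini entropy: 0 for crisp values, maximal at mu = 0.5."""
    h = 0.0
    for mu in memberships:
        if 0.0 < mu < 1.0:
            h -= mu * log(mu) + (1.0 - mu) * log(1.0 - mu)
    return h

def rank_features(features):
    """Features sorted from most to least informative (lowest entropy first)."""
    return sorted(features, key=lambda f: fuzzy_entropy(features[f]))

features = {
    "crisp": [0.05, 0.95, 0.9, 0.1],   # near-certain memberships
    "fuzzy": [0.5, 0.45, 0.55, 0.5],   # maximally ambiguous
}
ranking = rank_features(features)
```

Discarding the highest-entropy features reduces dimensionality before classification, which is the speed-up the abstract reports.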

Relevance: 10.00%

Publisher:

Abstract:

The purpose of this master's thesis was to perform simulations involving the use of random numbers while testing hypotheses, especially on two sample populations compared by their means, variances, or Sharpe ratios. Specifically, we simulated some well-known distributions in Matlab and checked the accuracy of hypothesis testing. Furthermore, we examined what happens when the bootstrapping method described by Efron is applied to the simulated data. In addition, the robust Sharpe ratio hypothesis test presented in the paper of Ledoit and Wolf was applied to measure the statistical significance of the performance difference between two investment funds, by testing whether there is a statistically significant difference between their Sharpe ratios. We collected literature on the topic and simulated in Matlab as many random numbers as possible to serve our purpose. The results gave us a good understanding that tests are not always accurate: for instance, when testing whether two normally distributed random vectors come from the same normal distribution, the Jarque-Bera test for normality indicated that for the normal random vectors r1 and r2, only 94.7% and 95.7%, respectively, were identified as coming from a normal distribution, while 5.3% and 4.3% failed to show the truth already known. However, when we introduced Efron's bootstrapping method for estimating the p-values on which the hypothesis decision is based, the test was 100% accurate. These results show that bootstrapping methods should always be considered when testing hypotheses or estimating statistics, because in most cases the outcomes are accurate and computational errors are minimized. The robust Sharpe ratio test, which uses one of the bootstrapping methods (the studentized one), was applied first to simulated data from distributions of many kinds and shapes, and then to real data on hedge and mutual funds. The test performed well, agreeing with the existence of a statistically significant difference between their Sharpe ratios, as described in the paper of Ledoit and Wolf.
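The logic of a bootstrap test for a Sharpe-ratio difference can be sketched compactly. Note the simplifications: the thesis applies Ledoit and Wolf's studentized (block) bootstrap in Matlab, whereas this sketch uses a plain i.i.d. bootstrap of the raw difference on synthetic return series, so the numbers are illustrative only.

```python
# Simplified bootstrap test for H0: Sharpe(r1) == Sharpe(r2).
# i.i.d. resampling and synthetic data are illustrative assumptions.
import random
from statistics import mean, pstdev

def sharpe(returns):
    return mean(returns) / pstdev(returns)

def bootstrap_pvalue(r1, r2, n_boot=1000, seed=42):
    """Two-sided p-value for the observed Sharpe-ratio difference."""
    rng = random.Random(seed)
    observed = sharpe(r1) - sharpe(r2)
    # Center each series so the null hypothesis holds in the resampling world.
    c1 = [x - mean(r1) for x in r1]
    c2 = [x - mean(r2) for x in r2]
    hits = 0
    for _ in range(n_boot):
        b1 = [rng.choice(c1) for _ in range(len(r1))]
        b2 = [rng.choice(c2) for _ in range(len(r2))]
        if abs(sharpe(b1) - sharpe(b2)) >= abs(observed):
            hits += 1
    return hits / n_boot

rng = random.Random(0)
fund_a = [rng.gauss(0.010, 0.02) for _ in range(120)]  # higher mean return
fund_b = [rng.gauss(0.001, 0.02) for _ in range(120)]
p = bootstrap_pvalue(fund_a, fund_b)
```

Centering the series before resampling is what imposes the null; comparing a fund with itself must then give a p-value of 1, a useful sanity check.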

Relevance: 10.00%

Publisher:

Abstract:

Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is still unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intra-chip communication can be naturally implemented by a memristive crossbar structure.
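The single-device behaviour the first part of the thesis analyses is often introduced through the HP Labs linear dopant drift model, in which the memristance moves between an on- and an off-resistance as charge flows through the device. The parameter values below are illustrative, not those of any real device.

```python
# Sketch of the HP Labs linear drift memristor model: the internal state w
# (doped-region width) integrates the current, and the memristance is a
# w-weighted mix of R_ON and R_OFF. All parameters are illustrative.

R_ON, R_OFF = 100.0, 16000.0   # ohm
D = 10e-9                      # device thickness, m
MU = 1e-14                     # dopant mobility, m^2 s^-1 V^-1

def simulate(voltage, dt, steps, w0):
    """Integrate dw/dt = MU * (R_ON / D) * i(t), with w clamped to [0, D]."""
    w, history = w0, []
    for _ in range(steps):
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # current memristance
        i = voltage / m
        w = min(max(w + MU * (R_ON / D) * i * dt, 0.0), D)
        history.append(m)
    return history

# A constant positive voltage drives the state forward, lowering resistance:
# this is the nonvolatile "programming" that stateful logic builds on.
history = simulate(voltage=1.0, dt=1e-4, steps=2000, w0=0.1 * D)
```

Because the state persists when the voltage is removed, the final memristance is the stored value, which is what makes in-memory (stateful) logic possible.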

Relevance: 10.00%

Publisher:

Abstract:

Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a standard method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e., simplified and computationally efficient models, are required. This in turn requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts for a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history of a structural detail during or after dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
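The efficiency of the modal route comes from a simple identity: if the elastic deformation is a combination of a few retained modes, the stress at any detail is the same combination of precomputed modal stress fields, so recovering a stress history costs only a small dot product per time step. The mode count, modal stresses, and coordinates below are invented for illustration.

```python
# Sketch of modal stress recovery: sigma(t) = sum_i q_i(t) * sigma_i,
# where sigma_i are precomputed modal stresses at a hot-spot node and
# q_i(t) are the modal coordinates from the dynamic simulation.
# All numbers are illustrative.

# Precomputed stress contribution (MPa) of each retained mode at one node.
modal_stress = [120.0, -35.0, 8.0]

def stress_history(modal_coords):
    """Superpose modal stresses for each simulation time step."""
    return [sum(q * s for q, s in zip(q_t, modal_stress))
            for q_t in modal_coords]

# Modal coordinates q(t) from a (hypothetical) multibody simulation.
q = [
    [0.00, 0.00, 0.0],
    [0.10, 0.02, 0.1],
    [0.25, -0.04, 0.2],
]
sigma = stress_history(q)
```

The expensive finite element work (solving modal stresses for the detail, e.g. in a sub-model) happens once offline; the per-step cost during simulation is negligible, which is what makes real-time use plausible.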

Relevance: 10.00%

Publisher:

Abstract:

Waste incineration plants are increasingly being established in China. A low heating value and high moisture content, due to a large proportion of biowaste in the municipal solid waste (MSW), can be regarded as typical characteristics of Chinese MSW. Two incineration technologies are mainly established in China: the stoker grate and the circulating fluidized bed (CFB). Both are designed to incinerate mixed MSW. However, there have been difficulties in reaching a sufficient temperature in the combustion process due to the low heating value of the MSW. This has led to the use of an auxiliary fossil fuel, often throughout the entire incineration process. The objective of this study was to design alternative waste-to-energy (WTE) scenarios for existing WTE plants, with the aim of improving the material and energy efficiency as well as the feasibility of the plants. Moreover, the aim of this thesis was to find the key factors affecting the feasibility of the scenarios. Five WTE plants were selected as study targets. The data necessary for the calculations was obtained from the literature and from the operators of the target WTE plants. The scenarios created were based on mechanical-biological treatment (MBT) technologies, in which the produced solid recovered fuel (SRF) was fed as an auxiliary fuel into a WTE plant, replacing the fossil fuel. The mechanically separated biowaste was treated either in an anaerobic digestion (AD) plant, a biodrying plant, a thermal drying plant, or a combined AD and thermal drying plant. An interactive spreadsheet-based computation tool was designed to estimate the viability of the scenarios in the different WTE cases. Key figures for the improved material and energy efficiency, such as the additional electricity generated and the waste diverted from landfill, were obtained as results, together with economic indicators such as annual profits (or costs), payback period, and internal rate of return (IRR). The results show that the AD scenario was the most profitable in most of the cases. The current heating value of the MSW and the tipping fee for the received MSW emerged as the most important factors for feasibility.
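The two economic indicators named above are standard and easy to sketch: the simple payback period counts the years until cumulative cash flow recovers the investment, and the IRR is the discount rate at which the net present value (NPV) crosses zero, found here by bisection. The cash flows are invented for illustration.

```python
# Sketch of payback period and IRR computation with illustrative cash flows.

def payback_period(investment, annual_cashflows):
    """Years until cumulative cash flow recovers the initial investment."""
    remaining = investment
    for year, cf in enumerate(annual_cashflows, start=1):
        remaining -= cf
        if remaining <= 0:
            return year
    return None  # not paid back within the horizon

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) investment at t=0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection on NPV; assumes a single sign change on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

flows = [-1000.0, 400.0, 400.0, 400.0, 400.0]  # invented 4-year project
pb = payback_period(1000.0, [400.0] * 4)
r = irr(flows)
```

For this cash-flow pattern (one sign change) the IRR is unique, so bisection is safe; real scenario comparisons would feed each scenario's cash-flow series through the same functions.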

Relevance: 10.00%

Publisher:

Abstract:

Existing spectral-difference meters do not correspond sufficiently well to the CIEDE2000 color difference. The goal of this work was to implement a method that computes the difference between color spectra such that the result corresponds to the CIEDE2000 color difference. The development resulted in a method based on precomputed differences between known spectra and on computation parameters derived from them. The method can compute spectral differences only between spectra that are obtained by mixing the known spectra. Computing the parameters is a laborious process, and the computation was therefore distributed across several computers. The method was made to correspond well to CIEDE2000 for most spectra, with a few exceptions; the problems stem from a mathematical property of the model. The spectral-difference meter shows a nonzero value for metameric spectra even though CIEDE2000 shows zero, which demonstrates the more correct behavior of the spectral-difference meter compared to CIEDE2000.

Relevance: 10.00%

Publisher:

Abstract:

It is well known that numerical solutions of incompressible viscous flows are of great importance in fluid dynamics. The graphics output capabilities of their computational codes have revolutionized the communication of ideas to the non-specialist public. In general, those codes include, among their hydrodynamic features, the visualization of flow streamlines (essentially a form of contour plot showing the line patterns of the flow) and of the magnitudes and orientations of the velocity vectors. However, the standard finite element formulation for computing streamlines suffers from the disadvantage of requiring the determination of boundary integrals, leading to cumbersome implementations in the construction of the finite element code. In this article, we introduce an efficient way, via an alternative variational formulation, to determine the streamlines of fluid flows that does not need the computation of contour integrals. To illustrate the good performance of the proposed alternative formulation, we capture the streamlines of three viscous models: Stokes, Navier-Stokes, and viscoelastic flows.
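The general idea of recovering streamlines without contour integrals can be illustrated with the standard streamfunction route: for 2D incompressible flow the streamfunction psi satisfies the Poisson equation lap(psi) = -omega, with vorticity omega = dv/dx - du/dy, and streamlines are the iso-contours of psi. The tiny finite-difference Jacobi solve below (a rigid-body rotation with known exact solution) illustrates this general Poisson route, not the article's specific variational formulation.

```python
# Streamfunction recovery sketch: solve lap(psi) = -omega on a unit square
# by Jacobi iteration. For the rigid rotation u = -y, v = x about the
# centre, omega = 2 everywhere and the exact psi is -(x^2 + y^2)/2.

N = 21                       # grid points per side
h = 1.0 / (N - 1)
omega = [[2.0] * N for _ in range(N)]

def exact_psi(i, j):
    x, y = i * h - 0.5, j * h - 0.5
    return -(x * x + y * y) / 2.0

# Dirichlet data from the exact solution on the boundary, zero inside.
psi = [[exact_psi(i, j) if i in (0, N - 1) or j in (0, N - 1) else 0.0
        for j in range(N)] for i in range(N)]

for _ in range(2000):        # Jacobi sweeps for the 5-point Laplacian
    new = [row[:] for row in psi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new[i][j] = 0.25 * (psi[i + 1][j] + psi[i - 1][j]
                                + psi[i][j + 1] + psi[i][j - 1]
                                + h * h * omega[i][j])
    psi = new
```

Since the exact solution is quadratic, the 5-point stencil reproduces it at the nodes, so the iteration converges to the exact streamfunction; its contours are the circular streamlines of the rotation.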