994 results for Future opportunity


Abstract:

In this work, the feasibility of floating-gate technology for analog computing platforms in a scaled-down general-purpose CMOS technology is considered. When the technology is scaled down, the performance of analog circuits tends to degrade because the process parameters are optimized for digital transistors and the scaling involves a reduction of supply voltages. Generally, the challenge in analog circuit design is that all salient design metrics, such as power, area, bandwidth and accuracy, are interrelated. Furthermore, poor flexibility, i.e. the lack of reconfigurability, IP reuse, etc., can be considered the most severe weakness of analog hardware. On this account, digital calibration schemes are often required for improved performance or yield enhancement, whereas high flexibility/reconfigurability cannot be easily achieved. Here, it is discussed whether these obstacles can be worked around by using floating-gate transistors (FGTs), and the problems associated with their practical implementation are analyzed. FGT technology is attractive because it is electrically programmable and features a built-in, charge-based non-volatile memory. Apart from being ideal for canceling circuit non-idealities due to process variations, FGTs can also be used as computational or adaptive elements in analog circuits. The nominal gate oxide in deep sub-micron (DSM) processes is too thin to support robust charge retention, and consequently the FGT becomes leaky. In principle, non-leaky FGTs can be implemented in a scaled-down process without any special masks by using “double”-oxide transistors, which are intended for providing devices that operate with higher supply voltages than general-purpose devices. In practice, however, the technology scaling poses several challenges, which are addressed in this thesis. To provide a sufficiently wide-ranging survey, six prototype chips of varying complexity were implemented in four different DSM process nodes and investigated from this perspective. The focus is on non-leaky FGTs, but the presented autozeroing floating-gate amplifier (AFGA) demonstrates that leaky FGTs may also find a use. The simplest test structures contain only a few transistors, whereas the most complex experimental chip is an implementation of a spiking neural network (SNN) comprising thousands of active and passive devices. More precisely, it is a fully connected two-layer SNN with 256 FGT synapses, in which the adaptive properties of the FGTs are exploited. A compact realization of Spike Timing Dependent Plasticity (STDP) within the SNN is one of the key contributions of this thesis. Finally, the considerations in this thesis extend beyond CMOS to emerging nanodevices. To this end, one promising emerging nanoscale circuit element, the memristor, is reviewed and its applicability to analog processing is considered. Furthermore, it is discussed how FGT technology can be used to prototype computation paradigms compatible with these emerging two-terminal nanoscale devices in a mature and widely available CMOS technology.
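
The abstract does not detail the STDP circuit itself; purely as an illustration of the behavioural rule such an FGT synapse would approximate, the following is a minimal sketch of the standard pair-based STDP weight update with exponential timing windows. All parameter values are illustrative assumptions, not figures from the thesis.

```python
import math

def stdp_weight_update(delta_t_ms, w,
                       a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0,
                       w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one (delta_t_ms = t_post - t_pre > 0), otherwise depress.
    All constants are illustrative assumptions, not values from the thesis."""
    if delta_t_ms > 0:
        dw = a_plus * math.exp(-delta_t_ms / tau_plus)     # potentiation window
    else:
        dw = -a_minus * math.exp(delta_t_ms / tau_minus)   # depression window
    # Keep the synaptic weight within its allowed range.
    return min(w_max, max(w_min, w + dw))

# Example: a pre-before-post pairing 5 ms apart slightly strengthens the synapse.
print(stdp_weight_update(5.0, 0.5))   # ~0.508
```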

Abstract:

The purpose of this Master’s thesis was to study business model development in the Finnish newspaper industry during the next ten years through scenario planning. The objective was to see how the business models will develop amid the many changes in the industry, what factors are affecting the change, what the implications of these changes are for the players in the industry, and how Finnish newspaper companies should evolve in order to succeed in the future. In this thesis the business model change is studied based on all the elements of business models, as it was discovered that the industry too often focuses on changes in only a few of those elements, and a broader view can provide valuable information for the companies. The results revealed that the industry will be affected by many changes during the next ten years. Scenario planning provides a good tool for analyzing this change and for developing valuable options for businesses. After conducting a series of interviews and identifying the forces affecting the change, four different scenarios were developed, centered on the role that newspapers will take and the level at which they will provide content in the future. These scenarios indicated that there is a variety of ways in which the business models may develop and that companies should start making decisions proactively in order to succeed. As the business model elements are interdependent, changes made to any one of them will affect the whole model, making these decisions about the role and the level of content important for the companies. In the future, it is likely that the Finnish newspaper industry will include many different kinds of business models, some of which may be drastically different from the current ones and some of which may still be similar but take the new kind of media environment better into account.

Abstract:

With a Sales and Operations Planning (S&OP) process, a company aims to manage demand and supply through planning and forecasting. The studied company uses an integrated S&OP process to improve its operations. The aim of this thesis is to develop this business process by finding the best possible way to manage soft information in S&OP, while also understanding the importance and types (assumptions, risks and opportunities) of soft information in S&OP. Soft information helps to refine future S&OP planning by taking into account the uncertainties that affect the balance of long-term demand and supply (typically 12-18 months). The literature review was used to create a framework for a soft information management process in S&OP. No concrete way to manage soft information was found in the existing literature. Because of the scarce literature available, the Knowledge Management literature, which deals with the same type of information management as soft information management, was used as the basis for creating the framework. The framework defines a four-stage process for managing soft information in S&OP, including the required support systems. The first phase is collecting and acquiring soft information, which also includes categorization. Categorization is the cornerstone for identifying the different requirements that need to be taken into consideration when managing soft information in the S&OP process. The next phase focuses on storing the data; its purpose is to ensure that the soft information is managed in a common system (a support system) so that the following phase, sharing and application, can make it available to the S&OP users who need it. The target of the last phase is to use the soft information to understand the assumptions and thinking of users behind the numbers in the S&OP plans. In this soft information management process the support system plays a key role. The support system, such as an S&OP tool, ensures that soft information is stored in the right place and kept up to date and relevant. The soft information management process strives to improve the documentation of the relevant soft information behind the S&OP plans in the S&OP support system. The process offers individuals an opportunity to review, comment on and evaluate soft information created by themselves or by others. In the case company it was noticed that soft information that is not properly documented and distributed causes mistrust towards the planning.
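
As a purely hypothetical illustration of the categorization and storing phases described above, the sketch below shows one way a support system could record a piece of soft information tagged as an assumption, risk or opportunity and attached to a plan item. All class and field names are invented for this example and are not taken from the case company's tool.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class SoftInfoCategory(Enum):
    ASSUMPTION = "assumption"
    RISK = "risk"
    OPPORTUNITY = "opportunity"

@dataclass
class SoftInfoRecord:
    """One piece of soft information stored alongside an S&OP plan item."""
    plan_item: str                   # e.g. a product family or market in the plan
    category: SoftInfoCategory       # categorization: assumption / risk / opportunity
    description: str                 # the reasoning behind the numbers
    author: str
    created: date
    valid_until: date                # supports keeping the information up to date
    comments: list = field(default_factory=list)  # review and evaluation by other users

# Storing phase: records are kept in a common support system (here just a list).
store = []
store.append(SoftInfoRecord(
    plan_item="Product family A, region X",
    category=SoftInfoCategory.RISK,
    description="Key supplier capacity may be constrained in Q3",
    author="demand planner",
    created=date(2015, 5, 1),
    valid_until=date(2016, 5, 1),
))
store[0].comments.append("Supply planner: mitigation via a second source under review")
```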

Abstract:

The currently widely accepted consensus is that greenhouse gas emissions produced by mankind have to be reduced in order to avoid further global warming. The European Union has set a variety of CO2 reduction and renewable generation targets for its member states. The current energy system in the Nordic countries is one of the most carbon-free in the world, but the aim is to achieve a fully carbon-neutral energy system. The objective of this thesis is to consider the role of nuclear power in the future energy system. Nuclear power is a low-carbon energy technology because it produces virtually no air pollutants during operation. In this respect, nuclear power is suitable for a carbon-free energy system. In this master's thesis, the basic characteristics of nuclear power are presented and compared to fossil-fuelled and renewable generation. The Nordic energy systems and different scenarios for 2050 are modelled. Using the models and the information about the basic characteristics of nuclear power, an opinion is formed about its role in the future energy system of the Nordic countries. The model shows that it is possible to form a carbon-free Nordic energy system. The Nordic countries benefit from a large hydropower capacity, which helps to offset the fluctuating nature of wind power. Biomass-fuelled generation and nuclear power provide stable and predictable electricity throughout the year. Nuclear power offers better energy security and security of supply than fossil-fuelled generation, and it is competitive with other low-carbon technologies.

Abstract:

Repowering existing power plants by replacing coal with biomass might offer an interesting option to ease the transition from fossil fuels to renewable energy sources and promote a further expansion of bioenergy in Europe, on account of the potential to decrease greenhouse gas emissions as well as other pollutants (SOx, NOx, etc.). In addition, a great part of the appeal of repowering projects comes from the opportunity to reuse the vast existing investment and infrastructure associated with coal-based power generation. Even so, only a limited number of experiences with repowering have been reported. Therefore, efforts are required to produce technical and scientific evidence to determine whether this technology can be considered feasible for adoption under European conditions. A detailed evaluation of the technical and economic aspects of this technology constitutes a powerful tool for decision makers defining the energy future of Europe. To better illustrate this concept, a case study is analyzed. A Slovakian pulverized coal plant was used as the basis for determining the effects on performance, operation, maintenance and cost when the fuel is shifted to biomass. It was found that biomass fuel properties play a crucial role in plant repowering. Furthermore, the results demonstrate that this technology offers renewable energy with low pollutant emissions at the cost of reduced capacity, a relatively high levelized cost of electricity and, sometimes, a maintenance-intensive operation. Lastly, even though existing equipment can for the most part be reutilized, extensive additions and modifications may be required to ensure safe operation and acceptable performance.
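
The abstract does not give the underlying cost figures; as a reminder of how the levelized cost of electricity mentioned above is conventionally computed, the sketch below implements the standard discounted LCOE ratio. The discount rate and cash-flow inputs are placeholder assumptions, not results from the case study.

```python
def lcoe(investment, annual_costs, annual_energy_mwh, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime electricity production (currency units per MWh).

    investment        -- upfront repowering cost at year 0
    annual_costs      -- yearly O&M and fuel costs for years 1..n
    annual_energy_mwh -- yearly net generation for years 1..n
    """
    disc_costs = investment + sum(
        c / (1 + discount_rate) ** (t + 1) for t, c in enumerate(annual_costs))
    disc_energy = sum(
        e / (1 + discount_rate) ** (t + 1) for t, e in enumerate(annual_energy_mwh))
    return disc_costs / disc_energy

# Placeholder numbers only (20-year life, 8% discount rate): roughly 42 per MWh.
print(lcoe(investment=50e6,
           annual_costs=[20e6] * 20,
           annual_energy_mwh=[600_000] * 20,
           discount_rate=0.08))
```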

Abstract:

Recovery boilers are built all over the world. The roots of recovery technology go back further than recovery boilers themselves, but it was not until the invention of the recovery boiler before the Second World War that pulping technology was revolutionized. This led to a long period of development of essentially the same type of equipment, culminating in units that are the largest biofuel boilers in the world. Early recovery technology concentrated on chemical recovery, as chemicals cost money and recycling them improved the profitability of pulp manufacture. For pulp mills, the significance of electricity generation from the recovery boiler was long secondary; the most important design criterion for the recovery boiler was high availability. Electricity generation in the recovery boiler process can be increased by elevating the main steam pressure and temperature, by firing black liquor at higher dry solids content, and by improving the steam cycle. This has been done in modern Scandinavian units.

Abstract:

Process management refers to improving the key functions of a company. The main functions of the case company (project management, procurement, finance, and human resources) currently use their own separate systems. The case company is in the process of changing its software, and in the future the different functions will use the same system. This software change alters some of the company’s processes, the project cash flow forecasting process being one of them. Cash flow forecasting ensures the sufficiency of funds and prepares the company for possible changes in the future, which helps to ensure the company’s viability. The purpose of the research is to describe the new project cash flow forecasting process. In addition, the aim is to analyze, through process measurement, the impacts of the process change on the project control department’s workload and resources, and how these impacts should be taken into account in the department’s future operations. The research is based on process management. Processes, their descriptions, and the way process management uses this information are discussed in the theory part of this research, which is based on literature and articles. Project cash flow and the benefits related to forecasting are also discussed. After this, the as-is and to-be project cash flow forecasting processes are described by utilizing information obtained from the theoretical part as well as the know-how of the project control department’s personnel. Written descriptions and cross-functional flowcharts are used for the descriptions. Process measurement is based on interviews with the personnel, mainly cost controllers and department managers. The process change and the integration of the two processes will free up working time for other tasks, for example cost analysis. In addition, the quality of the cash flow information will improve compared to the as-is process. When analyzing the department’s other main processes, the department’s roles and their responsibilities should also be reviewed and redesigned. This way, there will be an opportunity to achieve the best possible efficiency and cost savings.

Abstract:

Given the loss of therapeutic efficacy associated with the development of resistance to lamivudine (LMV) and the availability of new alternative treatments for chronic hepatitis B patients, early detection of viral genotypic resistance could allow the clinician to consider therapy modification before viral breakthrough and biochemical relapse occur. To this end, 28 LMV-treated patients (44 ± 12 years; 24 men), on their first therapy schedule, were monitored monthly at four Brazilian centers for the emergence of drug resistance using the reverse hybridization-based INNO-LiPA HBV DR assay and, occasionally, sequencing (two cases). Positive viral responses (HBV DNA clearance) after 6, 12, and 18 months of therapy were achieved by 57, 68, and 53% of patients, while biochemical responses (serum alanine aminotransferase normalization) were observed in 82, 82, and 53% of cases. All viral breakthrough cases (N = 8) were related to the emergence of YMDD variants, observed in 7, 21, and 35% of patients at 6, 12, and 18 months, respectively. The emergence of these variants was not associated with viral genotype, HBeAg expression status, or pretreatment serum alanine aminotransferase levels. Resistance-associated mutations were detected before the corresponding biochemical flare occurred in the same individuals (41 ± 14 and 60 ± 15 weeks, respectively). Thus, if highly sensitive LMV drug resistance testing is carried out at frequent and regular intervals, the relatively long period (19 ± 2 weeks) between the emergence of viral resistance and the onset of biochemical relapse can provide clinicians with ample time to re-evaluate drug therapy.

Abstract:

The discovery of non-adrenergic, non-cholinergic neurotransmission in the gut and bladder in the early 1960s is described, as well as the identification of adenosine 5'-triphosphate (ATP) as a transmitter in these nerves in the early 1970s. The concept of purinergic cotransmission was formulated in 1976, and it is now recognized that ATP is a cotransmitter in all nerves in the peripheral and central nervous systems. Two families of receptors for purines were recognized in 1978: P1 (adenosine) receptors and P2 receptors sensitive to ATP and adenosine diphosphate (ADP). Cloning of these receptors in the early 1990s was a turning point in the acceptance of the purinergic signalling hypothesis, and there are currently 4 subtypes of P1 receptors, 7 subtypes of P2X ion channel receptors and 8 subtypes of P2Y G protein-coupled receptors. Both short-term purinergic signalling in neurotransmission, neuromodulation and neurosecretion and long-term (trophic) purinergic signalling of cell proliferation, differentiation, motility and death in development and regeneration are recognized. Much is now known about the mechanisms underlying ATP release and its extracellular breakdown by ecto-nucleotidases. The recent emphasis on purinergic neuropathology is discussed, including changes in purinergic cotransmission in development and ageing and in bladder diseases and hypertension. The involvement of neuron-glial cell interactions in various diseases of the central nervous system, including neuropathic pain, trauma and ischemia, neurodegenerative diseases, neuropsychiatric disorders and epilepsy, is also considered.

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to be able to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was further extended so that it could be implemented on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, the models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
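
The point about nested cross-validation can be made concrete with a short sketch. The following is a minimal, hypothetical example (not the thesis code) in which filter-type feature selection and hyperparameter tuning are kept inside the inner loop of a nested cross-validation, so that the outer loop gives an unbiased estimate of prediction accuracy. The data are synthetic and the parameter grid is illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a genotype matrix: many features, few informative.
X, y = make_classification(n_samples=300, n_features=2000,
                           n_informative=20, random_state=0)

# Filter-type feature selection is part of the pipeline, so it is re-fitted
# inside every cross-validation fold (avoiding selection bias / leakage).
pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Inner loop: choose the number of selected variants and the regularization.
inner = GridSearchCV(pipe,
                     param_grid={"select__k": [10, 50, 200],
                                 "clf__C": [0.1, 1.0, 10.0]},
                     cv=3)

# Outer loop: estimate the generalization accuracy of the whole procedure.
scores = cross_val_score(inner, X, y, cv=5)
print(scores.mean())
```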