939 results for Markov process modeling


Relevance: 80.00%

Abstract:

This demo reviews basic text mining techniques using RapidMiner. RapidMiner's basic characteristics and text mining operators are described, and a text mining example using the Naive Bayes algorithm together with process modeling is presented.
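
The Naive Bayes classification step mentioned above can also be sketched outside RapidMiner. Below is a minimal pure-Python multinomial Naive Bayes classifier on a toy corpus; the documents, labels, and class names are invented for illustration and are not taken from the demo:

```python
import math
from collections import Counter, defaultdict

# Toy training corpus of (document, label) pairs -- invented for illustration.
train = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting agenda attached", "ham"),
    ("project status meeting notes", "ham"),
]

# Count word frequencies per class and class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for doc, label in train:
    class_counts[label] += 1
    word_counts[label].update(doc.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(doc):
    """Return the class maximizing the log-posterior, with Laplace smoothing."""
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in doc.split():
            # Laplace smoothing: +1 per word keeps unseen words from
            # zeroing out a class's posterior.
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(predict("buy cheap pills"))   # classified as spam on this toy corpus
print(predict("status meeting"))    # classified as ham on this toy corpus
```

The same structure (tokenize, count per class, score with smoothed log-probabilities) underlies Naive Bayes text mining regardless of the tool used to run it.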

Relevance: 80.00%

Abstract:

2000 Mathematics Subject Classification: 49L60, 60J60, 93E20.

Relevance: 80.00%

Abstract:

The production of recombinant therapeutic proteins is an active area of research in drug development. These bio-therapeutic drugs target nearly 150 disease states and promise to bring better treatments to patients. However, if new bio-therapeutics are to be made more accessible and affordable, improvements in production performance and optimization of processes are necessary. A major challenge lies in controlling the effect of process conditions on the production of intact functional proteins. To achieve this, improved tools are needed for bio-processing. For example, implementation of process modeling and high-throughput technologies can be used to achieve quality by design, leading to improvements in productivity. Commercially, the most sought-after targets are secreted proteins, due to their ease of handling in downstream procedures. This chapter outlines different approaches for the production and optimization of secreted proteins in the host Pichia pastoris. © 2012 Springer Science+Business Media, LLC.

Relevance: 80.00%

Abstract:

There is a growing societal need to address the increasing prevalence of behavioral health issues, such as obesity, alcohol or drug use, and a general lack of treatment adherence for a variety of health problems. The statistics, worldwide and in the USA, are daunting. Excessive alcohol use is the third leading preventable cause of death in the United States (with 79,000 deaths annually), and is responsible for a wide range of health and social problems. On the positive side, though, these behavioral health issues (and associated possible diseases) can often be prevented with relatively simple lifestyle changes, such as losing weight through diet and/or physical exercise, or learning how to reduce alcohol consumption. Medicine has therefore started to move toward preventively promoting wellness rather than solely treating already established illness. Evidence-based, patient-centered Brief Motivational Interviewing (BMI) interventions have been found particularly effective in helping people find intrinsic motivation to change problem behaviors after short counseling sessions, and in maintaining healthy lifestyles over the long term. A lack of locally available personnel well-trained in BMI, however, often limits access to successful interventions for people in need. To fill this accessibility gap, Computer-Based Interventions (CBIs) have started to emerge. Success of CBIs, however, critically relies on ensuring engagement and retention of CBI users so that they remain motivated to use these systems and come back to use them over the long term as necessary. Because of their text-only interfaces, current CBIs can only express limited empathy and rapport, which are among the most important factors of health interventions. Fortunately, in the last decade, computer science research has progressed in the design of simulated human characters with anthropomorphic communicative abilities.
Virtual characters interact using humans’ innate communication modalities, such as facial expressions, body language, speech, and natural language understanding. By advancing research in Artificial Intelligence (AI), we can improve the ability of artificial agents to help us solve CBI problems. To facilitate successful communication and social interaction between artificial agents and human partners, it is essential that aspects of human social behavior, especially empathy and rapport, be considered when designing human-computer interfaces. Hence, the goal of the present dissertation is to provide a computational model of rapport to enhance an artificial agent’s social behavior, and to provide an experimental tool for the psychological theories shaping the model. Parts of this thesis were already published in [LYL+12, AYL12, AL13, ALYR13, LAYR13, YALR13, ALY14].

Relevance: 80.00%

Abstract:

This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixel count, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.

This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of information in the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.

Relevance: 80.00%

Abstract:

This paper presents a model for availability analysis of a standalone hybrid microgrid. The microgrid used in the study consists of wind, solar, storage, and a diesel generator. In the proposed method, a Boolean-driven Markov process is used to model the availability of the system. By modifying the developed model, the relationship between system availability and the durations of fine (normal) and disturbed (stormy) weather is analyzed. The effects of different converter technologies on the availability of the standalone microgrid were investigated, and the results show that the availability of the microgrid increased by 5.80% when a storage system is added. On the other hand, the availability of the standalone microgrid could be overestimated by 3.56% when the weather factor is neglected. Likewise, 200, 500, and 1000 hours of disturbed weather reduced the availability of the system by 5.36%, 9.73%, and 13.05%, respectively. In addition, the hybrid energy storage cascade topology with a capacitor in the middle maximized system availability.
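
A Boolean-driven Markov approach combines two standard ingredients: the steady-state availability of each repairable component comes from a small Markov chain, and the Boolean structure (series/parallel logic) combines component availabilities into a system figure. The following is a minimal sketch of that idea only, with invented failure and repair rates rather than values from the study:

```python
# Two-state Markov model of one repairable component: Up --lam--> Down,
# Down --mu--> Up. Rates below are illustrative assumptions.
lam = 0.001   # failure rate, per hour (assumed)
mu = 0.05     # repair rate, per hour (assumed)

# Steady-state availability of the two-state chain: A = mu / (lam + mu).
A_single = mu / (lam + mu)

# Boolean structure combines independent components:
def series(*avails):
    """All components must work (availabilities multiply)."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(*avails):
    """At least one component must work (unavailabilities multiply)."""
    q = 1.0
    for a in avails:
        q *= (1.0 - a)
    return 1.0 - q

# Hypothetical system: two redundant sources feeding one converter.
A_source = parallel(A_single, A_single)
A_system = series(A_source, A_single)
print(A_single, A_source, A_system)
```

Adding a redundant (parallel) component raises availability while adding a series element lowers it, which is the qualitative mechanism behind effects such as the reported availability gain from adding storage.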

Relevance: 80.00%

Abstract:

Queueing Theory is the mathematical study of queues or waiting lines. Queues abound in everyday life: in computer networks, in traffic islands, in the transmission of electromagnetic signals, in telephone exchanges, at bank counters, at supermarket checkouts, in doctors' clinics, at petrol pumps, in offices where paperwork has to be processed, and in many other places. Originating with the published work of A. K. Erlang in 1909 [16] on congestion in telephone traffic, Queueing Theory has grown tremendously in a century. Its wide range of applications includes Operations Research, Computer Science, Telecommunications, Traffic Engineering, Reliability Theory, etc.
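
For the simplest such system, the M/M/1 queue (Poisson arrivals, a single exponential server), the steady-state performance measures have closed forms. A quick sketch with assumed arrival and service rates:

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu,
# one server. Standard steady-state formulas; the rates are assumptions.
lam, mu = 2.0, 5.0          # e.g. 2 arrivals/min, 5 services/min

rho = lam / mu              # server utilization (must be < 1 for stability)
L = rho / (1 - rho)         # mean number of customers in the system
W = 1 / (mu - lam)          # mean time in the system
Lq = rho**2 / (1 - rho)     # mean number waiting in the queue
Wq = rho / (mu - lam)       # mean waiting time in the queue

# Little's law ties the pairs together: L = lam * W and Lq = lam * Wq.
print(rho, L, W, Lq, Wq)
```

With these rates the server is busy 40% of the time, yet customers already spend three times the bare service time in the system on average, which is the kind of congestion effect Erlang's work quantified.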

Relevance: 80.00%

Abstract:

The process of building Data Warehouses (DW) is well known, with well-defined stages, but at the same time it is mostly carried out manually by IT people in conjunction with business people. Web Warehouses (WW) are DW whose data sources are taken from the web. We define a flexible WW, which can be configured according to different domains through the selection of web sources and the definition of data processing characteristics. A Business Process Management (BPM) system allows modeling and executing Business Processes (BPs), providing support for process automation. To support the process of building flexible WW we propose two BP levels: a configuration process that supports the selection of web sources and the definition of schemas and mappings, and a feeding process that takes the defined configuration and loads the data into the WW. In this paper we present a proof of concept of both processes, with a focus on the configuration process and the defined data.

Relevance: 80.00%

Abstract:

Supply chains have become an important focus for competitive advantage. The performance of a company increasingly depends on its ability to maintain effective and efficient relationships with its suppliers and customers. The extended enterprise (i.e. one composed of several partners) needs to be formed dynamically in order to be agile and adaptable. According to the Digital Manufacturing paradigm, companies have to be able to quickly share and disseminate information regarding the planning, design, and manufacturing of products. Additionally, they must be responsive to all technical and business determinants, as well as be assessed and certified for guaranteed performance. The current research presents a solution for the dynamic composition of the extended enterprise, formed to take advantage of market opportunities quickly and efficiently. A construction model was developed, consisting of an information model, a protocol model, and a process model. The information model is defined based on the concepts of the Supply Chain Operations Reference model (SCOR®). It defines the information used to negotiate the participation of candidate companies in the dynamic establishment of a network for responding to a given demand to develop and manufacture products, in seven steps: request for information; request for qualification; alignment of strategy; request for proposal; request for quotation; compatibility of process; and compatibility of system. The protocol model, inspired by the OSI reference model, provides a framework for linking customers and suppliers and indicates the sequence to be followed in order to select companies to become suppliers. The process model was specified through process modeling according to the BPMN standard and, in turn, implemented as a web-based application that runs the process through its several steps, using forms to gather data.
An application example in the context of the oil and gas industry is used for demonstrating the solution concept.

Relevance: 80.00%

Abstract:

Part 20: Health and Care Networks

Relevance: 80.00%

Abstract:

Over 2 million Anterior Cruciate Ligament (ACL) injuries occur annually worldwide, resulting in considerable economic and health burdens (e.g., suffering, surgery, loss of function, risk of re-injury, and osteoarthritis). Current screening methods are effective, but they generally rely on expensive and time-consuming biomechanical movement analysis and are thus impractical solutions. In this dissertation, I report on a series of studies that begins to investigate one potentially efficient alternative to biomechanical screening, namely skilled observational risk assessment (e.g., having experts estimate risk based on observations of athletes' movements). Specifically, in Study 1 I discovered that ACL injury risk can be accurately and reliably estimated with nearly instantaneous visual inspection when observed by skilled and knowledgeable professionals. Modern psychometric optimization techniques were then used to develop a robust and efficient 5-item test of ACL injury risk prediction skill: the ACL Injury-Risk-Estimation Quiz, or ACL-IQ. Study 2 cross-validated the results from Study 1 in a larger representative sample of both skilled (Exercise Science/Sports Medicine) and unskilled (General Population) groups. In accord with research on human expertise, quantitative structural and process modeling of risk estimation indicated that superior performance was largely mediated by specific strategies and skills (e.g., ignoring irrelevant information), independent of domain-general cognitive abilities (e.g., mental rotation, general decision skill). These cognitive models suggest that ACL-IQ is a trainable skill, providing a foundation for future research and applications in training, decision support, and ultimately clinical screening investigations. Overall, I present the first evidence that observational ACL injury risk prediction is possible, including a robust technology for fast, accurate, and reliable measurement: the ACL-IQ.
Discussion focuses on applications and outreach, including a web platform that was developed to house the test, provide a repository for further data collection, and increase public and professional awareness and outreach (www.ACL-IQ.org). Future directions and general applications of the skilled movement analysis approach are also discussed.

Relevance: 50.00%

Abstract:

Financial time series have a tendency to change their behavior abruptly and maintain the new behavior for several consecutive periods, and commodity futures returns are no exception. This property suggests that nonlinear models, as opposed to linear models, can more accurately describe returns and volatility. Markov regime-switching models are able to capture this behavior and have become a popular way to model financial time series. This study uses a Markov regime-switching model to describe the behavior of energy futures returns at the level of individual commodities, because studies show that commodity futures are a heterogeneous asset class. The purpose of this thesis is twofold: first, to determine how many regimes characterize individual energy commodities' returns at different return frequencies; second, to study the characteristics of these regimes. We extend previous studies on the subject in two ways: we allow for the possibility that the number of regimes may exceed two, and we conduct the research on individual commodities rather than on commodity indices or their subgroups. We use daily, weekly, and monthly time series of Brent crude oil, WTI crude oil, natural gas, heating oil, and gasoil futures returns over 1994–2014, where available, to carry out the study. We apply the likelihood ratio test to determine the sufficient number of regimes for each commodity and data frequency. The time series are then modeled with a Markov regime-switching model to obtain the return distribution characteristics of each regime, as well as the transition probabilities of moving between regimes. The results for the number of regimes suggest that daily energy futures return series consist of three to six regimes, whereas weekly and monthly returns for all energy commodities display only two regimes. When the number of regimes exceeds two, the time series of energy commodities tend to form groups of regimes.
These groups are usually quite persistent as a whole, because the probability of a regime switch inside the group is high. However, individual regimes in these groups are not persistent, and the process oscillates between them frequently. Regimes that are not part of any group are generally persistent but show low ergodic probability, i.e. they rarely prevail in the market. This study also suggests that energy futures return series characterized by two regimes do not necessarily display persistent bull and bear regimes. In fact, for the majority of time series, the bearish regime is considerably less persistent.
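
The regime persistence discussed above is governed by the diagonal of the transition matrix: the expected duration of regime i is 1/(1 − P[i][i]). The following pure-Python sketch (with invented transition probabilities, not the thesis's estimates) simulates a two-regime Markov chain and recovers the matrix by maximum likelihood, i.e. row-normalized transition counts:

```python
import random

random.seed(42)

# Illustrative two-regime transition matrix: P[i][j] is the probability of
# moving from regime i to regime j. Expected durations: 1/0.05 = 20 periods
# in regime 0, 1/0.10 = 10 periods in regime 1.
P = [[0.95, 0.05],   # regime 0: "calm" (assumed)
     [0.10, 0.90]]   # regime 1: "turbulent" (assumed)

# Simulate a long path of regimes.
n = 100_000
states = [0]
for _ in range(n - 1):
    s = states[-1]
    states.append(0 if random.random() < P[s][0] else 1)

# Maximum-likelihood estimate of P: count transitions, normalize each row.
counts = [[0, 0], [0, 0]]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1
P_hat = [[c / sum(row) for c in row] for row in counts]
print(P_hat)
```

In a full regime-switching model the regime path is hidden and must be inferred from returns, so estimation uses the likelihood of the observed series rather than direct counts, but the recovered transition matrix plays exactly this role.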

Relevance: 50.00%

Abstract:

While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications has remained unknown.

In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
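
The CTMC abstraction can be illustrated independently of the chemistry. In the sketch below (all rates are invented), transient states stand in for exciton positions and an absorbing state for photon emission; the simulated absorption times then follow a phase-type distribution, just as the photon arrival times of a RET network do:

```python
import random

random.seed(1)

# Illustrative CTMC: transient states 0 and 1 model exciton hops between
# chromophores; state 2 is absorbing (photon emission). Q[i][j] is the
# transition rate from state i to state j (rates are assumptions).
Q = {
    0: {1: 2.0, 2: 0.5},   # from state 0: hop to 1, or emit
    1: {0: 1.0, 2: 3.0},   # from state 1: hop back to 0, or emit
}
ABSORBING = 2

def sample_absorption_time():
    """Gillespie-style simulation: exponential holding times, rate-weighted jumps."""
    t, s = 0.0, 0
    while s != ABSORBING:
        rates = Q[s]
        total = sum(rates.values())
        t += random.expovariate(total)        # holding time in state s
        u, acc = random.random() * total, 0.0 # choose next state by rate
        for nxt, r in rates.items():
            acc += r
            if u <= acc:
                s = nxt
                break
    return t

# Times to absorption of a finite CTMC follow a phase-type distribution.
samples = [sample_absorption_time() for _ in range(50_000)]
mean_t = sum(samples) / len(samples)
print(mean_t)
```

For these rates, solving the first-step equations m0 = 0.4 + 0.8·m1 and m1 = 0.25 + 0.25·m0 gives an analytic mean absorption time of 0.75, which the simulated mean approaches.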

By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.

Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms for wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as specialized functional units, or organized as a discrete accelerator, to bring substantial speedups and power savings.

Relevance: 40.00%

Abstract:

Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the laminar, steady flow of a Newtonian, incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as varying with axial position. A computational methodology with global error control was applied to determine the wall concentration and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature. (C) 2007 Elsevier B.V. All rights reserved.
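
The species conservation equation referred to above typically takes the following form in cylindrical coordinates (standard notation; the symbols are assumed rather than copied from the paper):

```latex
% Convective-diffusive species conservation in a tube (cylindrical
% coordinates, axisymmetric, steady; standard form with assumed symbols):
%   u, v : axial and radial velocity components
%   C    : solute concentration
%   D    : diffusion coefficient
u \frac{\partial C}{\partial x} + v \frac{\partial C}{\partial r}
  = \frac{D}{r} \frac{\partial}{\partial r}
    \left( r \frac{\partial C}{\partial r} \right)
```

The equation is parabolic in the axial coordinate x, which is what makes marching-type solution methods (and the GITT expansion in r) applicable.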

Relevance: 40.00%

Abstract:

This work presents a mathematical model of the vinyl acetate and n-butyl acrylate emulsion copolymerization process in batch reactors. The model is able to explain the effects of simultaneous changes in emulsifier concentration, initiator concentration, monomer-to-water ratio, and monomer feed composition on monomer conversion, copolymer composition and, to a lesser extent, average particle size evolution histories. The main features of the system, such as the increase in the rate of polymerization as temperature, emulsifier concentration, and initiator concentration increase, are correctly represented by the model. The model accounts for the basic features of the process and may be useful for practical applications, despite its simplicity and reduced number of adjustable parameters.
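
Instantaneous copolymer composition in copolymerization models is commonly described by the Mayo-Lewis equation; as a standard reference point (not claimed to be this paper's exact formulation):

```latex
% Mayo-Lewis (copolymer composition) equation, standard form:
%   F_1      : instantaneous mole fraction of monomer 1 in the copolymer
%   f_1, f_2 : mole fractions of unreacted monomers (f_2 = 1 - f_1)
%   r_1, r_2 : reactivity ratios
F_1 = \frac{r_1 f_1^2 + f_1 f_2}{r_1 f_1^2 + 2 f_1 f_2 + r_2 f_2^2}
```

Because the monomers are consumed at different rates, f_1 drifts during a batch, so tracking composition over conversion requires integrating this relation along the reaction path.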