963 results for cumulative sum


Relevância:

60.00%

Publicador:

Resumo:

The Thermal Response Test (TRT) (Mogensen, 1983) is the most accurate test available for characterizing the shallow geothermal reservoir. The test consists of an in-situ simulation of the operation of a closed-loop geothermal energy system for a limited period of time, through the injection or extraction of heat at constant power within the Borehole Heat Exchanger (BHE). From the analysis of the variation of the circulating fluid temperatures, it is possible to estimate the average thermal properties of the volume of the geothermal reservoir affected by the test. The main quantities for characterizing a geothermal reservoir are the thermal conductivity (λ), the volumetric heat capacity (c), the undisturbed ground temperature (Tg), and the borehole thermal resistance (Rb); their determination is necessary for the correct design of borehole heat exchangers. The TRT results are, however, sensitive to spatio-temporal boundary conditions such as variations in ground temperature, groundwater movement, weather conditions, seasonal events, etc. This work aims to: i) introduce a study of the problems of characterizing the shallow geothermal reservoir, in particular analyzing the effect that groundwater movement has on the thermal parameters; ii) analyze the sensitivity of the test results to the variability of the characteristic operating parameters of the equipment. Part of the work for my thesis was carried out over a period of four months at "Groenholland Geo Energy systems", based in Amsterdam, the Netherlands.
Three different experiments were carried out on the same site (known soil stratigraphy: clay, fine sand, and coarse sand) using a 30-m-deep borehole heat exchanger and several wells for water extraction and for monitoring the effects in the vicinity of the exchanger. The results of the experiments differed greatly, not only in terms of the recorded data (heat-carrier fluid temperatures) but also in terms of the parameter values obtained by processing the data. In particular, it is not sufficient to adopt the classical Infinite Line Source (ILS) model (Ingersoll and Plass, 1948), which describes conductive heat transfer in an unbounded homogeneous medium at constant temperature. In fact, heat exchange also occurs through convection caused by groundwater movement, which cannot be identified using classical approaches such as the CUSUM test (Cumulative Sum test) (Brown et al., 1975). This thesis aims to provide a reference framework for correlating the variability of the results with the variability of the boundary conditions. The analysis integrates the classical methods (ILS) with a geostatistical approach useful for understanding the phenomena and fluctuations that characterize the test. The study of the main variables and parameters of the test, such as the inlet and outlet temperatures of the heat-carrier fluid, the fluid flow rate, and the injected or extracted power, was carried out using: the temporal variogram, i.e. the semivariance of the increments, which expresses the type of temporal autocorrelation of the variable under study; and the temporal cross-covariance, i.e. the covariance between two variables of the system, which quantitatively defines their degree of correlation as a function of their time lag.
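The temporal variogram and cross-covariance described here can be computed directly from the recorded series; the following is a minimal numerical sketch (the synthetic temperature series, its period, and all numeric values are illustrative assumptions, not the thesis data):

```python
import numpy as np

def temporal_variogram(x, max_lag):
    """Experimental temporal variogram: gamma(h) = 0.5 * mean((x[t+h] - x[t])^2)."""
    return np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2) for h in range(1, max_lag + 1)])

def cross_covariance(x, y, max_lag):
    """Temporal cross-covariance between two series for time lags 0..max_lag."""
    x, y = x - x.mean(), y - y.mean()
    return np.array([np.mean(x[: len(x) - h] * y[h:]) for h in range(max_lag + 1)])

# Synthetic fluid-temperature series: drift + periodic component + noise,
# a stand-in for TRT measurements.
t = np.arange(500)
temp = (0.002 * t + 0.5 * np.sin(2 * np.pi * t / 25)
        + np.random.default_rng(0).normal(0, 0.05, 500))
gamma = temporal_variogram(temp, 60)
```

A periodic component in the series shows up as an oscillation of the experimental variogram with the same period, which is how the heat-pump periodicities discussed below become visible.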
The proposed geostatistical approach considers the fluid temperature Tf as a random function (RF) that is non-stationary in time (Chilès, 1999), whose trend is formally defined but must be identified numerically. A classical residual model is therefore adopted, in which the RF is modeled as the sum of a deterministic term, the mean (expected value) m(t), coinciding with the model described by the infinite line source theory, and a random term, the fluctuation Y(t). The flow rate and power variables are instead considered random functions that are stationary in time, i.e. with constant mean. This thesis work reached conclusions that are very important for the study of the TRT. Comparison between heat-extraction experiments with and without groundwater movement: the effect induced by groundwater on the TRT is studied. It is possible to quantitatively characterize the increase in equivalent thermal conductivity associated with convective phenomena due to groundwater movement. Moreover, the experimental variograms reveal similar periodicities in the two cases, linked to the operation of the heat pump and its associated components and to the circulation of the heat-carrier fluid inside the borehole. However, the advective component dampens the small periodicities of the variograms while increasing the amplitude of the larger periodicities, because the heat pump must supply more energy to the system to balance the losses caused by groundwater movement. Comparison between heat extraction and heat injection with groundwater movement: the significance of the results in the two cases is studied. The variographic analysis reveals significant differences in the structure of the experimental variograms.
In particular, in the heat-injection test the experimental variograms of the temperatures have systematically lower values, which ensures better precision in the estimation of the thermal parameters; performing the TRT in heat-injection mode is therefore more precise. The analysis of the experimental variograms of the individual variables, such as the fluid temperature at the inlet and outlet of the borehole heat exchanger, confirmed the damping of the oscillations by the ground. The analysis of the individual test variables (temperatures, power, flow rate) confirmed the temporal independence between flow rates and temperatures, as shown by the different structures of the direct variograms and by cross-covariances close to zero. Using correlograms, it was shown that the time taken by the heat-carrier fluid to circulate through the borehole can be calculated. The geostatistical analysis thus made it possible to study in detail the sensitivity of the TRT results to the different boundary conditions, both those related to the reservoir and those related to the operation of the equipment.
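For context, the classical ILS evaluation that this work integrates with geostatistics estimates the effective thermal conductivity from the slope of the mean fluid temperature against ln(t); a minimal sketch on synthetic data (the heat rate, test duration, and true conductivity are invented for illustration):

```python
import numpy as np

def ils_conductivity(time_s, fluid_temp, q_per_m):
    """Estimate thermal conductivity lambda (W/m/K) from the slope of T vs ln(t).
    ILS late-time approximation: T(t) ~ (q / (4*pi*lambda)) * ln(t) + const."""
    slope, _ = np.polyfit(np.log(time_s), fluid_temp, 1)
    return q_per_m / (4.0 * np.pi * slope)

# Synthetic TRT: true lambda = 2.0 W/m/K, q = 50 W/m injected at constant power.
lam_true, q = 2.0, 50.0
t = np.linspace(3600, 72 * 3600, 500)              # 1 h to 72 h, in seconds
temp = q / (4 * np.pi * lam_true) * np.log(t) + 12.0
lam_est = ils_conductivity(t, temp, q)              # recovers ~2.0 on clean data
```

On field data the same fit is applied only to the late-time window, and, as the thesis shows, groundwater advection biases the recovered value upward.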

BACKGROUND: Robotic-assisted laparoscopic surgery (RALS) is evolving as an important surgical approach in the field of colorectal surgery. We aimed to evaluate the learning curve for RALS procedures involving resections of the rectum and rectosigmoid. METHODS: A series of 50 consecutive RALS procedures were performed between August 2008 and September 2009. Data were entered into a retrospective database and later abstracted for analysis. The surgical procedures included abdominoperineal resection (APR), anterior rectosigmoidectomy (AR), low anterior resection (LAR), and rectopexy (RP). Demographic data and intraoperative parameters including docking time (DT), surgeon console time (SCT), and total operative time (OT) were analyzed. The learning curve was evaluated using the cumulative sum (CUSUM) method. RESULTS: The procedures performed for 50 patients (54% male) included 25 AR (50%), 15 LAR (30%), 6 APR (12%), and 4 RP (8%). The mean age of the patients was 54.4 years, the mean BMI was 27.8 kg/m², and the median American Society of Anesthesiologists (ASA) classification was 2. The series had a mean DT of 14 min, a mean SCT of 115.1 min, and a mean OT of 246.1 min. The DT and SCT accounted for 6.3% and 46.8% of the OT, respectively. The SCT learning curve was analyzed. The CUSUM(SCT) learning curve was best modeled as a parabola, with CUSUM(SCT) in minutes equal to 0.73 × (case number)² - 31.54 × (case number) - 107.72 (R = 0.93). The learning curve consisted of three unique phases: phase 1 (the initial 15 cases), phase 2 (the middle 10 cases), and phase 3 (the subsequent cases). Phase 1 represented the initial learning curve, which spanned 15 cases. The phase 2 plateau represented increased competence with the robotic technology. Phase 3 was achieved after 25 cases and represented the mastery phase in which more challenging cases were managed. 
CONCLUSIONS: The three phases identified with CUSUM analysis of surgeon console time represented characteristic stages of the learning curve for robotic colorectal procedures. The data suggest that the learning phase was achieved after 15 to 25 cases.
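The CUSUM construction used for this learning-curve analysis (a running sum of deviations of console time from its overall mean, then a quadratic fit against case number) can be sketched as follows; the synthetic console times are hypothetical, not the study data:

```python
import numpy as np

def cusum_learning_curve(times):
    """CUSUM of operative times: running sum of deviations from the overall mean.
    A falling segment marks faster-than-average cases; fitting a parabola against
    case number locates the phases of the learning curve."""
    times = np.asarray(times, dtype=float)
    cusum = np.cumsum(times - times.mean())
    coeffs = np.polyfit(np.arange(1, len(times) + 1), cusum, 2)  # a*n^2 + b*n + c
    return cusum, coeffs

# Synthetic console times: slow early cases, then a plateau (illustrative only).
rng = np.random.default_rng(1)
sct = np.concatenate([150 - 3 * np.arange(15), np.full(35, 105.0)]) + rng.normal(0, 2, 50)
cusum, (a, b, c) = cusum_learning_curve(sct)
```

By construction the CUSUM returns to roughly zero at the last case, so phase boundaries appear as changes in its slope rather than in its level.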

INTRODUCTION Dexmedetomidine was shown in two European randomized double-blind double-dummy trials (PRODEX and MIDEX) to be non-inferior to propofol and midazolam in maintaining target sedation levels in mechanically ventilated intensive care unit (ICU) patients. Additionally, dexmedetomidine shortened the time to extubation versus both standard sedatives, suggesting that it may reduce ICU resource needs and thus lower ICU costs. Considering resource utilization data from these two trials, we performed a secondary, cost-minimization analysis assessing the economics of dexmedetomidine versus standard care sedation. METHODS The total ICU costs associated with each study sedative were calculated on the basis of total study sedative consumption and the number of days patients remained intubated, required non-invasive ventilation, or required ICU care without mechanical ventilation. The daily unit costs for these three consecutive ICU periods were set to decline toward discharge, reflecting the observed reduction in mean daily Therapeutic Intervention Scoring System (TISS) points between the periods. A number of additional sensitivity analyses were performed, including one in which the total ICU costs were based on the cumulative sum of daily TISS points over the ICU period, and two further scenarios, with declining direct variable daily costs only. RESULTS Based on pooled data from both trials, sedation with dexmedetomidine resulted in lower total ICU costs than using the standard sedatives, with a difference of €2,656 in the median total ICU costs (€11,864 (interquartile range €7,070 to €23,457) versus €14,520 (€7,871 to €26,254)) and €1,649 in the mean total ICU costs. The median (mean) total ICU costs with dexmedetomidine compared with those of propofol or midazolam were €1,292 (€747) and €3,573 (€2,536) lower, respectively. 
The result was robust, indicating lower costs with dexmedetomidine in all sensitivity analyses, including those in which only direct variable ICU costs were considered. The likelihood of dexmedetomidine resulting in lower total ICU costs compared with pooled standard care was 91.0% (72.4% versus propofol and 98.0% versus midazolam). CONCLUSIONS From an economic point of view, dexmedetomidine appears to be a preferable option compared with standard sedatives for providing light to moderate ICU sedation exceeding 24 hours. The savings potential results primarily from shorter time to extubation. TRIAL REGISTRATION ClinicalTrials.gov NCT00479661 (PRODEX), NCT00481312 (MIDEX).
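The sensitivity analysis that prices ICU care by the cumulative sum of daily TISS points reduces to a simple calculation; the per-point cost and the daily scores below are invented for illustration only:

```python
import numpy as np

def icu_cost_from_tiss(daily_tiss_points, cost_per_point):
    """Total ICU cost as the cumulative sum of daily TISS points times a unit
    cost per point, mirroring the TISS-based sensitivity scenario."""
    return float(np.cumsum(daily_tiss_points)[-1] * cost_per_point)

# Hypothetical 5-day ICU stay with declining daily intensity of care.
tiss = [46, 40, 33, 25, 20]
total_cost = icu_cost_from_tiss(tiss, cost_per_point=30.0)  # 164 points at 30 each
```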

The objective of this work was to determine the actual evapotranspiration (ETR) at the regional scale using data from the NOAA-AVHRR meteorological satellite and to compare the results with those calculated from a water-balance simulation model. For the estimation of ETR, 30 images covering the northern oasis of Mendoza were analyzed. From channels C1 (visible) and C2 (near infrared) the normalized difference vegetation index (NDVI) was obtained, which was used to follow the annual evolution of the vegetation, and from the thermal infrared channels (C4 and C5) the surface temperature (Ts) was calculated using the split-window method. The remotely sensed Ts was then related to the air temperature (Ta) in order to compute the cumulative sum of the differences between Ts and Ta, known as the SDD (stress degree day), which provides an overall estimate of regional water-stress conditions. Knowing (Ts - Ta), ETR was estimated from the net radiation and from coefficients A and B, estimated according to the characteristics of the vegetation cover, by applying a simplified relation derived from the energy balance developed by Jackson (1977) and Seguin (1983): ETR = Rn + A - B (Ts - Ta). Subsequently, emissivity values were included in the calculations, and the coefficient B was varied according to land use in each of the polygons into which the study area was divided. In the final stage, the ETR values estimated by the different methods were statistically compared with those simulated by the model, leading to the final conclusion that the estimation of ETR at the regional scale from satellite data fits most cases very well and is simple to compute, so that the methodology developed can easily be extended to other oases in the region.
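The SDD and the simplified energy-balance relation can be sketched numerically; the temperature and net-radiation values below are illustrative placeholders, not the Mendoza data:

```python
import numpy as np

def stress_degree_day(ts, ta):
    """SDD: cumulative sum of surface-minus-air temperature differences."""
    return np.cumsum(np.asarray(ts) - np.asarray(ta))

def etr_simplified(rn, ts, ta, a, b):
    """Actual evapotranspiration from the simplified energy balance
    ETR = Rn + A - B * (Ts - Ta)  (Jackson 1977; Seguin 1983)."""
    return rn + a - b * (ts - ta)

# Illustrative daily values; A and B depend on the vegetation cover.
ts = np.array([30.0, 32.0, 31.0])   # surface temperature, deg C
ta = np.array([27.0, 28.0, 27.5])   # air temperature, deg C
rn = np.array([6.0, 6.5, 6.2])      # net radiation, mm/day equivalent
sdd = stress_degree_day(ts, ta)     # [3.0, 7.0, 10.5]
etr = etr_simplified(rn, ts, ta, a=0.5, b=0.25)
```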

Cognitive wireless sensor network (CWSN) is a new paradigm, integrating cognitive features in traditional wireless sensor networks (WSNs) to mitigate important problems such as spectrum occupancy. Security in cognitive wireless sensor networks is an important problem since these kinds of networks manage critical applications and data. The specific constraints of WSNs make the problem even more critical, and effective solutions have not yet been implemented. The primary user emulation (PUE) attack is the most studied attack specific to the new cognitive features. This work discusses a new approach, based on anomaly behavior detection and collaboration, to detect the primary user emulation attack in CWSN scenarios. Two non-parametric algorithms, suitable for low-resource networks like CWSNs, have been used in this work: the cumulative sum and data clustering algorithms. The comparison is based on characteristics such as detection delay, learning time, scalability, resources, and scenario dependency. The algorithms have been tested using a cognitive simulator that provides important results in this area. Both algorithms have been shown to be valid for detecting PUE attacks, reaching a detection rate of 99% with less than 1% false positives when using collaboration.
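A one-sided CUSUM anomaly detector of the kind described, applied to a received-signal statistic, might look like the following sketch (the signal model, drift, and threshold values are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def cusum_detector(samples, target_mean, drift, threshold):
    """One-sided CUSUM: accumulate positive deviations beyond `drift` and flag
    an anomaly (possible PUE attack) when the sum exceeds `threshold`.
    Returns the index of the first alarm, or -1 if none is raised."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return -1

# Synthetic RSSI-like readings: legitimate primary around mean 0, then an
# emulated primary shifts the mean upward at t = 60 (values are illustrative).
rng = np.random.default_rng(7)
sig = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 40)])
alarm = cusum_detector(sig, target_mean=0.0, drift=0.5, threshold=10.0)
```

The drift term trades detection delay against false positives, which is exactly the detection-delay/resource trade-off the comparison in the paper examines.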

Wireless sensor networks are one of the fastest-growing sectors within wireless networking. The rapid adoption of these networks as a solution for many new applications has led to increasing traffic in the radio spectrum. Because wireless sensor networks operate in the unlicensed Industrial, Scientific and Medical (ISM) bands, the spectrum has become saturated, and within a few years proper operation will no longer be possible. The cognitive radio (CR) paradigm has emerged to address this type of problem. Introducing cognitive capabilities into wireless sensor networks makes it possible to use them for applications with stricter requirements regarding reliability, coverage, or quality of service. Networks that combine all of these features are called cognitive wireless sensor networks (CWSNs). The improved performance of CWSNs allows their use in critical applications where they could not previously be deployed, such as structural monitoring, medical services, military environments, or surveillance. However, these applications also require other features that cognitive radio does not directly provide, such as security. Security in CWSNs is an underdeveloped aspect, since it is not essential for their basic operation in the way that spectrum sensing or collaboration are. Nevertheless, its study and improvement are essential for the growth of CWSNs. This thesis therefore aims to implement countermeasures using the new cognitive capabilities, especially at the physical layer, taking into account the limitations of WSNs. 
In the course of this thesis, two security strategies were developed against attacks of particular importance in cognitive networks: the primary user emulation (PUE) attack and the eavesdropping attack against privacy. To mitigate the PUE attack, a countermeasure based on anomaly detection was developed. Two different algorithms were implemented to detect this attack: the Cumulative Sum algorithm and the Data Clustering algorithm. After verifying their validity, the two were compared and the factors that can affect their performance were investigated. To counter the eavesdropping attack, a countermeasure based on the injection of artificial noise was developed, so that the attacker cannot distinguish information-bearing signals from noise while the communication of interest remains unaffected. The impact of this countermeasure on network resources was also studied. As a parallel result, a test framework for CWSNs was developed, consisting of a simulator and a network of real cognitive nodes. These tools were essential for the implementation and extraction of the results of this thesis. ABSTRACT Wireless Sensor Networks (WSNs) are one of the fastest growing sectors in wireless networks. The fast introduction of these networks as a solution in many new applications has increased the traffic in the radio spectrum. Due to the operation of WSNs in the free industrial, scientific, and medical (ISM) bands, saturation has occurred in these frequencies that will make the same operation methods impossible in the future. Cognitive radio (CR) has appeared as a solution for this problem. The adoption of cognitive features in WSNs allows the use of these networks in applications with higher reliability, coverage, or quality of service requirements. The networks that join all the mentioned features together are called cognitive wireless sensor networks (CWSNs). 
The improvement of the performance of CWSNs allows their use in critical applications where they could not be used before, such as structural monitoring, medical care, military scenarios, or security monitoring systems. Nevertheless, these applications also need other features that cognitive radio does not add directly, such as security. Security in CWSNs has not yet been explored fully because it is not a necessary field for the basic operation of these networks; instead, other fields like spectrum sensing or collaboration have been explored deeply. However, the study of security in CWSNs is essential for their growth. Therefore, the main objective of this thesis is to study the impact of some cognitive radio attacks in CWSNs and to implement countermeasures using new cognitive capabilities, especially in the physical layer and considering the limitations of WSNs. Within the work cycle of this thesis, security strategies against two important kinds of attacks in cognitive networks have been developed. These attacks are the primary user emulation (PUE) attack and the eavesdropping attack. A countermeasure against the PUE attack based on anomaly detection has been developed. Two different algorithms have been implemented: the cumulative sum algorithm and the data clustering algorithm. After the verification of these solutions, they have been compared and the side effects that can disturb their performance have been analyzed. The developed approach against the eavesdropping attack is based on the generation of artificial noise to conceal information messages. The impact of this countermeasure on network resources has also been studied. As a parallel result, a new framework for CWSNs has been developed. This includes a simulator and a real network with cognitive nodes. This framework has been crucial for the implementation and extraction of the results presented in this thesis.

The high prevalence of alcohol consumption among women of reproductive age, combined with unplanned pregnancy, exposes pregnant women to a high risk of drinking at some point during gestation, especially at the beginning of the gestational period, when most of them are not yet aware of the pregnancy. It is therefore extremely important to develop methods for the early detection of newborns at risk of developing problems within the spectrum of disorders related to fetal alcohol exposure. The objective of this study was to develop, validate, and evaluate the efficacy of a method for quantifying fatty acid ethyl esters (FAEEs) in the meconium of newborns for the assessment of fetal alcohol exposure. The FAEEs evaluated were ethyl palmitate, ethyl stearate, ethyl oleate, and ethyl linoleate. The method consisted of sample preparation by liquid-liquid extraction using water, acetone, and hexane, followed by solid-phase extraction with aminopropyl silica cartridges. Separation and quantification of the analytes were performed by gas chromatography coupled to mass spectrometry. The limits of quantification (LOQ) ranged from 50 to 100 ng/g. The calibration curve was linear from the LOQ up to 2000 ng/g for all analytes. Recovery ranged from 69.79% to 106.57%. The analytes were stable in the post-processing assay and in solution. The method was applied to meconium samples from 160 newborns recruited at a public maternity hospital in Ribeirão Preto-SP. Maternal alcohol consumption was assessed using the validated T-ACE and AUDIT screening questionnaires and retrospective reports of the quantity and frequency of alcohol consumed throughout pregnancy. The efficacy of the analytical method in identifying positive cases was determined using the Receiver Operating Characteristic (ROC) curve. 
Risky alcohol consumption was identified by the T-ACE in 31.3% of the participants, and 50% reported alcohol use during pregnancy. FAEEs were present in the meconium of 51.3% of the newborns, and 33.1% showed high concentrations for the sum of the FAEEs (greater than 500 ng/g), compatible with abusive alcohol consumption. Ethyl oleate was the most prevalent biomarker, and ethyl linoleate showed the highest concentrations. There was variability in the FAEE distribution profile among individuals, and there were discrepancies between the presence of FAEEs and the consumption reported by the mother. The total FAEE concentration in meconium proved to be a better indicator of fetal alcohol exposure than any single biomarker. The cutoff for this population was approximately 600 ng/g for binge-type use (three or more drinks per occasion), with a sensitivity of 71.43% and a specificity of 84.37%. This study reinforces the importance of laboratory methods for identifying fetal alcohol exposure.
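The cutoff evaluation behind a ROC analysis of this kind reduces to computing sensitivity and specificity at a candidate threshold; a minimal sketch with invented concentrations (only the 600 ng/g cutoff echoes the study):

```python
import numpy as np

def sensitivity_specificity(concentrations, exposed, cutoff):
    """Classify samples as positive when total FAEEs exceed `cutoff` and
    compare against reported exposure to get sensitivity and specificity."""
    conc = np.asarray(concentrations, dtype=float)
    exposed = np.asarray(exposed, dtype=bool)
    pred = conc > cutoff
    sens = np.mean(pred[exposed])      # true-positive rate among exposed
    spec = np.mean(~pred[~exposed])    # true-negative rate among non-exposed
    return sens, spec

# Toy meconium totals (ng/g) and reported maternal exposure, for illustration.
conc = [1200, 800, 650, 300, 100, 900, 50, 700, 400, 20]
exposed = [True, True, True, True, False, True, False, False, False, False]
sens, spec = sensitivity_specificity(conc, exposed, cutoff=600)
```

Sweeping the cutoff over all observed concentrations and plotting sensitivity against (1 - specificity) yields the ROC curve from which the study's operating point was chosen.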

In January 2001 Greece joined the eurozone. The aim of this article is to examine whether an intention to join the eurozone had any impact on exchange rate volatility. We apply the Iterated Cumulative Sum of Squares (ICSS) algorithm of Inclan and Tiao (1994) to a set of Greek drachma exchange rate changes. We find evidence to suggest that the unconditional volatility of the drachma exchange rate against the dollar, British pound, yen, German mark and ECU/Euro was nonstationary, exhibiting a large number of volatility changes prior to European Monetary Union (EMU) membership. We then use a news archive service to identify the events that might have caused exchange rate volatility to shift. We find that devaluation of the drachma increased exchange rate volatility, but ERM membership and a commitment to joining the eurozone led to lower volatility. Our findings therefore suggest that a strong commitment to join the eurozone may be sufficient to reduce some exchange rate volatility, which has implications for countries intending to join the eurozone in the future.
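The ICSS statistic of Inclan and Tiao is built from the centered cumulative sum of squared returns; a minimal sketch on synthetic data with a single variance break (the series and break location are illustrative, not drachma data):

```python
import numpy as np

def icss_dk(returns):
    """Inclan-Tiao centered cumulative sum of squares D_k = C_k / C_T - k / T,
    where C_k is the cumulative sum of squared returns. The candidate change
    point is argmax |D_k|, and sqrt(T/2) * max|D_k| is compared with the
    asymptotic critical value (about 1.358 at the 5% level)."""
    r = np.asarray(returns, dtype=float)
    T = len(r)
    ck = np.cumsum(r ** 2)
    k = np.arange(1, T + 1)
    dk = ck / ck[-1] - k / T
    kstar = int(np.argmax(np.abs(dk)))
    return dk, kstar, np.sqrt(T / 2.0) * np.abs(dk[kstar])

# Synthetic returns: volatility doubles halfway through (illustrative only).
rng = np.random.default_rng(3)
r = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 2, 300)])
dk, kstar, stat = icss_dk(r)   # kstar should fall near observation 300
```

The full algorithm applies this test iteratively to sub-segments until no further significant variance breaks are found, which is how multiple volatility changes are dated.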

Recent research has suggested that the A and B share markets of China may be informationally segmented. In this paper volatility patterns in the A and B share markets are studied to establish whether volatility changes to the A and B share markets are synchronous. A consequence of new information, when investors act upon it, is that volatility rises. This means that if the A and B markets are perfectly integrated, volatility changes to each market would be expected to occur at the same time. However, if they are segmented there is no reason for volatility changes to occur on the same day. Using the iterated cumulative sum of squares algorithm across the different markets, evidence is found of integration between the two A share markets but not between the A and B markets. © 2005 Taylor & Francis Group Ltd.

The aim of this paper is to examine the short-term dynamics of foreign exchange rate spreads. Using a vector autoregressive (VAR) model we show that most of the variation in the spread comes from the long run dependencies between past and future spreads rather than being caused by changes in inventory, adverse selection, cost of carry or order processing costs. We apply the Iterated Cumulative Sum of Squares (ICSS) algorithm of Inclan and Tiao (1994) to discover how often spread volatility changes. We find that spread volatility shifts are relatively uncommon, and shifts in one currency spread tend not to spill over to other currency spreads. © 2013.

Study of Recent abyssal benthic foraminifera from core-top samples in the eastern equatorial Indian Ocean has identified distinctive faunas whose distribution patterns reflect the major hydrographic features of the region. Above 3800 m, Indian Deep Water (IDW) is characterized by a diverse and evenly-distributed biofacies to which Globocassidulina subglobosa, Pyrgo spp., Uvigerina peregrina, and Eggerella bradyi are the major contributors. Nuttalides umbonifera and Epistominella exigua are associated with Indian Bottom Water (IBW) below 3800 m. Within the IBW fauna, N. umbonifera and E. exigua are characteristic of two biofacies with independent distribution patterns. Nuttalides umbonifera systematically increases in abundance with increasing water depth. The E. exigua biofacies reaches its greatest abundance in sediments on the eastern flank of the Ninetyeast Ridge and in the Wharton-Cocos Basin. The hydrographic transition between IDW and IBW coincides with the level of transition from waters supersaturated to waters undersaturated with respect to calcite and with the depth of the lysocline. Carbonate saturation levels, possibly combined with the effects of selective dissolution on the benthic foraminiferal populations, best explain the change in faunas across the IDW/IBW boundary and the bathymetric distribution pattern of N. umbonifera. The distribution of the E. exigua fauna cannot be explained with this model. Epistominella exigua is associated with the colder, more oxygenated IBW of the Wharton-Cocos Basin. The distribution of this biofacies on the eastern flank of the Ninetyeast Ridge agrees well with the calculated bathymetric position of the northward flowing deep boundary current which aerates the eastern basins of the Indian Ocean.

The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. 
Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
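The cumulative-sum procedure referred to here is, in its classical Gaussian form, Page's test on log-likelihood-ratio increments; a minimal deterministic sketch (the means, variance, and threshold are illustrative assumptions):

```python
def cusum_llr(observations, mu0, mu1, sigma, threshold):
    """Page's cumulative-sum procedure: accumulate the log-likelihood ratio of
    the post-change vs pre-change Gaussian densities, reset at zero, and stop
    when the statistic crosses `threshold`.
    Returns (stopping index or -1, statistic path)."""
    s, path = 0.0, []
    for i, x in enumerate(observations):
        # Gaussian LLR increment: log f1(x) - log f0(x)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        path.append(s)
        if s >= threshold:
            return i, path
    return -1, path

# Deterministic illustration: the mean shifts from 0 to 1 at index 5.
obs = [0.0] * 5 + [1.0] * 10
stop, path = cusum_llr(obs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=3.0)  # stop == 10
```

The threshold plays the role of the parameter tuned in the epsilon-optimal parameterization: raising it lengthens the detection delay but lowers the false-alarm rate.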

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation toward high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. 
We derive sharp conditions on the »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
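As an illustration of the classical building block this abstract extends, here is a minimal univariate sketch of the unweighted CUSUM statistic for a single change in the mean, assuming i.i.d. noise; the Darling-Erdős weighting and the Hilbert-space and panel extensions discussed above are beyond this snippet, and the function name and demo values are hypothetical:

```python
import numpy as np

def cusum_change_test(x):
    """Classical CUSUM statistic for testing a change in the mean.
    Returns the normalized sup-statistic and the argmax of the
    CUSUM process as a change-point estimate."""
    n = len(x)
    s = np.cumsum(x)
    k = np.arange(1, n + 1)
    c = np.abs(s - k * s[-1] / n)   # |S_k - (k/n) S_n|
    sigma = x.std(ddof=1)           # crude scale estimate (i.i.d. assumption;
                                    # under dependence a long-run variance
                                    # estimator would be needed instead)
    stat = c.max() / (sigma * np.sqrt(n))
    khat = int(c.argmax()) + 1      # estimated change location
    return stat, khat

# Hypothetical demo: mean jumps from 0 to 2 at index 150.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])
stat, khat = cusum_change_test(x)
# Under the no-change null, stat is approximately the supremum of a
# Brownian bridge; values above roughly 1.36 are significant at the 5% level.
```

The Brownian-bridge limit of this statistic under the null is exactly the kind of approximation the abstract generalizes to Hilbert-space-valued data.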

Relevância:

20.00%

Publicador:

Resumo:

Principal Topic Although corporate entrepreneurship (CE) is of vital importance for long-term firm survival and growth (Zahra and Covin, 1995), researchers still struggle with understanding how to manage corporate entrepreneurship activities. Corporate entrepreneurship consists of three parts: innovation, venturing, and renewal processes (Guth and Ginsberg, 1990). Innovation refers to the development of new products, venturing to the creation of new businesses, and renewal to redefining existing businesses (Sharma and Chrisman, 1999; Verbeke et al., 2007). Although there are many studies focusing on one of these aspects (cf. Burgelman, 1985; Huff et al., 1992), it is very difficult to compare the outcomes of these studies due to differences in contexts, measures, and methodologies. This is a significant gap in our understanding of CE: since firms engage in all three aspects of CE, it is important to compare the managerial and organizational antecedents of innovation, venturing and renewal processes, because factors that enhance venturing activities may simultaneously inhibit renewal activities. The limited studies that did empirically compare the individual dimensions (cf. Zahra, 1996; Zahra et al., 2000; Yiu and Lau, 2008; Yiu et al., 2007) generally failed to provide a systematic explanation for the potentially different effects of organizational antecedents on innovation, venturing, and renewal. With this study we aim to investigate the different effects of structural separation and social capital on corporate entrepreneurship activities. Access to existing knowledge and the development of new knowledge have been deemed of critical importance in CE activities (Floyd and Wooldridge, 1999; Covin and Miles, 2007; Katila and Ahuja, 2002). Developing new knowledge can be facilitated by structurally separating corporate entrepreneurial units from mainstream units (cf. Burgelman, 1983; Hill and Rothaermel, 2003; O'Reilly and Tushman, 2004). 
Existing knowledge and resources are available through networks of social relationships, defined as social capital (Nahapiet and Ghoshal, 1998; Yiu and Lau, 2008). Although social capital has primarily been studied at the organizational level, it might be equally important at the top management level (Belliveau et al., 1996). However, little is known about the joint effects on corporate entrepreneurship of structural separation and of integrative mechanisms that provide access to social capital. Could these integrative mechanisms, for example, connect the separated units to facilitate both knowledge creation and sharing? Do these effects differ for innovation, venturing, and renewal processes? Are the effects different for organizational versus top management team integration mechanisms? Corporate entrepreneurship activities have, for example, been suggested to take place at different levels. Whereas innovation is suggested to be a more bottom-up process, strategic renewal is a more top-down process (Floyd and Lane, 2000; Volberda et al., 2001). Corporate venturing is also a more bottom-up process, but due to the greater resource commitments required relative to innovation, ventures need to be approved by top management (Burgelman, 1983). As such, we explore the following key research question in this paper: How do social capital and structural separation at the organizational and TMT level differentially influence innovation, venturing, and renewal processes? Methodology/Key Propositions We tested our hypotheses on a final sample of 240 companies in a variety of industries in the Netherlands. All our measures were validated in previous studies. We targeted a second respondent in each firm to reduce problems with single-rater data (James et al., 1984). We separated the measurement of the independent and dependent variables into two surveys to create a one-year time lag and reduce potential common method bias (Podsakoff et al., 2003). 
Results and Implications Consistent with our hypotheses, our results show that configurations of structural separation and integrative mechanisms have different effects on the three aspects of corporate entrepreneurship. Innovation was affected by organizational-level mechanisms, renewal by integrative mechanisms at the top management team level, and venturing by mechanisms at both levels. Surprisingly, our results indicated that integrative mechanisms at the top management team level had negative effects on corporate entrepreneurship activities. We believe this paper makes two significant contributions. First, we provide more insight into the effects of ambidextrous organizational forms (i.e. combinations of differentiation and integration mechanisms) on venturing, innovation and renewal processes. Our findings show that more valuable insights can be gained by comparing the individual parts of corporate entrepreneurship than by focusing on the whole. Second, we deliver insights into how management can create a facilitative organizational context for these corporate entrepreneurship activities.