895 results for Tilted-time window model


Relevance: 100.00%

Abstract:

In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems that are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases well-known difficult problems such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications.

The NBP is the problem of designing the optimal network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and to dimension their capacities so that all demands can be routed between their respective endpoints and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must route each demand along a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, as well as four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems.

In the PVRP a fleet of vehicles of identical capacity must service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product to be delivered at every visit. For example, a customer may require two visits during a 5-day period, imposing that these visits take place on Monday-Thursday, Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of available vehicles, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require a visit on a specific day of the period but a penalty cost, called service cost, can be paid to postpone the visit to a later day. To our knowledge, all algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present the first exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances.

Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot. Each request specifies a pickup location and a delivery location and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle never exceeds its capacity, and all locations are visited within their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts and propose an exact algorithm based on different relaxations of the mathematical formulation and a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
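Both the PVRP and the PDPTW algorithms above rely on set partitioning-like formulations over routes. As a hedged illustration only (the additional cuts and the specific relaxations developed in the thesis are not reproduced here), a generic route-based set-partitioning model for the PDPTW can be written as follows, where R is the set of feasible routes, c_r the cost of route r, a_{ir} = 1 if route r serves request i, \mathcal{I} the set of requests, m the number of available vehicles, and x_r = 1 if route r is selected:

    \begin{align*}
      \min \quad & \sum_{r \in R} c_r\, x_r \\
      \text{s.t.} \quad & \sum_{r \in R} a_{ir}\, x_r = 1, \qquad \forall\, i \in \mathcal{I}
          && \text{(each request is served exactly once)} \\
      & \sum_{r \in R} x_r \le m && \text{(at most $m$ vehicles are used)} \\
      & x_r \in \{0,1\}, \qquad \forall\, r \in R
    \end{align*}

Since R is exponentially large, branch-and-cut-and-price methods of the kind mentioned in the abstract generate routes by column generation and tighten the relaxation with cutting planes.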

Relevance: 100.00%

Abstract:

Verification assesses the quality of quantitative precipitation forecasts (QPF) against observations and provides indications of systematic model errors. With the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterization, and the question arises whether these models deliver better forecasts. The high-resolution hourly observational data set used in this work is a combination of radar and station measurements. On the one hand, using the German COSMO models as an example, it is shown that the latest-generation models simulate the mean diurnal cycle better, albeit with a maximum that is too weak and occurs somewhat too late; the old-generation models, in contrast, produce a maximum that is too strong and occurs considerably too early. On the other hand, the new type of model achieves a better simulation of the spatial distribution of precipitation by markedly reducing the windward/lee problem. To quantify these subjective assessments, daily QPFs of four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but for the other components there is hardly any difference. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components shows that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly because of a more realistic structure, so that SAL provides helpful information and confirms the subjective assessment.

In 2007 the COPS and MAP D-PHASE projects took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in south-west Germany for accumulation periods of 6 and 12 hours. The main results are that (i) the smaller the grid spacing of a model, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences: in high-pressure situations, the models without convection parameterization are unable to simulate these events, whereas the models with convection parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models providing more realistic fields. This weather-regime-based investigation is carried out more systematically using the convective time scale; a climatology compiled for Germany for the first time shows that the frequency of this time scale decreases towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective time scale they are good, whereas for large values both structure and amplitude are clearly overestimated.

For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. These errors can be determined by optimizing/minimizing the L component of SAL within a time window (+/- 3 h) centred on the observation time. It is shown that, at the optimal time shift, the structure and amplitude of the COSMO-DE QPFs improve, which demonstrates more clearly the model's fundamental ability to simulate the precipitation distribution realistically.
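For concreteness, a minimal sketch of the amplitude (A) component, the first part of the location (L) component and the time-shift search described above is given below. It assumes observed and simulated precipitation fields on a common 2-D grid and a dictionary mapping time shifts (in hours) to forecast fields; the object-based structure (S) component and the details of the actual implementation are omitted.

    import numpy as np

    def amplitude_component(model, obs):
        """SAL A component: normalised difference of the domain-mean precipitation."""
        d_mod, d_obs = model.mean(), obs.mean()
        return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

    def location_l1(model, obs):
        """First part of the SAL L component: distance between the centres of mass
        of the two fields, normalised by the largest distance within the domain."""
        def centre_of_mass(field):
            y, x = np.indices(field.shape)
            return np.array([(y * field).sum(), (x * field).sum()]) / field.sum()
        d_max = np.hypot(*model.shape)
        return np.linalg.norm(centre_of_mass(model) - centre_of_mass(obs)) / d_max

    def best_time_shift(forecasts_by_shift, obs, max_shift=3):
        """Pick the forecast within +/- max_shift hours of the observation time that
        minimises the location error (the optimisation of L mentioned above)."""
        return min(range(-max_shift, max_shift + 1),
                   key=lambda s: location_l1(forecasts_by_shift[s], obs))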

Relevance: 100.00%

Abstract:

The purpose of my PhD thesis has been to address the problem of retrieving a three-dimensional attenuation model in volcanic areas. To this purpose, I first developed a robust strategy for the analysis of seismic data by performing several synthetic tests to assess the applicability of the spectral ratio method to our purposes. The results of the tests allowed us to conclude that: 1) the spectral ratio method gives reliable differential attenuation (dt*) measurements in smooth velocity models; 2) a short signal time window has to be chosen to perform the spectral analysis; and 3) the frequency range over which the spectral ratios are computed greatly affects the dt* measurements. Furthermore, a refined approach for the application of the spectral ratio method has been developed and tested; through this procedure, the effects of heterogeneities of the propagation medium on the seismic signals can be removed. The tested data analysis technique was applied to the real active-seismic SERAPIS database, providing a data set of dt* measurements which was used to obtain a three-dimensional attenuation model of the shallowest part of the Campi Flegrei caldera. A linearized, iterative, damped attenuation tomography technique was then tested and applied to the selected data set. The tomography, with a resolution of 0.5 km in the horizontal directions and 0.25 km in the vertical direction, allowed us to image important features in the offshore part of the Campi Flegrei caldera. High-Qp bodies are immersed in a high-attenuation body (Qp = 30), which correlates well with low Vp and high Vp/Vs values and is interpreted as a layer of saturated marine and volcanic sediments. The high-Qp anomalies, instead, are interpreted as the effect either of cooled lava bodies or of a CO2 reservoir. A pseudo-circular high-Qp anomaly was detected and interpreted as the buried rim of the NYT caldera.
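As a rough illustration of the spectral ratio measurement underlying these results (not the refined procedure of the thesis), the differential attenuation between two recordings can be estimated from the slope of the log spectral ratio, since ln(A1(f)/A2(f)) is approximately const - pi * f * dt*. The frequency band below is a placeholder; as noted above, its choice strongly affects the measurement.

    import numpy as np

    def differential_tstar(spec_a, spec_b, freqs, fmin=5.0, fmax=25.0):
        """Estimate dt* = t*_a - t*_b from the slope of the log spectral ratio,
        ln(A_a / A_b) = const - pi * f * dt*, fitted over [fmin, fmax] Hz.
        fmin/fmax are placeholder values, not those used in the thesis."""
        band = (freqs >= fmin) & (freqs <= fmax)
        slope, _ = np.polyfit(freqs[band], np.log(spec_a[band] / spec_b[band]), 1)
        return -slope / np.pi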

Relevance: 100.00%

Abstract:

Alpine snowbeds are habitats where the major limiting factors for plant growth are herbivory and a small time window for growth due to late snowmelt. Despite these limitations, snowbed vegetation usually forms a dense carpet of palatable plants, owing to abiotic conditions that favour plant growth within the short growing season. These environmental characteristics make snowbeds particularly interesting for studying the interplay of facilitation and competition. We hypothesised an interplay between resource competition and facilitation against herbivory. Further, we investigated whether these predicted neighbour effects were species-specific and/or dependent on ontogeny, and whether the balance of positive and negative plant–plant interactions shifted along a snowmelt gradient. We determined the neighbour effects by means of neighbour removal experiments along the snowmelt gradient and linear mixed model analyses. The results showed that the effects of neighbour removal were weak but generally consistent among species and snowmelt dates, and depended on whether biomass production or survival was considered. Higher total biomass and increased fruiting in removal plots indicated that plants competed for nutrients, water, and light, supporting the hypothesis of prevailing competition for resources in snowbeds. However, the presence of neighbours reduced herbivory and thereby facilitated survival. For plant growth, the facilitative effects against herbivores counterbalanced competition for resources, leading to a weak negative net effect. Overall, the neighbour effects were not species-specific and did not change with snowmelt date. Our finding of counterbalancing effects of competition and facilitation within a plant community is of special theoretical value for species distribution models and can explain the success of models that give primary importance to abiotic factors and tend to overlook interrelations between biotic and abiotic effects on plants.
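The abstract does not give the exact model specification; purely as an illustration of the kind of linear mixed model analysis mentioned, neighbour-removal and snowmelt effects could be fitted with a random intercept per plot roughly as follows (the file name and all column names are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per target plant, with its biomass, a removal flag,
    # the snowmelt date of its site, and identifiers for species and plot.
    df = pd.read_csv("snowbed_experiment.csv")

    # Fixed effects: removal treatment, snowmelt date and their interaction;
    # random intercept for plot (species could be added as a further grouping factor).
    model = smf.mixedlm("biomass ~ removal * snowmelt_date", data=df,
                        groups=df["plot"]).fit()
    print(model.summary())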

Relevance: 100.00%

Abstract:

The original cefepime product was withdrawn from the Swiss market in January 2007 and replaced by a generic 10 months later. The goals of the study were to assess the impact of this cefepime shortage on the use and costs of alternative broad-spectrum antibiotics, on antibiotic policy, and on resistance of Pseudomonas aeruginosa toward carbapenems, ceftazidime, and piperacillin-tazobactam. A generalized regression-based interrupted time series model assessed how much the shortage changed the monthly use and costs of cefepime and of selected alternative broad-spectrum antibiotics (ceftazidime, imipenem-cilastatin, meropenem, piperacillin-tazobactam) in 15 Swiss acute care hospitals from January 2005 to December 2008. Resistance of P. aeruginosa was compared before and after the cefepime shortage. There was a statistically significant increase in the consumption of piperacillin-tazobactam in hospitals with definitive interruption of cefepime supply and of meropenem in hospitals with transient interruption of cefepime supply. Consumption of each alternative antibiotic tended to increase during the cefepime shortage and to decrease when the cefepime generic was released. These shifts were associated with significantly higher overall costs. There was no significant change in hospitals with uninterrupted cefepime supply. The alternative antibiotics for which an increase in consumption showed the strongest association with a progression of resistance were the carbapenems. The use of alternative antibiotics after cefepime withdrawal was associated with a significant increase in piperacillin-tazobactam and meropenem use and in overall costs and with a decrease in susceptibility of P. aeruginosa in hospitals. This warrants caution with regard to shortages and withdrawals of antibiotics.
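The abstract summarizes, rather than specifies, the regression model; as a hedged sketch of a regression-based interrupted time series (segmented regression) analysis of monthly antibiotic consumption, with entirely simulated numbers:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated monthly consumption (e.g. DDDs per 100 bed-days) over 48 months,
    # with a level shift after a supply interruption in month 25 (hypothetical data).
    rng = np.random.default_rng(0)
    months = np.arange(48)
    shortage = (months >= 24).astype(int)
    use = 10 + 0.05 * months + 4 * shortage + rng.normal(0, 1, 48)

    df = pd.DataFrame({
        "time": months,                           # underlying secular trend
        "shortage": shortage,                     # 1 after the supply interruption
        "time_after": shortage * (months - 24),   # change in trend after interruption
        "use": use,
    })

    # Segmented (interrupted time series) regression: immediate level and slope changes.
    model = smf.ols("use ~ time + shortage + time_after", data=df).fit()
    print(model.params)  # 'shortage' = level change, 'time_after' = trend change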

Relevance: 100.00%

Abstract:

Brain activity relies on transient, fluctuating interactions between segregated neuronal populations. Synchronization within a single neuronal cluster and between distributed clusters reflects the dynamics of these cooperative patterns. Absence epilepsy can thus be used as a model for an integrated, large-scale investigation of the emergence of pathological collective dynamics in the brain. Indeed, the spike-wave discharges (SWD) of an absence seizure are thought to reflect abnormal cortical hypersynchronization. In this paper, we address two questions: how and where do SWD arise in the human brain? To this end, we explored the spatio-temporal dynamics of interactions within and between widely distributed cortical sites using magnetoencephalographic recordings of spontaneous absence seizures. From their time-frequency analysis, we then extracted the local synchronization of cortical sources and the long-range synchronization linking distant sites. Our analyses revealed a reproducible sequence of 1) long-range desynchronization, 2) increased local synchronization and 3) increased long-range synchronization. Although local and long-range synchronization displayed different spatio-temporal profiles, their cortical projections within an initiation time window overlap and reveal a multifocal fronto-central network. These observations contradict the classical view of sudden generalized synchronous activities in absence epilepsy. Furthermore, they suggest that brain-state transitions may rely on multi-scale processes involving both local and distant interactions.
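The abstract does not name the specific synchronization index used; as a generic illustration of how long-range synchronization between two band-passed cortical source signals is often quantified, a phase-locking value can be computed as follows. Applying such a measure in successive time windows yields the kind of temporal sequence described above.

    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        """Phase-locking value between two narrow-band signals:
        1 = perfectly constant phase difference, 0 = no consistent phase relation."""
        phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * phase_diff)))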

Relevance: 100.00%

Abstract:

Rationale: Focal onset epileptic seizures are due to abnormal interactions between distributed brain areas. By estimating the cross-correlation matrix of multi-site intra-cerebral EEG recordings (iEEG), one can quantify these interactions. To assess the topology of the underlying functional network, the binary connectivity matrix has to be derived from the cross-correlation matrix by use of a threshold. Classically, a unique threshold is used, which constrains the topology [1]. Our method aims to set the threshold in a data-driven way by separating genuine from random cross-correlation. We compare our approach to the fixed threshold method and study the dynamics of the functional topology.

Methods: We investigate the iEEG of patients suffering from focal onset seizures who underwent evaluation for the possibility of surgery. The equal-time cross-correlation matrices are evaluated using a sliding time window. We then compare three approaches for assessing the corresponding binary networks. For each time window:
* Our parameter-free method derives from the cross-correlation strength matrix (CCS) [2]. It aims at disentangling genuine from random correlations (due to the finite length and varying frequency content of the signals). In practice, a threshold is evaluated for each pair of channels independently, in a data-driven way.
* The fixed mean degree (FMD) approach uses a unique threshold on the whole connectivity matrix so as to ensure a user-defined mean degree.
* The varying mean degree (VMD) approach uses the mean degree of the CCS network to set a unique threshold for the entire connectivity matrix.
* Finally, the connectivity (c), the connectedness (given by k, the number of disconnected sub-networks), and the mean global and local efficiencies (Eg and El, respectively) are computed from the FMD, CCS and VMD networks and from their corresponding random and lattice networks.

Results: Compared to FMD and VMD, CCS networks present:
* topologies that differ in terms of c, k, Eg and El;
* from the pre-ictal to the ictal and then post-ictal period, time courses of the topological features that are more stable within a period and more contrasted from one period to the next.
For CCS, pre-ictal connectivity is low, increases to a high level during the seizure, then decreases at offset. k shows a "U-curve" underlining the synchronization of all electrodes during the seizure. The Eg and El time courses fluctuate between the values of the corresponding random and lattice networks in a reproducible manner.

Conclusions: The definition of a data-driven threshold provides new insights into the topology of the epileptic functional networks.
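To make the thresholding step concrete, here is a minimal sketch of the fixed-mean-degree (FMD) variant for one sliding window, assuming a channels-by-samples array; the data-driven per-pair CCS threshold of [2] is not reproduced here. Graph measures such as the mean degree and the global and local efficiencies can then be computed from the returned adjacency matrix, e.g. with networkx.

    import numpy as np

    def binary_network_fmd(window, target_mean_degree=5):
        """Equal-time cross-correlation of a (channels x samples) window, thresholded
        with a single cut-off chosen so that the binary network has (approximately)
        the requested mean degree."""
        corr = np.abs(np.corrcoef(window))
        np.fill_diagonal(corr, 0.0)
        n = corr.shape[0]
        n_edges = int(round(target_mean_degree * n / 2))   # mean degree = 2E / N
        upper = np.sort(corr[np.triu_indices(n, k=1)])
        threshold = upper[-n_edges]                        # keep the strongest edges
        adjacency = (corr >= threshold).astype(int)
        np.fill_diagonal(adjacency, 0)
        return adjacency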

Relevance: 100.00%

Abstract:

BACKGROUND AND PURPOSE An inverse relationship between onset-to-door time (ODT) and door-to-needle time (DNT) in stroke thrombolysis has been reported from various registries. We analyzed this relationship and other determinants of DNT in dedicated stroke centers. METHODS Prospectively collected data of consecutive ischemic stroke patients from 10 centers who received IV thrombolysis within 4.5 hours from symptom onset were merged (n=7106). DNT was analyzed as a function of demographic and prehospital variables using regression analyses, and change over time was considered. RESULTS In 6348 eligible patients with known treatment delays, median DNT was 42 minutes and kept decreasing steeply every year (P<0.001). A median DNT of 55 minutes was observed in patients with ODT ≤30 minutes, whereas it declined for patients presenting within the last 30 minutes of the 3-hour time window (median, 33 minutes) and of the 4.5-hour time window (20 minutes). For ODT within the first 30 minutes of the extended time window (181-210 minutes), DNT increased to 42 minutes. DNT was stable for ODT of 30 to 150 minutes (40-45 minutes). We found a weak inverse overall correlation between ODT and DNT (R²=-0.12; P<0.001), but it was strong in patients treated between 3 and 4.5 hours (R²=-0.75; P<0.001). ODT was independently inversely associated with DNT (P<0.001) in regression analysis. Octogenarians and women tended to have longer DNT. CONCLUSIONS DNT has been decreasing steeply over the last years in dedicated stroke centers; however, significant oscillations of in-hospital treatment delays occurred at both ends of the time window. This suggests that further improvements can be achieved, particularly in the elderly.

Relevance: 100.00%

Abstract:

International air travel has already spread Ebola virus disease (EVD) to major cities as part of the unprecedented epidemic that started in Guinea in December 2013. An infected airline passenger arrived in Nigeria on July 20, 2014, and caused an outbreak in Lagos and then Port Harcourt. After a total of 20 reported cases, including 8 deaths, Nigeria was declared EVD free on October 20, 2014. We quantified the impact of early control measures in preventing further spread of EVD in Nigeria and calculated the risk that a single undetected case would cause a new outbreak. We fitted an EVD transmission model to data from the outbreak in Nigeria and estimated the reproduction number of the index case at 9.0 (95% confidence interval [CI]: 5.2-15.6). We also found that the net reproduction number fell below unity 15 days (95% CI: 11-21 days) after the arrival of the index case. Hence, our study illustrates the time window for successful containment of EVD outbreaks caused by infected air travelers.
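The fitted transmission model itself is not reproduced here; as a hedged sketch of how the risk that a single undetected case sparks a new outbreak is commonly computed, one can use a branching-process argument with a negative-binomial offspring distribution (the dispersion parameter k below is a placeholder, not an estimate from the study):

    def outbreak_probability(R, k=0.5, tol=1e-12, max_iter=10000):
        """Probability that one undetected case starts a large outbreak, assuming a
        branching process with negative-binomial offspring distribution (mean R,
        dispersion k). The extinction probability q is the smallest solution of
        q = (1 + R*(1 - q)/k)**(-k); the outbreak risk is 1 - q."""
        q = 0.0
        for _ in range(max_iter):
            q_new = (1.0 + R * (1.0 - q) / k) ** (-k)
            if abs(q_new - q) < tol:
                break
            q = q_new
        return 1.0 - q

    # Example: risk posed by a case as transmissible as the estimated index case (R = 9.0).
    print(outbreak_probability(9.0))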

Relevance: 100.00%

Abstract:

Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage to the facial nerve, due either to mechanical interaction or to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of the risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time-dependent heat production function at the tip of the drill bit. Using a time-dependent moving point source Green's function, the heat equation can be solved at a given point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data were collected during minimally invasive robotic drilling of 12 holes in four sheep. The sheep were anesthetized and the temperature elevations were measured with a thermocouple inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative micro-CT (µCT) data were used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both the heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, the temperature elevation profiles and cumulative equivalent minutes suggest that the thermal elevation during this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high-density mastoid bones. Optimized drilling parameters need to be evaluated, and the model could be used for future risk evaluation.
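For illustration, the moving-point-source approach can be sketched as a superposition of instantaneous point-source solutions of the heat equation. The sketch below assumes a drill-tip trajectory and heat production sampled at discrete times; the thermal constants are generic placeholder values for bone, not the calibrated parameters of the paper.

    import numpy as np

    def temperature_rise(point, t, tip_positions, heat_power, times,
                         alpha=5e-7, rho=1800.0, c=1300.0):
        """Temperature rise [K] at 'point' and time t due to a moving point heat source,
        using the free-space Green's function of the heat equation:
        dT = P(tau) dtau / (rho*c*(4*pi*alpha*(t - tau))**1.5)
             * exp(-|r - r_s(tau)|**2 / (4*alpha*(t - tau))).
        tip_positions: (N, 3) drill-tip positions at 'times'; heat_power: (N,) in W."""
        mask = times < t
        if not np.any(mask):
            return 0.0
        tau = times[mask]
        dtau = np.gradient(tau) if tau.size > 1 else np.array([times[1] - times[0]])
        r2 = np.sum((point - tip_positions[mask]) ** 2, axis=1)
        lag = t - tau
        kernel = np.exp(-r2 / (4.0 * alpha * lag)) / (rho * c * (4.0 * np.pi * alpha * lag) ** 1.5)
        return float(np.sum(heat_power[mask] * kernel * dtau))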

Relevance: 100.00%

Abstract:

Linear- and unimodal-based inference models for mean summer temperatures (partial least squares, weighted averaging, and weighted averaging partial least squares models) were applied to a high-resolution pollen and cladoceran stratigraphy from Gerzensee, Switzerland. The time window of investigation included the Allerød, the Younger Dryas, and the Preboreal. Characteristic major and minor oscillations in the oxygen-isotope stratigraphy, such as the Gerzensee oscillation, the onset and end of the Younger Dryas stadial, and the Preboreal oscillation, were identified by isotope analysis of bulk-sediment carbonates of the same core and were used as independent indicators of hemispheric- or global-scale climatic change. In general, the pollen-inferred mean summer temperature reconstruction using all three inference models follows the oxygen-isotope curve more closely than the cladoceran curve does. The cladoceran-inferred reconstruction suggests generally warmer summers than the pollen-based reconstructions, which may be an effect of terrestrial vegetation not being in equilibrium with climate due to migrational lags during the Late Glacial and early Holocene. Allerød summer temperatures range between 11 and 12°C based on pollen, whereas the cladoceran-inferred temperatures lie between 11 and 13°C. Pollen- and cladoceran-inferred reconstructions both suggest a drop to 9–10°C at the beginning of the Younger Dryas. Although the Allerød–Younger Dryas transition lasted 150–160 years in the oxygen-isotope stratigraphy, the pollen-inferred cooling took 180–190 years and the cladoceran-inferred cooling lasted 250–260 years. The pollen-inferred summer temperature rise to 11.5–12°C at the transition from the Younger Dryas to the Preboreal preceded the oxygen-isotope signal by several decades, whereas the cladoceran-inferred warming lagged behind it. Major discrepancies between the pollen- and cladoceran-inference models are observed for the Preboreal, where the cladoceran-inference model suggests mean summer temperatures of up to 14–15°C. Both pollen- and cladoceran-inferred reconstructions suggest a cooling that may be related to the Gerzensee oscillation, but there is no evidence for a cooling synchronous with the Preboreal oscillation as recorded in the oxygen-isotope record. For the Gerzensee oscillation the inferred cooling was ca. 1 and 0.5°C based on pollen and cladocera, respectively, which lies well within the inherent prediction errors of the inference models.
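As a simplified illustration of the weighted-averaging inference approach mentioned above (ignoring the partial-least-squares components and deshrinking steps of the actual models), taxon temperature optima are estimated from a modern training set and then applied to the fossil assemblages:

    import numpy as np

    def wa_optima(train_abundances, train_temperatures):
        """Taxon temperature optima as abundance-weighted means over the training sites.
        train_abundances: (sites x taxa) array; train_temperatures: (sites,) array."""
        return (train_abundances.T @ train_temperatures) / train_abundances.sum(axis=0)

    def wa_reconstruct(fossil_abundances, optima):
        """Weighted-averaging reconstruction for each fossil sample (deshrinking omitted)."""
        return (fossil_abundances @ optima) / fossil_abundances.sum(axis=1)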

Relevance: 100.00%

Abstract:

Intravital imaging has revealed that T cells change their migratory behavior during physiological activation inside lymphoid tissue. Yet, it remains less well investigated how the intrinsic migratory capacity of activated T cells is regulated by chemokine receptor levels or other regulatory elements. Here, we used an adjuvant-driven inflammation model to examine how motility patterns corresponded with CCR7, CXCR4, and CXCR5 expression levels on ovalbumin-specific DO11.10 CD4(+) T cells in draining lymph nodes. We found that while CCR7 and CXCR4 surface levels remained essentially unaltered during the first 48-72 h after activation of CD4(+) T cells, their in vitro chemokinetic and directed migratory capacity to the respective ligands, CCL19, CCL21, and CXCL12, was substantially reduced during this time window. Activated T cells recovered from this temporary decrease in motility on day 6 post immunization, coinciding with increased migration to the CXCR5 ligand CXCL13. The transiently impaired CD4(+) T cell motility pattern correlated with increased LFA-1 expression and augmented phosphorylation of the microtubule regulator Stathmin on day 3 post immunization, yet neither microtubule destabilization nor integrin blocking could reverse TCR-imprinted unresponsiveness. Furthermore, protein kinase C (PKC) inhibition did not restore chemotactic activity, ruling out PKC-mediated receptor desensitization as mechanism for reduced migration in activated T cells. Thus, we identify a cell-intrinsic, chemokine receptor level-uncoupled decrease in motility in CD4(+) T cells shortly after activation, coinciding with clonal expansion. The transiently reduced ability to react to chemokinetic and chemotactic stimuli may contribute to the sequestering of activated CD4(+) T cells in reactive peripheral lymph nodes, allowing for integration of costimulatory signals required for full activation.

Relevance: 100.00%

Abstract:

We present and examine a multi-sensor global compilation of mid-Holocene (MH) sea surface temperatures (SST), based on Mg/Ca and alkenone palaeothermometry and on reconstructions obtained using planktonic foraminifera and organic-walled dinoflagellate cyst census counts. We assess the uncertainties originating from the use of different methodologies and evaluate the potential of MH SST reconstructions as a benchmark for climate-model simulations. The comparison between different analytical approaches (time frame, baseline climate) shows that the choice of time window for the MH has a negligible effect on the reconstructed SST pattern, whereas the choice of baseline climate affects both the magnitude and the spatial pattern of the reconstructed SSTs. Comparison of the SST reconstructions made using different sensors shows significant discrepancies at a regional scale, with uncertainties often exceeding the reconstructed SST anomaly. Apparent patterns in SST may largely be a reflection of the use of different sensors in different regions. Overall, the uncertainties associated with the SST reconstructions are generally larger than the MH anomalies. Thus, the SST data currently available cannot serve as a target for benchmarking model simulations.

Relevance: 100.00%

Abstract:

ATM, SDH or satellite links were used in the last century as the contribution networks of broadcasters. However, the attractive price of IP networks has been changing the infrastructure of these networks over the last decade. Nowadays, IP networks are widely used, but their characteristics do not offer the level of performance required to carry high-quality video under certain circumstances. Data transmission is always subject to errors on the line. In the case of streaming, correction is attempted at the destination, while for file transfer, retransmissions are performed and a reliable copy of the file is obtained. In the latter case, reception time is penalized because of the low priority this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed and line errors result in a decrease of quality at the destination, in a file copy the difference between coding speed and line speed, together with transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and on the type of line available; in many cases, it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is to optimize the workflow and maximize image quality. For that reason, a transmission model for multimedia files adapted to JPEG2000 is described, based on combining the advantages of file transfer and of streaming while avoiding their disadvantages. The method is based on two patents and consists of the reliable transfer of the headers and of the data considered vital for reproduction; the rest of the data is sent by streaming, allowing recovery operations and error concealment. Using this model, image quality is maximized for the given time window. In this paper, we first give a brief overview of broadcasters' requirements and of the solutions offered by IP networks. We then focus on a different solution for video file transfer. We take the example of a broadcast center with mobile units (unidirectional video link) and regional headends (bidirectional link), and we also present a video file transfer method that satisfies the broadcasters' requirements.
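Purely as an illustration of the idea of sending the vital part of a JPEG2000 codestream reliably and streaming the rest (this is not the patented method itself), the main header can be separated from the tile-part data at the first SOT marker (0xFF90). JPEG2000's scalable codestream makes it possible to decode a degraded image from the packets that do arrive, which is what makes the streamed portion amenable to error concealment.

    def split_j2k_codestream(codestream: bytes):
        """Split a raw JPEG2000 codestream into the main header (everything before the
        first tile-part marker SOT = 0xFF90) and the remaining tile-part data. In the
        transmission model sketched here, the header would be sent over a reliable
        channel, while the tile-parts could be streamed with error concealment.
        Note: a real implementation would walk the marker segments instead of doing a
        raw byte search, since 0xFF90 can in principle occur inside segment payloads."""
        sot = codestream.find(b"\xff\x90")
        if sot < 0:
            raise ValueError("no tile-part (SOT) marker found")
        return codestream[:sot], codestream[sot:]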

Relevance: 100.00%

Abstract:

Advances in hardware make huge volumes of data available, and applications are emerging that must deliver information in near-real time, e.g. patient monitoring or the health monitoring of water pipes. The needs of these applications favour the data streaming model over the traditional store-then-process model. Whereas in the store-then-process model data are stored and queried later, in streaming systems data are processed on arrival, producing continuous responses without ever being stored in full. This view imposes challenges for processing data on the fly: 1) responses must be produced continuously whenever new data arrive in the system; 2) data are accessed only once and, in general, are not kept in their entirety; and 3) the per-item processing time needed to produce a response must be low. Two models exist for computing continuous responses, the evolving model and the sliding-window model; the latter fits certain applications better because it considers only the most recently received data rather than the entire data history. In recent years, stream data mining has focused mainly on the evolving model; for the sliding-window model much less work has been presented, since these algorithms must not only be incremental but must also delete the information that expires as the window slides, while still meeting the three challenges above. One of the fundamental tasks in data mining is clustering: given a data set, the goal is to find representative groups that provide a concise description of the data. Clustering is critical in applications such as network intrusion detection or customer segmentation in marketing and advertising. Because of the massive volumes of data that must be processed in such applications (up to millions of events per second), centralized solutions may be unable to meet the processing-time constraints and have to resort to discarding data during load peaks. To avoid this loss of data, stream processing must be distributed; in particular, clustering algorithms must be adapted to environments in which the data are distributed. In streaming, research focuses not only on designs for general tasks such as clustering, but also on new approaches that fit particular scenarios better. For example, an ad-hoc grouping mechanism turns out to be more appropriate for defence against Distributed Denial of Service (DDoS) attacks than the traditional k-means problem. This thesis contributes to clustering over data streams in both centralized and distributed environments. We have designed a centralized clustering algorithm and, in an extensive evaluation, shown its ability to discover high-quality clusters in low time compared with state-of-the-art solutions. In addition, we have worked on a data structure that significantly reduces the required memory while keeping the error of the cluster statistics under control at all times.

Our work also provides two protocols for distributing the clustering computation. Two key features have been analysed: the impact on clustering quality when the computation is distributed, and the conditions required for reducing the processing time compared with the centralized solution. Finally, we have developed a clustering-based framework for the detection of DDoS attacks; we have characterized the types of attacks detected and evaluated the efficiency and effectiveness of mitigating the attack impact.
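As a toy illustration of the sliding-window setting described above (not the algorithm developed in the thesis, which summarizes the window instead of storing raw points), a clustering structure must expire the elements that fall out of the window while updating its clusters incrementally:

    from collections import deque
    import math

    class SlidingWindowClusters:
        """Toy sliding-window clustering: raw points are kept only while their timestamps
        lie inside the window, and a few k-means-style refinement steps are run on every
        insertion. Real streaming algorithms keep compact summaries (e.g. micro-clusters)
        instead of raw points; this sketch only illustrates the expiry logic."""

        def __init__(self, k, window_length):
            self.k = k
            self.window_length = window_length
            self.window = deque()          # (timestamp, point) pairs inside the window
            self.centroids = []

        def insert(self, timestamp, point):
            self.window.append((timestamp, tuple(point)))
            # Expire the points that have slid out of the window.
            while self.window and self.window[0][0] <= timestamp - self.window_length:
                self.window.popleft()
            self._refine()

        def _refine(self, iterations=3):
            points = [p for _, p in self.window]
            if len(points) <= self.k:
                self.centroids = list(points)
                return
            if len(self.centroids) != self.k:
                self.centroids = points[:self.k]
            for _ in range(iterations):
                groups = [[] for _ in range(self.k)]
                for p in points:
                    nearest = min(range(self.k),
                                  key=lambda i: math.dist(p, self.centroids[i]))
                    groups[nearest].append(p)
                self.centroids = [
                    tuple(sum(coord) / len(g) for coord in zip(*g)) if g else self.centroids[i]
                    for i, g in enumerate(groups)
                ]

Insertion and expiry touch only the points currently inside the window, which is the property that allows continuous responses as the stream advances.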