965 results for Window gardening.
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in flexible congestion control mechanisms. Window-based congestion control schemes use increase rules to probe available bandwidth and decrease rules to back off when congestion is detected. These control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and packet loss rate. In this paper, we propose a novel window-based congestion control algorithm called SIMD (Square-Increase/Multiplicative-Decrease). In contrast to previous memoryless controls, SIMD utilizes history information in its control rules. It uses multiplicative decrease, but its window-size increase is proportional to the square of the time elapsed since the detection of the last loss event. Thus, SIMD can efficiently probe available bandwidth. Nevertheless, SIMD is TCP-friendly as well as TCP-compatible under RED, and it has much better convergence behavior than the recently proposed TCP-friendly AIMD and binomial algorithms.
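The increase/decrease rules described above can be sketched in a few lines. In the sketch below, `alpha` and `beta` are illustrative placeholder parameters, not the paper's tuned, TCP-friendly values, and the per-RTT evolution is a simplification of the per-ACK rule.

```python
# Illustrative sketch of SIMD-style window dynamics: after a loss the window
# drops multiplicatively, then grows from that floor in proportion to the
# square of the time elapsed since the loss was detected.
def simd_window(w0, alpha=0.1, beta=0.5, rtts=10):
    """Window size per RTT after a loss at window w0; alpha and beta are
    hypothetical parameters, not SIMD's TCP-friendly parameterization."""
    floor = w0 * (1 - beta)                                   # multiplicative decrease
    return [floor + alpha * t ** 2 for t in range(rtts + 1)]  # square increase

trace = simd_window(w0=40)
increments = [b - a for a, b in zip(trace, trace[1:])]
# Probing accelerates: each RTT's increment is larger than the last.
```

Because the increment grows with the time since the last loss, such a rule probes spare bandwidth faster than a fixed-increment AIMD rule while still backing off multiplicatively.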
Abstract:
The problem of discovering frequent poly-regions (i.e. regions of high occurrence of a set of items or patterns of a given alphabet) in a sequence is studied, and three efficient approaches are proposed to solve it. The first one is entropy-based and applies a recursive segmentation technique that produces a set of candidate segments which may potentially lead to a poly-region. The key idea of the second approach is the use of a set of sliding windows over the sequence. Each sliding window covers a sequence segment and keeps a set of statistics that mainly include the number of occurrences of each item or pattern in that segment. Combining these statistics efficiently yields the complete set of poly-regions in the given sequence. The third approach applies a technique based on the majority vote, achieving linear running time with a minimal number of false negatives. After identifying the poly-regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a poly-region). An efficient algorithm for mining frequent arrangements of intervals is applied to the converted sequence to discover frequently occurring arrangements of poly-regions in different parts of DNA, including coding regions. The proposed algorithms are tested on various DNA sequences producing results of significant biological meaning.
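As a concrete illustration of the second, sliding-window approach, the sketch below keeps per-item counts for the current window, updates them in O(1) per slide, and flags windows dominated by a single item. The window length and frequency threshold are arbitrary illustrative choices, not the statistics or parameters used in the paper.

```python
from collections import Counter

def poly_regions(seq, w=10, min_freq=0.7):
    """Return (start, end, item) spans whose length-w window is dominated
    by a single item at frequency >= min_freq (illustrative threshold)."""
    counts = Counter(seq[:w])            # statistics for the first window
    hits = []
    for start in range(len(seq) - w + 1):
        if start > 0:                    # O(1) update when the window slides
            counts[seq[start - 1]] -= 1
            counts[seq[start + w - 1]] += 1
        item, n = counts.most_common(1)[0]
        if n / w >= min_freq:
            hits.append((start, start + w, item))
    return hits

dna = "GCGCGCGCGC" + "A" * 12 + "GCGTGCGTGC"
regions = poly_regions(dna)              # windows overlapping the poly-A run
```

On this toy sequence, only windows covering at least 7 of the 12 consecutive A's are reported, which is the essence of detecting a region of high occurrence of one item.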
Abstract:
Version 1.1 of the Hypertext Transfer Protocol (HTTP) was principally developed as a means of reducing both document transfer latency and network traffic. The rationale for the performance enhancements in HTTP/1.1 is based on the assumption that the network is the bottleneck in Web transactions. In practice, however, the Web server can be the primary source of document transfer latency. In this paper, we characterize and compare the performance of HTTP/1.0 and HTTP/1.1 in terms of throughput at the server and transfer latency at the client. Our approach is based on considering a broader set of bottlenecks in an HTTP transfer; we examine how bottlenecks in the network, CPU, and disk system affect the relative performance of HTTP/1.0 versus HTTP/1.1. We show that the network demands under HTTP/1.1 are somewhat lower than under HTTP/1.0, and we quantify those differences in terms of packets transferred, server congestion window size, and data bytes per packet. We show that when the CPU is the bottleneck, there is relatively little difference in performance between HTTP/1.0 and HTTP/1.1. Surprisingly, we show that when the disk system is the bottleneck, performance using HTTP/1.1 can be much worse than with HTTP/1.0. Based on these observations, we suggest a connection management policy for HTTP/1.1 that can improve throughput, decrease latency, and keep network traffic low when the disk system is the bottleneck.
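The connection-count arithmetic behind the lower network demands can be sketched as follows. The per-connection packet costs are illustrative constants, not the paper's measured values, and the model deliberately ignores congestion-window and bytes-per-packet effects that the paper quantifies.

```python
# Toy model of connection overhead: HTTP/1.0 opens one TCP connection per
# object, while a persistent HTTP/1.1 connection is reused for all of them.
HANDSHAKE_PKTS = 3   # SYN, SYN/ACK, ACK
TEARDOWN_PKTS = 4    # FIN/ACK exchange in each direction

def connection_overhead(objects, persistent):
    """Packets spent purely on connection setup/teardown for a page fetch."""
    connections = 1 if persistent else objects
    return connections * (HANDSHAKE_PKTS + TEARDOWN_PKTS)

overhead_http10 = connection_overhead(objects=10, persistent=False)
overhead_http11 = connection_overhead(objects=10, persistent=True)
```

This sketch captures only why network demands fall with persistent connections; it says nothing about the server-side CPU and disk effects that dominate the paper's more surprising findings.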
Abstract:
The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction between short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random-Early-Drop (RED) packet dropping policies.
The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows.
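The division of labour described above can be sketched minimally: per-flow byte counts live only at the access router, while the core keeps just one queue per class. The 100 kB short/long cutoff below is a hypothetical threshold, not a value from the paper.

```python
from collections import defaultdict, deque

SHORT_LONG_CUTOFF = 100_000   # bytes; a hypothetical classification threshold

class AccessRouter:
    """Edge device: classifies and marks packets, holding per-flow state."""
    def __init__(self):
        self.bytes_seen = defaultdict(int)

    def mark(self, flow_id, size):
        self.bytes_seen[flow_id] += size
        return "short" if self.bytes_seen[flow_id] <= SHORT_LONG_CUTOFF else "long"

class CoreRouter:
    """Core device: per-class queues only, no per-flow state."""
    def __init__(self):
        self.queues = {"short": deque(), "long": deque()}

    def enqueue(self, marking, packet_id):
        self.queues[marking].append(packet_id)

edge, core = AccessRouter(), CoreRouter()
packets = [("a", 60_000), ("b", 1_500), ("a", 60_000)]   # (flow, bytes)
for packet_id, (flow, size) in enumerate(packets):
    core.enqueue(edge.mark(flow, size), packet_id)
# Flow "a" crosses the cutoff on its second packet and moves to the long class.
```

Keeping the classifier at the edge is what makes the scheme low-cost: the core's state grows with the number of classes (two), not the number of flows.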
Abstract:
We propose a new technique for efficiently delivering popular content from information repositories with bounded file caches. Our strategy relies on the use of fast erasure codes (a.k.a. forward error correcting codes) to generate encodings of popular files, of which only a small sliding window is cached at any time instant, even to satisfy an unbounded number of asynchronous requests for the file. Our approach capitalizes on concurrency to maximize sharing of state across different request threads while minimizing cache memory utilization. Additional reduction in resource requirements arises from providing for a lightweight version of the network stack. In this paper, we describe the design and implementation of our Cyclone server as a Linux kernel subsystem.
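The cache economics rest on a standard property of (n, k) erasure codes: any k distinct encoded blocks reconstruct the file. The toy model below elides the actual coding and Cyclone's kernel-level machinery, treating decodability as "k distinct block ids"; it illustrates why a client joining the cyclic broadcast at any instant completes after exactly k slots.

```python
from itertools import cycle, islice

K, N = 4, 12   # illustrative code sizes: any K of the N encoded blocks suffice

def slots_to_decode(join_offset):
    """Broadcast slots a client listens to, joining at an arbitrary offset,
    before it holds K distinct encoded blocks (i.e., can decode the file)."""
    stream = cycle(range(N))             # server cycles encodings forever
    seen = set()
    for slot, block in enumerate(islice(stream, join_offset, None), start=1):
        seen.add(block)
        if len(seen) == K:
            return slot

# Asynchronous requests share the same broadcast: the join time never matters.
slot_counts = [slots_to_decode(j) for j in range(2 * N)]
```

Because every client makes progress from whichever window is currently cached, the server can share one small in-memory window across an unbounded number of asynchronous request threads.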
Abstract:
The problem of discovering frequent arrangements of regions of high occurrence of one or more items of a given alphabet in a sequence is studied, and two efficient approaches are proposed to solve it. The first approach is entropy-based and uses an existing recursive segmentation technique to split the input sequence into a set of homogeneous segments. The key idea of the second approach is to use a set of sliding windows over the sequence. Each sliding window keeps a set of statistics of a sequence segment that mainly includes the number of occurrences of each item in that segment. Combining these statistics efficiently yields the complete set of regions of high occurrence of the items of the given alphabet. After identifying these regions, the sequence is converted to a sequence of labeled intervals (each one corresponding to a region). An efficient algorithm for mining frequent arrangements of temporal intervals on a single sequence is applied to the converted sequence to discover frequently occurring arrangements of these regions. The proposed algorithms are tested on various DNA sequences, producing results with significant biological meaning.
Abstract:
This paper shows, for the first time, the implementation of a WDM subsystem in the 2 μm wavelength window with mixed formats. Three wavelength channels were directly modulated with BPSK Fast-OFDM at 5 Gbit/s per channel, with a fourth NRZ-OOK channel externally modulated at 8.5 Gbit/s, giving a total capacity in excess of 20 Gbit/s.
Abstract:
This thesis is focused on the design and development of an integrated magnetic (IM) structure for use in high-power, high-current power converters employed in renewable energy applications. These applications require low-cost, high-efficiency, high-power-density magnetic components, and the use of IM structures can help achieve this goal. A novel CCTT-core split-winding integrated magnetic (CCTT IM) is presented in this thesis. This IM is optimized for use in high-power dc-dc converters. The CCTT IM design is an evolution of the traditional EE-core integrated magnetic (EE IM). The CCTT IM structure uses a split-winding configuration, allowing for the reduction of external leakage inductance, which is a problem for many traditional IM designs such as the EE IM. Magnetic poles are incorporated to help shape and contain the leakage flux within the core window. These magnetic poles have the added benefit of minimizing the winding power loss due to airgap fringing flux, as they shape the fringing flux away from the split windings. A CCTT IM reluctance model is developed which uses fringing equations to accurately predict the most probable regions of fringing flux around the pole and winding sections of the device. This helps in the development of a more accurate model, as it predicts both the dc and ac inductance of the component. A CCTT IM design algorithm is developed which relies heavily on the reluctance model of the CCTT IM. The design algorithm is implemented using the mathematical software tool Mathematica. This algorithm is modular in structure and allows for the quick and easy design and prototyping of the CCTT IM. The algorithm allows for the investigation of the CCTT IM boxed volume with variation of the input current ripple, for different power ranges, magnetic materials and frequencies. A high-power 72 kW CCTT IM prototype is designed and developed for use in an automotive fuel-cell-based drivetrain.
The CCTT IM design algorithm is initially used to design the component, while 3D and 2D finite element analysis (FEA) software is used to optimize the design. Low-cost, low-power-loss ferrite 3C92 is used for its construction, which, combined with a low number of turns, results in a very efficient design. A paper analysis is undertaken that compares the performance of the high-power CCTT IM design with that of two discrete inductors used in a two-phase (2L) interleaved converter. The 2L option consists of two discrete inductors constructed from high dc-bias material. Both topologies are designed for the same worst-case phase current ripple conditions, ensuring a like-for-like comparison. The comparison indicates that the total magnetic-component boxed volume of both converters is similar, while the CCTT IM has significantly lower power loss. Experimental results for the 72 kW (155 V dc, 465 A dc input, 420 V dc output) prototype validate the CCTT IM concept, with the component shown to be 99.7 % efficient. The high-power experimental testing was conducted at the General Motors Advanced Technology Center in Torrance, California. Calorific testing was used to determine the power loss in the CCTT IM component. Experimental 3.8 kW results and a 3.8 kW prototype compare and contrast the ferrite CCTT IM and high dc-bias 2L concepts over the typical operating range of a fuel cell under like-for-like conditions. The CCTT IM is shown to perform better than the 2L option over the entire power range. An 8 kW ferrite CCTT IM prototype is developed for use in photovoltaic (PV) applications. The CCTT IM is used in a boost pre-regulator as part of the PV power stage. The CCTT IM is compared with an industry-standard 2L converter consisting of two discrete ferrite toroidal inductors. The magnetic components are compared for the same worst-case phase current ripple, and the experimental testing is conducted over the operating range of a PV panel.
The prototype CCTT IM allows for a 50 % reduction in total boxed volume and mass in comparison to the baseline 2L option, while showing increased efficiency.
Abstract:
Irish monitoring data on PCDD/Fs, DL-PCBs and Marker PCBs were collated and combined with Irish Adult Food Consumption Data to estimate the dietary background exposure of Irish adults to dioxins and PCBs. Furthermore, all available information on the 2008 Irish pork dioxin food contamination incident was collated and analysed with a view to evaluating any potential impact the incident may have had on the general dioxin and PCB background exposure levels estimated for the adult population in Ireland. The average upper-bound daily intake of Irish adults of dioxins, Total WHO TEQ (2005) (PCDD/Fs & DL-PCBs), from environmental background contamination was estimated at 0.3 pg/kg bw/d, and at the 95th percentile at 1 pg/kg bw/d. The average upper-bound daily intake of Irish adults of the sum of the 6 Marker PCBs from background contamination ubiquitous in the environment was estimated at 1.6 ng/kg bw/d, and at the 95th percentile at 6.8 ng/kg bw/d. Dietary background exposure estimates for both dioxins and PCBs indicate that the Irish adult population has exposures below the European average, a finding that is also supported by the levels detected in the breast milk of Irish mothers. Exposure levels are below health-based guidance values and/or body burdens associated with the TWI (for dioxins) or with a NOAEL (for PCBs). Given current toxicological knowledge, based on biomarker data and estimated dietary exposure, general background exposure of the Irish adult population to dioxins and PCBs is of no human health concern. In 2008, a porcine fat sample taken as part of the national residues monitoring programme led to the detection of a major feed contamination incident in the Republic of Ireland. The source of the contamination was traced back to the use of contaminated oil in a direct-drying feed operation system.
Congener profiles in animal fat and feed samples showed a high level of consistency and pinpointed the likely source of the fuel contamination to be a highly chlorinated commercial PCB mixture. To estimate the additional exposure to dioxins and PCBs due to the contamination of pig and cattle herds, collection and a systematic review of all data associated with the contamination incident were conducted. A model was devised that took into account the proportion of contaminated product reaching the final consumer during the 90-day contamination incident window. For a 90-day period, the total additional exposure to Total TEQ (PCDD/F & DL-PCB) WHO (2005) amounted to 407 pg/kg bw/90d at the 95th percentile and 1911 pg/kg bw/90d at the 99th percentile. Exposure estimates derived for both dioxins and PCBs showed that the body burden of the general population remained largely unaffected by the contamination incident, although approximately 10 % of the adult population in Ireland was exposed to elevated levels of dioxins and PCBs. Whilst people in this 10 % cohort experienced quite a significant additional load on the existing body burden, the estimated exposure values do not approach body burdens associated with adverse health effects, based on current knowledge. The exposure period was also limited in time to approximately 3 months, following the FSAI recall of contaminated meat immediately on detection of the contamination. A follow-up breast milk study on Irish first-time mothers conducted in 2009/2010 did not show any increase in concentrations compared to the study conducted in 2002. The latter supports the conclusion that the majority of the Irish adult population was not affected by the contamination incident.
Abstract:
Introduction: The prevalence of diabetes is rising rapidly. Assessing quality of diabetes care is difficult. Lower Extremity Amputation (LEA) is recognised as a marker of the quality of diabetes care. The focus of this thesis was first to describe the trends in LEA rates in people with and without diabetes in the Republic of Ireland (RoI) in recent years and then, to explore the determinants of LEA in people with diabetes. While clinical and socio-demographic determinants have been well-established, the role of service-related factors has been less well-explored. Methods: Using hospital discharge data, trends in LEA rates in people with and without diabetes were described and compared to other countries. Background work included concordance studies exploring the reliability of hospital discharge data for recording LEA and diabetes and estimation of diabetes prevalence rates in the RoI from a nationally representative study (SLAN 2007). To explore determinants, a systematic review and meta-analysis assessed the effect of contact with a podiatrist on the outcome of LEA in people with diabetes. Finally, a case-control study using hospital discharge data explored determinants of LEA in people with diabetes with a particular focus on the timing of access to secondary healthcare services as a risk factor. Results: There are high levels of agreement between hospital discharge data and medical records for LEA and diabetes. Thus, hospital discharge data was deemed sufficiently reliable for use in this PhD thesis. A decrease in major diabetes-related LEA rates in people with diabetes was observed in the RoI from 2005-2012. In 2012, the relative risk of a person with diabetes undergoing a major LEA was 6.2 times (95% CI 4.8-8.1) that of a person without diabetes. Based on the systematic review and meta-analysis, contact with a podiatrist did not significantly affect the relative risk (RR) of LEA in people with diabetes. 
Results from the case-control study identified being single, documented CKD and documented hypertension as significant risk factors for LEA in people with diabetes, whilst documented retinopathy was protective. Within the seven-year time window included in the study, no association was detected between LEA in patients with diabetes and the timing of patient access to secondary healthcare for diabetes management. Discussion: Many countries have reported reduced major LEA rates in people with diabetes coinciding with improved organisation of healthcare systems. Reassuringly, these first national estimates in people with diabetes in the RoI from 2005 to 2012 demonstrated declining trends in major LEA rates. This may be attributable to changes in diabetes care and also to secular trends in smoking, dyslipidaemia and hypertension. Consistent with international practice, LEA trends data in Ireland can be used to monitor quality of care. Quantifying this improvement precisely, though, is problematic without robust denominator data on the prevalence of diabetes. However, a reduction in major diabetes-related LEA rates suggests improved quality of diabetes care. Much controversy exists around the reliability of hospital discharge data in the RoI. This thesis includes the first multi-site study to explore this issue, and it found hospital discharge data reliable for the reporting of the procedure of LEA and the diagnosis of diabetes. This project did not detect protective effects of access to services, including podiatry and secondary healthcare, on LEA in people with diabetes. A major limitation of the systematic review and meta-analysis was the design and quality of the included studies. The available data on the effect of contact with a podiatrist on LEA risk are too sparse to say anything definitive about the efficacy of podiatry in preventing LEA.
Limitations of the case-control study include lack of a diabetes register in Ireland, restricted information from secondary healthcare and lack of data available from primary healthcare. Due to these issues, duration of disease could not be accounted for in the study which limits the conclusions that can be drawn from the results. The model of diabetes care in the RoI is currently undergoing a re-configuration with plans to introduce integrated care. In the future, trends in LEA rates should be continuously monitored to evaluate the effectiveness of changes to the healthcare system. Efforts are already underway to improve the availability of routine data from primary healthcare with the recent development of the iPCRN (Irish Primary Care Research Network). Linkage of primary and secondary healthcare records with a unique patient identifier should be the goal for the future.
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the coming approach of the nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams. The work then carries out one of the first implementations of one such modulation format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and to assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra, J. B. Stark, Nature, 411, 1027-1030, 2001; [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
Abstract:
The amygdala is a limbic structure involved in many of our emotions and in the processing of emotions such as fear, anger and pleasure. Conditions such as anxiety, autism and also epilepsy have been linked to abnormal functioning of the amygdala, owing to improper neurodevelopment or damage. This thesis investigated the cellular and molecular changes in the amygdala in models of temporal lobe epilepsy (TLE) and maternal immune activation (MIA). The kainic acid (KA) model of TLE was used to induce Ammon's-horn sclerosis (AHS) and to investigate behavioural and cytoarchitectural changes that occur in the amygdala related to Neuropeptide Y1 receptor expression. KA-injected animals exhibited increased anxiety-like behaviours and displayed histopathological hallmarks of AHS, including CA1 ablation, granule cell dispersion, volume reduction and astrogliosis. Amygdalar volume reduction and neuronal loss were observed in the ipsilateral nuclei, accompanied by astrogliosis. In addition, a decrease in Y1 receptor-expressing cells was found in the ipsilateral CA1 and CA3 sectors of the hippocampus, the ipsi- and contralateral granule cell layer of the dentate gyrus, and the ipsilateral central nucleus of the amygdala, consistent with a reduction in Y1 receptor protein levels. The results suggest that plastic changes in hippocampal and/or amygdalar Y1 receptor expression may negatively impact anxiety levels. Gamma-aminobutyric acid (GABA) is the main inhibitory neurotransmitter in the brain, and tight regulation and appropriate control of GABA is vital for neurochemical homeostasis. GABA transporter-1 (GAT-1) is abundantly expressed by neurones and astrocytes and plays a key role in GABA reuptake and regulation. Imbalance in GABA homeostasis has been implicated in epilepsy, with GAT-1 being an attractive pharmacological target.
Electron microscopy was used to examine the distribution, expression and morphology of GAT-1-expressing structures in the amygdala of the TLE model. Results suggest that GAT-1 was preferentially expressed on putative axon terminals rather than astrocytic processes in this TLE model. Myelin integrity was examined, and the results suggested that myelinated fibres in the TLE model were damaged in comparison to controls. Synaptic morphology was studied, and the results suggested that asymmetric (excitatory) synapses occurred more frequently than symmetric (inhibitory) synapses in the TLE model in comparison to controls. This study illustrated that the amygdala undergoes ultrastructural alterations in this TLE model. Maternal immune activation (MIA) is a risk factor for neurodevelopmental disorders such as autism, schizophrenia and also epilepsy. MIA was induced at a critical window of amygdalar development, at E12, using the bacterial mimetic lipopolysaccharide (LPS). Results showed that MIA activates cytokine, toll-like receptor and chemokine expression in the fetal brain, and that this activation is prolonged in the postnatal amygdala. Inflammation elicited by MIA may prime the fetal brain for the alterations seen in the glial environment, and this may in turn have deleterious effects on neuronal populations, as seen in the amygdala at P14. These findings suggest that MIA induced during amygdalar development may predispose offspring to amygdala-related disorders such as heightened anxiety, fear impairment and also neurodevelopmental disorders.
Abstract:
The effect of fortification with skim milk powder (SMP) and sodium caseinate (NaCn) on Cheddar cheese was investigated. SMP fortification led to decreased moisture, increased yield, higher numbers of NSLAB and reduced proteolysis. The functional and textural properties were also affected by SMP addition, producing a harder, less meltable cheese than the control. NaCn fortification led to increased moisture, increased yield, decreased proteolysis and higher numbers of NSLAB. The functional and textural properties were affected by fortification with NaCn, producing a softer cheese with similar or less melt than the control. Reducing the lactose:casein ratio of Mozzarella cheese by using ultrafiltration led to higher pH and lower insoluble calcium, lactose, galactose and lactic acid levels in the cheese. The textural and functional properties of the cheese were affected by varying the lactose:casein ratio, producing a harder cheese with melt similar to the control later in ripening. The flavour and bake properties were also affected by the decreased lactose:casein ratio; the cheeses had less acid flavour and blister colour than the control cheese. Varying the ratio of αs1:β-casein in Cheddar cheese affected the texture and functionality of the cheese but did not affect insoluble calcium, proteolysis or pH. Increasing the ratio of αs1:β-casein led to cheese with lower meltability and higher hardness without adverse effects on flavour. Using camel chymosin in Mozzarella cheese instead of calf chymosin resulted in cheese with lower proteolysis, a higher softening point, higher hardness and lower blister quantity. The textural and functional properties that determine the shelf life of Mozzarella were maintained for a longer ripening period than with calf chymosin, thereby increasing the window of functionality of Mozzarella.
In summary, the results of the trials in this thesis demonstrate means of altering the textural, functional, rheological and sensory properties of Mozzarella and Cheddar cheeses.
Abstract:
We present measurements of morphological features in a thick turbid sample using light-scattering spectroscopy (LSS) and Fourier-domain low-coherence interferometry (fLCI) by processing with the dual-window (DW) method. A parallel frequency-domain optical coherence tomography (OCT) system with a white-light source is used to image a two-layer phantom containing polystyrene beads of diameters 4.00 and 6.98 μm on the top and bottom layers, respectively. The DW method decomposes each OCT A-scan into a time-frequency distribution with simultaneously high spectral and spatial resolution. The spectral information from localized regions in the sample is used to determine scatterer structure. The results show that the two scatterer populations can be differentiated using LSS and fLCI.
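The dual-window idea can be sketched as follows: two short-time spectra of the same signal, one computed with a narrow window (good spatial resolution) and one with a wide window (good spectral resolution), are multiplied pointwise. This is a hedged, pure-Python illustration; the window lengths and test frequencies are arbitrary, and details of the actual method (e.g. the window shapes used) are omitted.

```python
import cmath
import math

def windowed_spectrum(x, center, win, freqs):
    """DFT of x restricted to a length-`win` window about `center`,
    evaluated at normalized frequencies `freqs` (cycles/sample)."""
    half = win // 2
    out = []
    for f in freqs:
        acc = 0j
        for k in range(center - half, center + half):
            if 0 <= k < len(x):
                acc += x[k] * cmath.exp(-2j * math.pi * f * k)
        out.append(acc)
    return out

def dual_window(x, center, narrow, wide, freqs):
    """Pointwise product of a narrow- and a wide-window spectrum, giving
    simultaneously localized spatial and sharp spectral information."""
    a = windowed_spectrum(x, center, narrow, freqs)
    b = windowed_spectrum(x, center, wide, freqs)
    return [abs(p * q) for p, q in zip(a, b)]

freqs = [0.10, 0.25, 0.40]
signal = [math.cos(2 * math.pi * 0.25 * k) for k in range(200)]
power = dual_window(signal, center=100, narrow=8, wide=64, freqs=freqs)
# The product spectrum peaks at the true frequency, 0.25 cycles/sample.
```

The product inherits the wide window's frequency selectivity while remaining tied to the narrow window's location in the A-scan, which is the property the DW method exploits to extract depth-resolved spectra.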
Abstract:
Genome rearrangement often produces chromosomes with two centromeres (dicentrics) that are inherently unstable because of bridge formation and breakage during cell division. However, mammalian dicentrics, and particularly those in humans, can be quite stable, usually because one centromere is functionally silenced. Molecular mechanisms of centromere inactivation are poorly understood since there are few systems to experimentally create dicentric human chromosomes. Here, we describe a human cell culture model that enriches for de novo dicentrics. We demonstrate that transient disruption of human telomere structure non-randomly produces dicentric fusions involving acrocentric chromosomes. The induced dicentrics vary in structure near fusion breakpoints and like naturally-occurring dicentrics, exhibit various inter-centromeric distances. Many functional dicentrics persist for months after formation. Even those with distantly spaced centromeres remain functionally dicentric for 20 cell generations. Other dicentrics within the population reflect centromere inactivation. In some cases, centromere inactivation occurs by an apparently epigenetic mechanism. In other dicentrics, the size of the alpha-satellite DNA array associated with CENP-A is reduced compared to the same array before dicentric formation. Extra-chromosomal fragments that contained CENP-A often appear in the same cells as dicentrics. Some of these fragments are derived from the same alpha-satellite DNA array as inactivated centromeres. Our results indicate that dicentric human chromosomes undergo alternative fates after formation. Many retain two active centromeres and are stable through multiple cell divisions. Others undergo centromere inactivation. This event occurs within a broad temporal window and can involve deletion of chromatin that marks the locus as a site for CENP-A maintenance/replenishment.