Abstract:
Intravital imaging has revealed that T cells change their migratory behavior during physiological activation inside lymphoid tissue. Yet, it remains less well investigated how the intrinsic migratory capacity of activated T cells is regulated by chemokine receptor levels or other regulatory elements. Here, we used an adjuvant-driven inflammation model to examine how motility patterns corresponded with CCR7, CXCR4, and CXCR5 expression levels on ovalbumin-specific DO11.10 CD4⁺ T cells in draining lymph nodes. We found that while CCR7 and CXCR4 surface levels remained essentially unaltered during the first 48-72 h after activation of CD4⁺ T cells, their in vitro chemokinetic and directed migratory capacity toward the respective ligands, CCL19, CCL21, and CXCL12, was substantially reduced during this time window. Activated T cells recovered from this temporary decrease in motility on day 6 post immunization, coinciding with increased migration to the CXCR5 ligand CXCL13. The transiently impaired CD4⁺ T cell motility pattern correlated with increased LFA-1 expression and augmented phosphorylation of the microtubule regulator Stathmin on day 3 post immunization, yet neither microtubule destabilization nor integrin blocking could reverse the TCR-imprinted unresponsiveness. Furthermore, protein kinase C (PKC) inhibition did not restore chemotactic activity, ruling out PKC-mediated receptor desensitization as a mechanism for the reduced migration of activated T cells. Thus, we identify a cell-intrinsic decrease in motility, uncoupled from chemokine receptor levels, in CD4⁺ T cells shortly after activation, coinciding with clonal expansion. The transiently reduced ability to react to chemokinetic and chemotactic stimuli may contribute to the sequestering of activated CD4⁺ T cells in reactive peripheral lymph nodes, allowing for the integration of costimulatory signals required for full activation.
Abstract:
Smart et al. (2014) suggested that the detection of nitrate spikes in polar ice cores from solar energetic particle (SEP) events could be achieved if an analytical system with sufficiently high resolution were used. Here we show that the spikes they associate with SEP events are not reliably recorded in cores from the same location, even when the resolution is clearly adequate. We explain the processes that limit the effective resolution of ice cores. Liquid conductivity data suggest that the observed spikes are associated with sodium or another nonacidic cation, making it likely that they result from deposition of sea salt or similar aerosol that has scavenged nitrate, rather than from a primary input of nitrate in the troposphere. We consider that there is no evidence at present to support the identification of any spikes in nitrate as representing SEP events. Although such events undoubtedly create nitrate in the atmosphere, we see no plausible route to using nitrate spikes to document the statistics of such events.
Abstract:
This study quantitatively investigated the analgesic action of a low-dose constant rate infusion (CRI) of racemic ketamine (a 0.5 mg kg⁻¹ bolus followed by a dose rate of 10 µg kg⁻¹ min⁻¹) in conscious dogs, using a nociceptive withdrawal reflex (NWR) together with enantioselective measurement of plasma levels of ketamine and norketamine. Withdrawal reflexes evoked by transcutaneous single and repeated electrical stimulation (10 pulses, 5 Hz) of the digital plantar nerve were recorded from the biceps femoris muscle using surface electromyography. Ketamine did not affect NWR thresholds or the recruitment curves after a single nociceptive stimulation. Temporal summation (evaluated by repeated stimuli) and the evoked behavioural response scores were, however, reduced compared to baseline, demonstrating antinociceptive activity of ketamine that correlated with the peak plasma concentrations. Thereafter, the plasma levels at pseudo-steady state did not modulate temporal summation. Based on these experimental findings, low-dose ketamine CRI cannot be recommended for use as a sole analgesic in the dog.
Abstract:
Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from instantiating service VMs in the correct order with an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources to use for ensuring that the performance requirements of all of their applications are met. They are also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while the performance of tenants' applications is maximized.
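The SLA-driven scaling described above can be illustrated with a minimal sketch. This is not the thesis's implementation; the thresholds, metric names, and capacity bounds below are hypothetical, chosen only to show the shape of a reactive scaling decision: add a VM when the SLA response-time guarantee is at risk, remove one when resources are underused.

```python
def scale_decision(avg_response_ms, cpu_util, n_vms,
                   sla_target_ms=200, low_util=0.3,
                   min_vms=1, max_vms=10):
    """Return the new VM count for a service given current metrics.

    All thresholds are illustrative placeholders, not values from the thesis.
    """
    if avg_response_ms > sla_target_ms and n_vms < max_vms:
        return n_vms + 1   # scale out: SLA response-time guarantee at risk
    if cpu_util < low_util and n_vms > min_vms:
        return n_vms - 1   # scale in: allocated VMs are underused
    return n_vms           # steady state: keep the current allocation

print(scale_decision(350, 0.8, 2))  # SLA violated -> 3
print(scale_decision(120, 0.2, 3))  # underused   -> 2
print(scale_decision(120, 0.5, 3))  # steady      -> 3
```

A real controller would evaluate such rules periodically against monitoring data and would typically add hysteresis or cooldown periods to avoid oscillating between scale-out and scale-in.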
Motivated by the complexities associated with managing and scaling distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
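The genetic-algorithm allocation mentioned above can be sketched in miniature. The following is a hedged, simplified illustration, not the thesis's system: it optimises a single criterion (the number of active hosts, as a stand-in for operational cost) rather than multiple objectives, and the population size, mutation rate, and capacity values are arbitrary assumptions.

```python
import random

def fitness(assignment, demands, capacity):
    """Lower is better: number of active hosts, or infinity if any
    host's capacity is exceeded (infeasible placement)."""
    load = {}
    for vm, host in enumerate(assignment):
        load[host] = load.get(host, 0) + demands[vm]
    if any(l > capacity for l in load.values()):
        return float("inf")
    return len(load)

def evolve(demands, n_hosts, capacity, pop=30, gens=200, seed=0):
    rng = random.Random(seed)
    # Each individual maps VM index -> host index.
    population = [[rng.randrange(n_hosts) for _ in demands]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, demands, capacity))
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(demands))
            child = p1[:cut] + p2[cut:]             # one-point crossover
            if rng.random() < 0.3:                  # mutation: move one VM
                child[rng.randrange(len(demands))] = rng.randrange(n_hosts)
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda a: fitness(a, demands, capacity))
    return best, fitness(best, demands, capacity)

best, hosts_used = evolve(demands=[2, 3, 1, 4, 2], n_hosts=5, capacity=6)
print(hosts_used)  # number of active hosts in the best placement found
```

A multi-objective version, as the thesis describes, would replace the scalar fitness with several criteria (cost, tenant application performance, consolidation) and rank candidates by Pareto dominance or a weighted combination.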