973 results for pollen threshold values
Abstract:
Bistability and switching are two important aspects of the genetic regulatory network of λ phage. Positive and negative feedbacks are key regulatory mechanisms in this network. By introducing threshold values, the developmental pathway of λ phage is divided into different stages. When the protein level reaches a threshold value, positive or negative feedback becomes effective and regulates the process of development. Using this regulatory mechanism, we present a quantitative model, based on experimental data, that realizes the bistability and switching of λ phage. This model describes the decisive mechanisms for the different pathways in induction. A stochastic model is also introduced to describe the statistical properties of switching in induction. A stochastic degradation rate is used to represent the intrinsic noise that switches the system from the lysogenic pathway to the lytic pathway during induction. The approach in this paper represents an attempt to describe the regulatory mechanism of a genetic regulatory network under the influence of intrinsic noise within the framework of continuous models. (C) 2003 Elsevier Ltd. All rights reserved.
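As context for the mechanism this abstract describes, the following is a minimal sketch of a bistable positive-feedback system with a stochastically perturbed degradation rate. It is not the authors' model: the Hill-type production term, all parameter values, and the noise strength are assumptions, chosen only so that the deterministic system has two stable states and noise can drive switching between them.

```python
import numpy as np

# Assumed toy system (not the paper's model): dx/dt = b + a*x^n/(K^n + x^n) - d*x,
# with the degradation rate d perturbed by white noise (Euler-Maruyama step).
# With these parameters the deterministic system is bistable: stable states
# near x ~ 0.07 (low) and x ~ 3.8 (high), separated by a threshold near 0.19.
b, a, K, n, d0 = 0.05, 4.0, 1.0, 2.0, 1.0   # assumed parameter values
sigma, dt, steps = 0.6, 0.01, 200_000        # assumed noise strength and grid

rng = np.random.default_rng(0)
x = 0.07                                     # start near the low ("lysogenic-like") state
trace = np.empty(steps)
for t in range(steps):
    d = d0 + sigma * rng.standard_normal() / np.sqrt(dt)  # noisy degradation rate
    x += dt * (b + a * x**n / (K**n + x**n) - d * x)
    x = max(x, 0.0)                          # protein level cannot go negative
    trace[t] = x

# Fraction of time spent above the threshold region, i.e. in the high state;
# whether and how often switching occurs depends on the assumed sigma.
print("time fraction in high state:", np.mean(trace > 2.0))
```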
Abstract:
This study was designed to assess the effect of sequence mismatches between primers and their targets on viral quantitative PCR. Numerous primers were constructed incorporating various mismatches with a target sequence on the BKV T antigen gene. When these primers were used under standard TaqMan two-step cycling conditions, as few as two mismatches in a single primer increased cycle threshold (Ct) values and significantly influenced the calculated viral load. (C) 2005 Elsevier B.V. All rights reserved.
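To make the link between Ct shifts and viral load concrete, here is a small worked example under the standard assumption of exponential amplification (the quantity doubles each cycle at 100% efficiency). The two-cycle shift used below is an illustrative value, not a figure reported by the study.

```python
# Illustrative only: a mismatch-induced Ct increase propagates into the
# viral-load estimate as E^ct_shift, where E is the amplification
# efficiency (E = 2 at 100% efficiency). The +2-cycle shift is an
# assumed example value, not one taken from the study.
def fold_underestimate(ct_shift, efficiency=2.0):
    """Factor by which viral load is underestimated for a given Ct increase."""
    return efficiency ** ct_shift

print(fold_underestimate(2.0))  # a +2-cycle shift -> ~4x underestimate
```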
Abstract:
The long-crack threshold behaviour of polycrystalline Udimet 720 has been investigated. Faceted crack growth is seen near threshold when the monotonic crack tip plastic zone is contained within the coarsest grain size. At very high load ratios R (= P_min/P_max) it is possible for the monotonic crack tip plastic zone to exceed the coarsest grain size throughout the entire crack growth regime, and non-faceted, structure-insensitive crack growth is then seen down to threshold. Intrinsic threshold values were obtained for non-faceted and faceted crack growth using a constant-K_max, increasing-K_min, computer-controlled load-shedding technique (K is the stress intensity factor). Very high R values are obtained at threshold using this technique (0.75-0.95), eliminating closure effects, so these values reflect the intrinsic resistance of the material to crack propagation. The intrinsic non-faceted threshold value ΔK_th is lower (2.3 MN m^-3/2) than the intrinsic faceted ΔK_th value (4.8 MN m^-3/2). This is thought to reflect not only the effect of crack branching and deflection (in the faceted case) on the crack driving force, but also the inherent difference in the resistance of the material to the two different crack propagation micromechanisms. © 1993 The Institute of Materials.
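The quantities in this abstract are tied together by the definitions R = K_min/K_max and ΔK = K_max - K_min, so ΔK = (1 - R)·K_max. A short worked computation (the pairings of R with each ΔK_th below are illustrative, within the reported 0.75-0.95 range):

```python
# From the definitions R = K_min/K_max and ΔK = K_max - K_min:
#   ΔK = (1 - R) * K_max,  so  K_max = ΔK_th / (1 - R) at threshold.
# Example pairings of R with the reported ΔK_th values are assumed.
def k_max_at_threshold(delta_k_th, r):
    """K_max needed to sit at threshold ΔK_th for load ratio r."""
    return delta_k_th / (1.0 - r)

print(k_max_at_threshold(2.3, 0.90))  # -> 23.0 MN m^-3/2 (non-faceted case)
print(k_max_at_threshold(4.8, 0.75))  # -> 19.2 MN m^-3/2 (faceted case)
```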
Abstract:
The effect of residual stresses, induced by cold-water quenching, on the morphology of fatigue crack fronts has been investigated in a powder metallurgy 8090 aluminium alloy, with and without reinforcement in the form of 20 wt-% SiC particles. Residual stress measurements reveal that the surface compressive stresses developed in these materials are significantly greater than in conventional ingot metallurgy 8090, because surface yielding occurs on quenching. The yield stresses of the powder-route materials are greater than those of ingot-produced 8090, and hence greater surface stresses can be maintained. In fatigue, severe crack front bowing is observed in the powder-formed materials as a result of the reduction of the R ratio (minimum load/maximum load) by the compressive residual stresses at the sides of the specimen, causing premature crack closure and hence reducing the local driving force for fatigue crack growth, ΔK_eff. This distortion of the crack fronts introduces large errors into measurements of crack growth rate and threshold values of ΔK.
Abstract:
The fatigue behaviour of SiC-particulate-reinforced aluminium alloy composites is briefly reviewed. The improved fatigue life reported in stress-controlled tests results from the higher stiffness of the composites; at a constant strain level, fatigue life is therefore generally inferior to that of monolithic alloys. The role of SiC particulate reinforcement is examined for fatigue crack initiation, short-crack growth, and long-crack growth. Crack initiation is observed to occur at the matrix-SiC interface in cast composites, and either at or near the matrix-SiC interface or at cracked SiC particles in powder metallurgy processed composites, depending on particle size and morphology. The da/dN vs ΔK relationship in the composites is characterized by crack growth rates confined to a narrow range of ΔK, because of the lower fracture toughness and relatively high threshold values of the composites compared with monolithic alloys. An enhanced Paris-region slope, attributed to a monotonic fracture contribution, is reported, and the extent of this contribution is found to depend on particle size. The effects of the aging condition on crack growth rates and the particle size dependence of threshold values are also treated in this paper. © 1991.
Abstract:
The fatigue-crack propagation and threshold behaviour of a C-Mn steel containing boron has been investigated at a range of strength levels suitable for mining chain applications. The heat-treatment variables examined include two austenitizing temperatures (900°C and 1250°C) and a range of tempering treatments from the as-quenched condition to tempering at 400°C. In mining applications the haulage chains undergo a 'calibration' process which has the effect of imposing a tensile prestrain on the chain links before they go into service. Prestrain is shown to reduce threshold values in these steels, and this behaviour is related to its effects on the residual stress distribution in the test specimens.
Abstract:
In this work we present a quality-driven approach to segment selection for DASH (Dynamic Adaptive Streaming over HTTP) under varying network conditions. Current adaptation algorithms focus largely on regulating data rates using network-layer parameters, selecting the quality level on offer that eliminates buffer underrun without considering picture fidelity. In reality, viewers may accept a level of buffer underrun in order to achieve improved picture fidelity. In this case, conventional DASH algorithms can cause extreme degradation of picture fidelity when attempting to eliminate buffer underrun under scarce bandwidth. Our work is concerned with a quality-aware rate adaptation scheme that maximizes the client's quality of experience in terms of both continuity and fidelity (picture quality). Results show that the proposed scheme can maintain a high level of quality for streaming services, especially at low packet loss rates. It is also shown that eliminating buffer underrun completely greatly reduces the PSNR, which reflects the picture quality of the video. Our scheme exposes the trade-off between continuity-based quality and resolution-based quality, which can be used to set threshold values for the level of quality desired by clients with different quality requirements. © 2013 IEEE.
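To illustrate the contrast the abstract draws, here is a minimal sketch of quality-aware segment selection. It is not the paper's algorithm: the bitrate ladder, the function names, and both threshold values are assumptions. A conventional rate-driven picker drops to whatever the bandwidth sustains; the quality-aware variant enforces a fidelity floor, accepting some underrun risk instead.

```python
# Sketch only (assumed names, thresholds and bitrate ladder, not the
# paper's scheme): conventional DASH picks the highest sustainable
# bitrate; a fidelity floor keeps picture quality from collapsing when
# bandwidth is scarce, at the cost of some buffer-underrun risk.
BITRATES = [250, 500, 1000, 2000, 4000]  # kbps ladder (assumed)

def pick_bitrate(bandwidth_kbps, buffer_s,
                 min_fidelity_kbps=500,   # fidelity-floor threshold (assumed)
                 low_buffer_s=5.0):       # continuity threshold (assumed)
    rate_driven = max((b for b in BITRATES if b <= bandwidth_kbps),
                      default=BITRATES[0])
    if buffer_s < low_buffer_s:
        # Continuity at risk: the rate-driven choice may hit the lowest
        # rung; the quality-aware variant never goes below the floor.
        return max(rate_driven, min_fidelity_kbps)
    return rate_driven

print(pick_bitrate(bandwidth_kbps=300, buffer_s=2.0))  # -> 500, not 250
```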
Abstract:
A segment selection method controlled by Quality of Experience (QoE) factors for Dynamic Adaptive Streaming over HTTP (DASH) is presented in this paper. Current rate adaptation algorithms aim to eliminate buffer underrun events by significantly reducing the code rate when experiencing pauses in replay. In reality, however, viewers may choose to accept a level of buffer underrun in order to achieve improved picture fidelity, or to accept degraded picture fidelity in order to maintain service continuity. The rate adaptation scheme proposed in our work maximizes the user QoE in terms of both continuity and fidelity (picture quality) in DASH applications. It is shown that this scheme achieves a high level of quality for streaming services, especially at low packet loss rates. Our scheme can also maintain the best trade-off between continuity-based quality and fidelity-based quality by determining proper threshold values for the level of quality intended by clients with different quality requirements. In addition, the integration of the rate adaptation mechanism with the scheduling process is investigated in the context of a mobile communication network, and the related performance is analyzed.
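One way to express the continuity/fidelity trade-off this abstract describes is as a weighted score over candidate bitrates, with the client's quality requirement encoded as the weight. The formulation below is an assumed illustration, not the paper's QoE model; the crude buffer-versus-download continuity term and all values are placeholders.

```python
# Assumed formulation (not the paper's): score each candidate bitrate by
# a weighted sum of a continuity term (buffer margin against the segment
# download time, a crude proxy) and a fidelity term (normalized bitrate).
# The client weight w sets the trade-off: w -> 1 favours continuity,
# w -> 0 favours fidelity.
def qoe_score(bitrate, bandwidth_kbps, buffer_s,
              seg_dur_s=2.0, w=0.6, max_rate=4000):
    download_s = seg_dur_s * bitrate / bandwidth_kbps  # time to fetch one segment
    continuity = min(1.0, buffer_s / download_s)       # margin >= 1 means safe
    fidelity = bitrate / max_rate
    return w * continuity + (1 - w) * fidelity

rates = [250, 500, 1000, 2000, 4000]
best = max(rates, key=lambda r: qoe_score(r, bandwidth_kbps=1500, buffer_s=4.0))
print(best)
```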
Abstract:
The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between the local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from the local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. Decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, in which transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, in which the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, trade-offs in parameter choices are investigated through Monte Carlo simulations.
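The sequential probability ratio test at the heart of this framework has simple threshold mechanics, sketched below for the classical Wald SPRT with independent Gaussian observations. The thesis' dependent-observation, communication-constrained variant is more involved; this only shows how the two thresholds follow from the target error probabilities.

```python
import math, random

# Classical Wald SPRT sketch (not the thesis' dependent-observation test):
# H0: N(0,1) vs H1: N(1,1). The stopping thresholds follow from the target
# false-alarm probability alpha and missed-detection probability beta.
def sprt(samples, alpha=0.01, beta=0.01, mu0=0.0, mu1=1.0):
    upper = math.log((1 - beta) / alpha)   # accept H1 when LLR >= upper
    lower = math.log(beta / (1 - alpha))   # accept H0 when LLR <= lower
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood-ratio increment for unit-variance Gaussians
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

random.seed(1)
data = (random.gauss(1.0, 1.0) for _ in range(10_000))  # H1 is true here
print(sprt(data))  # typically decides H1 after a handful of samples
```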
Abstract:
BACKGROUND: Data about the special phenotypes, natural course, and prognostic variables of patients with acquired cold urticaria (ACU) are scarce. OBJECTIVES: We sought to describe the clinical features and disease course of patients with ACU, with special attention paid to particular phenotypes, and to examine possible parameters that could predict the evolution of the disease. METHODS: This study was a retrospective chart review of 74 patients with ACU who visited a tertiary referral center for urticaria between 2005 and 2015. RESULTS: Fourteen patients (18.9%) presented with life-threatening reactions after cold exposure, and 21 (28.4%) showed negative results after cold stimulation tests (classified as atypical ACU). Nineteen patients (25.7%) achieved complete symptom resolution by the end of the surveillance period and had no subsequent recurrences. Higher rates of atypical ACU, along with a lower likelihood of achieving complete symptom resolution, were observed in patients whose symptoms began during childhood (P < .05). Patients with atypical ACU had shorter disease duration and required lower doses of antihistamines to achieve disease control (P < .05). Age at disease onset, symptom severity, and cold urticaria threshold values were found to be related to disease evolution (P < .05). LIMITATIONS: This study was limited by its retrospective nature. CONCLUSIONS: Knowledge of the clinical predictors of disease evolution, along with the clinical features of the ACU phenotypes, would allow the establishment of an early and proper therapeutic strategy.
Abstract:
In a context where unpaved roads are liable to carry heavy loads, a rigorous design method for these pavements, based on mechanistic-empirical principles and on the mechanical behaviour of the subgrade soils, is desirable. Mechanistic design combined with damage laws allows the optimization of unpaved pavement structures and the reduction of construction and maintenance costs. The goal of this project is therefore the development of a mechanistic-empirical design method adapted to unpaved roads. The first step was to develop a calculation code for determining the stresses and strains in the pavement. Empirical damage laws for unpaved roads were then developed. Finally, the calculation methods were used to produce design charts. The development of the calculation code consisted of modelling the pavement as a multi-layer elastic system. The modelling used the Odemark transformation and the Boussinesq equations to compute the deformations under the load. Empirical transfer functions adapted to unpaved roads were also developed, in two steps. First, rutting threshold values were established, corresponding to levels of functional and structural pavement condition judged reasonable. Then, allowable strain criteria were developed by associating the theoretical strains computed with the calculation code with the damage observed on several in-service roads. The tests were carried out on typical pavements reconstructed in the laboratory and subjected to repeated loading with a load simulator. The pavements were instrumented to measure the strain at the top of the subgrade soil, and damage rates were measured during the tests.
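The Odemark-Boussinesq combination named in this abstract can be sketched compactly: the granular layer is mapped onto an equivalent thickness of subgrade material (Odemark's method of equivalent thickness), and the Boussinesq solution for a uniformly loaded circular area then gives the stresses, from which Hooke's law yields the vertical strain at the top of the subgrade. The formulas below are the standard textbook ones, not the project's calculation code, and all input values are assumed examples.

```python
import math

# Standard Odemark + Boussinesq sketch (not the project's code); all
# inputs are assumed example values.
def odemark_he(h, e_layer, e_subgrade, f=0.9):
    """Equivalent thickness of a layer mapped onto the subgrade:
    h_e = f * h * (E1/E2)^(1/3), with f an empirical correction factor."""
    return f * h * (e_layer / e_subgrade) ** (1.0 / 3.0)

def boussinesq_strain_z(q, a, z, e, nu):
    """Vertical strain at depth z under the centre of a circular load of
    pressure q and radius a on an elastic half-space (E, nu)."""
    r = math.hypot(a, z)                       # sqrt(a^2 + z^2)
    sigma_z = q * (1.0 - z**3 / r**3)
    sigma_r = (q / 2.0) * (1.0 + 2.0 * nu
                           - 2.0 * (1.0 + nu) * z / r
                           + z**3 / r**3)
    return (sigma_z - 2.0 * nu * sigma_r) / e  # Hooke's law, axisymmetric

# Assumed example: 300 mm granular base (E = 250 MPa) over a subgrade
# (E = 50 MPa, nu = 0.45), 40 kN wheel load at 700 kPa contact pressure.
a = math.sqrt(40e3 / (math.pi * 700e3))        # contact radius, m
he = odemark_he(0.30, 250e6, 50e6)             # equivalent depth, m
print(boussinesq_strain_z(700e3, a, he, 50e6, 0.45))  # subgrade strain, m/m
```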
Abstract:
Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds: maximum and minimum limits drive resource allocation and deallocation actions. This leads to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimizations regarding the elasticity-HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration even when earlier activation would be pertinent for reducing the application runtime.
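The reactive threshold mechanism the article evaluates can be sketched in a few lines. This is not the AutoElastic implementation; the function names, threshold values, and node limits are assumptions chosen for illustration.

```python
# Minimal sketch of reactive, threshold-based elasticity (assumed names
# and values; not the AutoElastic implementation). Load above the maximum
# threshold scales out; load below the minimum threshold scales in.
def reconfigure(cpu_load, nodes, upper=0.70, lower=0.30,
                min_nodes=1, max_nodes=16):
    if cpu_load > upper and nodes < max_nodes:
        return nodes + 1          # scale out: allocate a node
    if cpu_load < lower and nodes > min_nodes:
        return nodes - 1          # scale in: deallocate a node
    return nodes                  # load sits between the thresholds

# An upper threshold near 100% (e.g. 0.95) fires only when the load has
# almost saturated, which illustrates the weak reactivity the article
# observes for such settings.
nodes = 2
for load in [0.55, 0.82, 0.91, 0.40, 0.18]:
    nodes = reconfigure(load, nodes)
    print(load, "->", nodes)
```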
Abstract:
In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, actuarial work in natural disaster insurance, quality control schemes, etc. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged relevant by the interested authorities. Such information about extremes serves as essential guidance to those authorities in their decision-making processes. In this context, however, data are usually skewed in nature, and the rarity of exceedances of a large threshold implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is the branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for the estimation of small threshold exceedance probabilities and extreme quantiles often lead to poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external, computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
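As EVT context for the estimation problem described here, the following sketches the standard peaks-over-threshold (POT) point estimate of a small exceedance probability via a generalized Pareto fit. It does not reproduce the dissertation's semiparametric fusion method; the data are synthetic and the threshold choice is an assumed judgment call.

```python
import numpy as np
from scipy.stats import genpareto

# Standard POT estimate of a small tail probability (EVT background, not
# the dissertation's SP method). Data: an assumed heavy-tailed sample.
rng = np.random.default_rng(42)
x = rng.pareto(3.0, size=2000) + 1.0

u = np.quantile(x, 0.95)                  # threshold choice (judgment call)
excesses = x[x > u] - u
c, _, scale = genpareto.fit(excesses, floc=0.0)  # GPD shape and scale

def p_exceed(level):
    """P(X > level), level > u: tail fraction times the GPD survival."""
    return (excesses.size / x.size) * genpareto.sf(level - u, c, scale=scale)

print(p_exceed(10.0))                     # estimated small exceedance probability
```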
Abstract:
Sediment represents an important sink for contaminants and a source of contamination for the aquatic food chain. Toxicity tests using amphipods as test organisms are employed, together with chemical analyses, to assess marine and estuarine sediments. The present work aims to evaluate the sediment quality at six stations located in the Santos and São Vicente Estuarine and Port System (São Paulo, Brazil), using acute whole-sediment toxicity tests with amphipods (Tiburonella viscana) and chemical analyses of metals, PCBs, and PAHs. Other sediment parameters were analysed, such as organic carbon and grain size. Higher contamination levels were observed in the inner portion of the estuary, where the Port of Santos and the industrial zone are located. The toxicity tests showed significant adverse results for most of the samples tested, and the sediments from the inner portion of the estuary presented higher toxicity. Principal component analysis indicated a strong relationship between sediment contamination and toxicity. The positive correlations of these factors in the studied samples were used to establish the weights of the chemical concentrations associated with adverse effects. These analyses made it possible to estimate threshold effect values for sediment contamination through multivariate analysis, identifying the contaminants associated with the biological effect. The suggested values are: Cu, 69.0; Pb, 17.4; Zn, 73.3 (mg.kg-1); PAHs, 0.5 (mg.kg-1); and PCBs, 0.1 (µg.kg-1).
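A small screening helper can be built directly from the threshold effect values suggested in this abstract: flag a sediment sample whenever any measured concentration exceeds its threshold. The sample values in the usage line are hypothetical; the thresholds and units are the ones quoted above.

```python
# Threshold effect values quoted in the abstract (Cu/Pb/Zn and PAHs in
# mg/kg, PCBs in ug/kg). Screening logic and sample data are illustrative.
THRESHOLDS = {"Cu": 69.0, "Pb": 17.4, "Zn": 73.3, "PAHs": 0.5, "PCBs": 0.1}

def exceedances(sample):
    """Return the contaminants whose concentration exceeds its threshold."""
    return {k: v for k, v in sample.items()
            if k in THRESHOLDS and v > THRESHOLDS[k]}

# Hypothetical inner-estuary sample for illustration
print(exceedances({"Cu": 120.0, "Pb": 12.0, "Zn": 80.0,
                   "PAHs": 0.7, "PCBs": 0.05}))  # -> flags Cu, Zn, PAHs
```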