941 results for Series Summation Method
Abstract:
The adaptive thermal comfort theory considers people as active rather than passive recipients in response to ambient physical thermal stimuli, in contrast with conventional, heat-balance-based thermal comfort theory. Occupants interact actively with the environments they occupy, drawing on physiological, behavioural and psychological adaptations to achieve ‘real world’ thermal comfort. This paper introduces a method of quantifying the physiological, behavioural and psychological portions of the adaptation process using the analytic hierarchy process (AHP), based on case studies conducted in the UK and China. Apart from the three categories of adaptation, which are viewed as criteria, six possible alternatives are considered: physiological indices/health status, the indoor environment, the outdoor environment, personal physical factors, environmental control and thermal expectation. With the AHP technique, all the above-mentioned criteria, factors and corresponding elements are arranged in a hierarchy tree and quantified by a series of pair-wise judgements. A sensitivity analysis is carried out to improve the quality of the results. The proposed quantitative weighting method gives researchers the opportunity to better understand the adaptive mechanisms and to reveal the significance of each category for the achievement of adaptive thermal comfort.
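The weighting step described above follows the standard AHP recipe of pairwise judgement matrices and principal-eigenvector priorities. The sketch below illustrates that recipe with an invented 3x3 judgement matrix for the three adaptation criteria; it is not the matrix elicited in the paper.

```python
import numpy as np

# Illustrative pairwise judgement matrix for the three adaptation criteria
# (physiological, behavioural, psychological); the values are hypothetical.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# Priority weights = normalized principal eigenvector of the judgement matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CR = (lambda_max - n) / (n - 1) / RI, with RI = 0.58 for n = 3.
n = A.shape[0]
lambda_max = eigvals.real[k]
CR = (lambda_max - n) / (n - 1) / 0.58

print("criteria weights:", weights.round(3))
print("consistency ratio:", round(CR, 3))
```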
Abstract:
Controllers for feedback substitution schemes exhibit a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop-shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a genetic algorithm (GA) to perform a multi-objective optimization of the controller weights (MOGA). A clonal selection algorithm is used to further direct the search of the GA towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
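Independently of the LSDP/LMI machinery, the multi-objective step ultimately amounts to keeping the non-dominated controller candidates. A minimal Pareto-filter sketch over hypothetical (normalized response time, noise power gain) pairs, not the paper's optimization:

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (both objectives minimized)."""
    obj = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(obj):
        # p is dominated if some other point is <= in every objective and < in at least one
        dominated = np.any(np.all(obj <= p, axis=1) & np.any(obj < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate controllers: columns = (normalized response time, noise power gain)
candidates = np.array([[1.0, 4.0], [1.2, 2.5], [2.0, 2.6], [0.8, 6.0], [1.5, 2.0]])
print("Pareto-optimal candidates:", pareto_front(candidates))
```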
Abstract:
Following a malicious or accidental atmospheric release in an outdoor environment, it is essential for first responders to ensure safety by identifying areas where human life may be in danger. For this to happen quickly, reliable information is needed on the source strength and location, and the type of chemical agent released. We present here an inverse modelling technique that estimates the source strength and location of such a release, together with the uncertainty in those estimates, using a limited number of concentration measurements from a network of chemical sensors, considering a single, steady, ground-level source. The technique is evaluated using data from a set of dispersion experiments conducted in a meteorological wind tunnel, where simultaneous measurements of concentration time series were obtained in the plume from a ground-level point-source emission of a passive tracer. In particular, we analyze the sensitivity of the estimates to the number of sensors deployed and their arrangement, and to sampling and model errors. We find that the inverse algorithm can generate acceptable estimates of the source characteristics with as few as four sensors, provided these are well placed and the sampling error is controlled. Configurations with at least three sensors in a profile across the plume were found to be superior to the other arrangements examined. Analysis of the influence of sampling error due to the use of short averaging times showed that the uncertainty in the source estimates grew as the sampling time decreased, and that averaging times greater than about 5 min (full-scale time) are needed for acceptable accuracy.
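As a toy illustration of this kind of source inversion (not the algorithm used in the paper), the sketch below pairs a crude Gaussian-plume forward model with a grid search over candidate source locations and a least-squares fit of the emission rate; the plume-spread parameter, wind speed and sensor layout are all invented.

```python
import numpy as np

def plume(x, y, q, xs, ys, u=1.0, a=0.08):
    """Very simple ground-level Gaussian-plume concentration (arbitrary units)."""
    dx = x - xs
    c = np.zeros_like(dx, dtype=float)
    down = dx > 0
    sig = a * dx[down]                          # linear plume spread with downwind distance
    c[down] = q / (np.pi * u * sig**2) * np.exp(-((y - ys)[down]**2) / (2 * sig**2))
    return c

# Synthetic "truth" and four noisy sensor readings
rng = np.random.default_rng(0)
xs_true, ys_true, q_true = 0.0, 0.0, 2.0
sx = np.array([5.0, 5.0, 5.0, 10.0])
sy = np.array([-1.0, 0.0, 1.0, 0.5])
obs = plume(sx, sy, q_true, xs_true, ys_true) * (1 + 0.05 * rng.standard_normal(4))

# Grid search over source location; emission rate solved by least squares at each node
best = None
for xs in np.linspace(-2, 4, 61):
    for ys in np.linspace(-2, 2, 41):
        g = plume(sx, sy, 1.0, xs, ys)          # unit-rate forward model
        if g.max() == 0:
            continue
        q = max(g @ obs / (g @ g), 0.0)         # best-fit emission rate
        misfit = np.sum((obs - q * g) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, xs, ys, q)

print("estimated (x, y, Q):", tuple(round(v, 2) for v in best[1:]))
```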
Abstract:
Shiga toxin producing Escherichia coli (STEC) strains are foodborne pathogens whose ability to produce Shiga toxin (Stx) is due to the integration of Stx-encoding lambdoid bacteriophage (Stx phage). Circulating, infective Stx phages are very difficult to isolate, purify and propagate such that there is no information on their genetic composition and properties. Here we describe a novel approach that exploits the phage's ability to infect their host and form a lysogen, thus enabling purification of Stx phages by a series of sequential lysogen isolation and induction steps. A total of 15 Stx phages were rigorously purified from water samples in this way, classified by TEM and genotyped using a PCR-based multi-loci characterisation system. Each phage possessed only one variant of each target gene type, thus confirming its purity, with 9 of the 15 phages possessing a short tail-spike gene and identified by TEM as Podoviridae. The remaining 6 phages possessed long tails, four of which appeared to be contractile in nature (Myoviridae) and two of which were morphologically very similar to bacteriophage lambda (Siphoviridae).
Abstract:
The Fourier series can be used to describe periodic phenomena such as the one-dimensional crystal wave function. Through the trigonometric treatments in Hückel theory, it is shown that Hückel theory is a special case of Fourier series theory. Thus, the conjugated π system is in fact a periodic system. This explains why so simple a model as Hückel theory can be so powerful in organic chemistry: although it only considers immediate neighboring interactions, it implicitly takes account of the periodicity of the complete picture in which all the interactions are considered. Furthermore, the success of the trigonometric methods in Hückel theory is not accidental, as it is based on the fact that Hückel theory is a specific example of the more general method of Fourier series expansion. It is also important for educational purposes to expand a specific approach such as Hückel theory into a more general method such as Fourier series expansion.
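The correspondence can be checked numerically: for a linear chain of N conjugated carbons, diagonalizing the Hückel matrix reproduces the trigonometric (Fourier-type) energies E_k = α + 2β cos(kπ/(N+1)). A minimal sketch with placeholder α and β values:

```python
import numpy as np

N, alpha, beta = 6, 0.0, -1.0            # hexatriene-like chain; alpha, beta in units of |beta|

# Hückel matrix: alpha on the diagonal, beta between bonded neighbours
H = alpha * np.eye(N) + beta * (np.eye(N, k=1) + np.eye(N, k=-1))
numeric = np.sort(np.linalg.eigvalsh(H))

# Trigonometric (Fourier-series-type) solution for the same chain
k = np.arange(1, N + 1)
analytic = np.sort(alpha + 2 * beta * np.cos(k * np.pi / (N + 1)))

print(np.allclose(numeric, analytic))    # True: both routes give the same orbital energies
```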
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
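The "optimal filter (fingerprint)" step can be sketched in a few lines: given an assumed signal pattern g and a noise covariance C estimated from control simulations, the fingerprint f = C^-1 g maximizes the signal-to-noise ratio of the detection variable d = f·y. The pattern and covariance below are synthetic stand-ins, not the CGCM fields.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                         # number of spatial points / EOF coefficients
g = np.linspace(0.2, 1.0, n)                   # assumed anthropogenic signal pattern
L = 0.3 * rng.standard_normal((n, n))
C = L @ L.T + np.eye(n)                        # synthetic natural-variability covariance

# Optimal fingerprint f = C^-1 g; detection variable d = f . y
f = np.linalg.solve(C, g)
y = 0.8 * g + np.linalg.cholesky(C) @ rng.standard_normal(n)   # "observations": signal + noise

d = f @ y
snr_optimal = (f @ g) / np.sqrt(f @ C @ f)     # expected S/N of the optimal detection variable
snr_naive = (g @ g) / np.sqrt(g @ C @ g)       # S/N if the raw pattern is used as the filter

print(f"d = {d:.2f}, optimal S/N = {snr_optimal:.2f}, non-optimal S/N = {snr_naive:.2f}")
```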
Abstract:
Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified ‘M’ tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
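The idea of pairing a Dickey–Fuller-type statistic with simulation-based critical values can be illustrated schematically (this is not the paper's asymptotic theory): tabulate the statistic's null distribution from many bounded, regulated random walks and read off the desired quantile. The bounds, sample size and clipping scheme below are illustrative.

```python
import numpy as np

def df_stat(y):
    """Dickey-Fuller t-statistic from the regression dy_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)
    return rho / np.sqrt(s2 / (ylag @ ylag))

def bounded_walk(T, lo, hi, rng):
    """Random walk regulated (clipped) at the bounds [lo, hi] under the unit-root null."""
    y = np.empty(T)
    y[0] = 0.0
    for t in range(1, T):
        y[t] = min(max(y[t - 1] + rng.standard_normal(), lo), hi)
    return y

rng = np.random.default_rng(0)
T, lo, hi = 200, -10.0, 10.0

# Null distribution of the statistic, simulated from bounded unit-root processes
null_stats = np.array([df_stat(bounded_walk(T, lo, hi, rng)) for _ in range(2000)])
crit_5pct = np.quantile(null_stats, 0.05)

stat = df_stat(bounded_walk(T, lo, hi, rng))     # statistic for one "new" series
print(f"DF statistic: {stat:.2f}   simulated 5% critical value: {crit_5pct:.2f}")
```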
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods applied to inter-calibrate the observers may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational threshold [S_S], defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and to report all the groups larger than S_S. Next, using a Monte-Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
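To make the correction-matrix step concrete: once a matrix has been built for an observer, with rows indexed by the reported daily group count and columns by the true count, the corrected daily series follows from the matrix rows rather than from a single scaling factor. A hypothetical sketch (the matrix values are invented, not derived from the RGO data):

```python
import numpy as np

# Hypothetical correction matrix for one observer: rows = reported count m (0..3),
# columns = true count k (0..4); each row sums to 1.  Values are illustrative only.
M = np.array([
    [0.70, 0.20, 0.08, 0.02, 0.00],   # observer reports 0 groups
    [0.05, 0.55, 0.30, 0.08, 0.02],   # observer reports 1 group
    [0.00, 0.10, 0.50, 0.30, 0.10],   # observer reports 2 groups
    [0.00, 0.02, 0.18, 0.50, 0.30],   # observer reports 3 groups
])
k = np.arange(M.shape[1])

reported = np.array([0, 1, 1, 2, 3, 2, 0, 1])      # one observer's daily group counts
corrected = M[reported] @ k                        # expected true count, day by day

print(corrected.round(2))
# The correction is non-linear in the reported count, unlike a simple k-factor scaling.
```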
Abstract:
Inhibition of microtubule function is an attractive rational approach to anticancer therapy. Although taxanes are the most prominent among the microtubule stabilizers, their clinical toxicity, poor pharmacokinetic properties, and resistance have stimulated the search for new antitumor agents with the same mechanism of action. Discodermolide is an example of a nontaxane natural product with the same mechanism of action that demonstrates superior antitumor efficacy and therapeutic index. These extraordinary chemical and biological properties have qualified discodermolide as a lead structure for the design of novel anticancer agents with optimized therapeutic properties. In the present work, we have employed a specialized fragment-based method to develop robust quantitative structure-activity relationship (QSAR) models for a series of synthetic discodermolide analogs. The generated molecular recognition patterns were combined with three-dimensional molecular modeling studies as a fundamental step towards understanding the molecular basis of drug-receptor interactions within this important series of potent antitumoral agents.
Abstract:
Migrastatin, a macrolide natural product, and its structurally related analogs are potent inhibitors of cancer cell metastasis, invasion and migration. In the present work, a specialized fragment-based method was employed to develop QSAR models for a series of migrastatin and isomigrastatin analogs. Significant correlation coefficients were obtained (best model, q² = 0.76 and r² = 0.91), indicating that the QSAR models possess high internal consistency. The best model was then used to predict the potency of an external test set, and the predicted values were in good agreement with the experimental results (R²pred = 0.85). The final model and the corresponding contribution maps, combined with molecular modeling studies, provided important insights into the key structural features for the anticancer activity of this family of synthetic compounds based on natural products.
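The q², r² and R²pred statistics quoted in such QSAR studies follow standard definitions; the sketch below computes them with an ordinary least-squares model on synthetic descriptors, as a stand-in for the fragment-based model actually used.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))                      # synthetic training descriptors
w_true = np.array([1.0, -0.5, 0.3, 0.8])
y = X @ w_true + 0.3 * rng.standard_normal(30)        # synthetic training activities
X_ext = rng.standard_normal((10, 4))
y_ext = X_ext @ w_true + 0.3 * rng.standard_normal(10)

fit = lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0]   # stand-in for the QSAR regression

# r2: goodness of fit on the training set
coef = fit(X, y)
r2 = 1 - np.sum((y - X @ coef) ** 2) / np.sum((y - y.mean()) ** 2)

# q2: leave-one-out cross-validated r2 (1 - PRESS / total sum of squares)
press = 0.0
for i in range(len(y)):
    c = fit(np.delete(X, i, axis=0), np.delete(y, i))
    press += (y[i] - X[i] @ c) ** 2
q2 = 1 - press / np.sum((y - y.mean()) ** 2)

# R2_pred: predictive r2 on the external test set, relative to the training-set mean
r2_pred = 1 - np.sum((y_ext - X_ext @ coef) ** 2) / np.sum((y_ext - y.mean()) ** 2)

print(f"r2 = {r2:.2f}, q2 = {q2:.2f}, R2_pred = {r2_pred:.2f}")
```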
Abstract:
Several protease inhibitors have reached the world market in the last fifteen years, dramatically improving the quality of life and life expectancy of millions of HIV-infected patients. In spite of the tremendous research efforts in this area, resistant HIV-1 variants are constantly decreasing the ability of the drugs to efficiently inhibit the enzyme. As a consequence, inhibitors with novel frameworks are necessary to circumvent resistance to chemotherapy. In the present work, we have created 3D QSAR models for a series of 82 HIV-1 protease inhibitors employing the comparative molecular field analysis (CoMFA) method. Significant correlation coefficients were obtained (q² = 0.82 and r² = 0.97), indicating the internal consistency of the best model, which was then used to evaluate an external test set containing 17 compounds. The predicted values were in good agreement with the experimental results, showing the robustness of the model and its substantial predictive power for untested compounds. The final QSAR model and the information gathered from the CoMFA contour maps should be useful for the design of novel anti-HIV agents with improved potency.
Abstract:
Metal cation toxicity to basidiomycete fungi is poorly understood, despite its well-known importance in terrestrial ecosystems. Moreover, there is no reported methodology for the routine evaluation of metal toxicity to basidiomycetes. In the present study, we describe the development of a procedure to assess the acute toxicity of metal cations (Na(+), K(+), Li(+), Ca(2+), Mg(2+), Co(2+), Zn(2+), Ni(2+), Mn(2+), Cd(2+), and Cu(2+)) to the bioluminescent basidiomycete fungus Gerronema viridilucens. The method is based on the decrease in the intensity of bioluminescence resulting from injuries sustained by the fungus mycelium exposed to either essential or nonessential metal toxicants. The assay described herein enables us to propose a metal toxicity series to Gerronema viridilucens based on data obtained from the bioluminescence intensity (median effective concentration [EC50] values) versus metal concentration: Cd(2+) > Cu(2+) > Mn(2+) ≈ Ni(2+) ≈ Co(2+) > Zn(2+) > Mg(2+) > Li(+) > K(+) ≈ Na(+) > Ca(2+), and to shed some light on the mechanism of toxic action of metal cations to basidiomycete fungi. Environ. Toxicol. Chem. 2010;29:320-326. (C) 2009 SETAC
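The EC50 values underlying such a toxicity ranking are typically obtained by fitting a dose-response curve to the luminescence-inhibition data; a generic sketch with a Hill-type model and invented data points (not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fraction of bioluminescence remaining at a given metal concentration."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

# Invented response data for one metal (concentration in mM, response as fraction of control)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([0.98, 0.95, 0.83, 0.60, 0.35, 0.15, 0.05])

(ec50, slope), _ = curve_fit(hill, conc, resp, p0=[0.5, 1.0])
print(f"EC50 ~ {ec50:.2f} mM (Hill slope {slope:.2f})")
```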
Abstract:
This work aims at combining the postulates of Chaos theory with the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools for deciding on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while qualitative tools based on Chaos theory are used to provide further observations and analysis of the predictability in cases where the measurements give negative answers. Phase space reconstruction is achieved by time-delay embedding, resulting in multiple embedded vectors. The cognitive approach suggested is inspired by the ability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy or any other specific method; rather, this work is interested in all embedding dimensions and separations, regarded as different ways of looking at a time series by different chartists, based on their expectations. Prior to the prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a back-propagation Neural Network is trained to predict the last element of each vector, with all previous elements of a vector used as features.
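The phase-space reconstruction step is simply the construction of delay vectors. Below is a minimal sketch of time-delay embedding for one (m, tau) choice, with a random-walk series standing in for a price index; the Fuzzy-ART classification and back-propagation networks are not reproduced.

```python
import numpy as np

def delay_embed(series, m, tau):
    """Return the matrix of delay vectors [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

# Toy "price" series; in the paper many (m, tau) pairs are explored, mimicking different chartists
x = np.cumsum(np.random.default_rng(3).standard_normal(500))
vectors = delay_embed(x, m=4, tau=5)

# Each row is one embedded vector; the last element is the prediction target and the
# preceding elements are the features, as described in the abstract.
features, target = vectors[:, :-1], vectors[:, -1]
print(vectors.shape, features.shape, target.shape)
```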
Abstract:
The subgradient optimization method is a simple and flexible iterative algorithm used in linear programming. It is much simpler than Newton's method, can be applied to a wider variety of problems, and converges even when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, a simple algorithm that delivers high-quality solutions is preferable. In this study, a series of step-size parameters in the subgradient equation is studied. The performance is compared for a general piecewise function and a specific p-median problem. We examine how the quality of the solution changes under five forms of the step-size parameter.
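A minimal sketch of the subgradient iteration with two classic step-size rules (constant and diminishing 1/k), applied to a simple piecewise-linear test function rather than the p-median problem studied in the paper:

```python
import numpy as np

def f(x):
    """Piecewise-linear convex test function, minimized at (1, -2)."""
    return np.abs(x[0] - 1) + 2 * np.abs(x[1] + 2)

def subgrad(x):
    """A subgradient of f at x (the sign function is a valid choice even at the kinks)."""
    return np.array([np.sign(x[0] - 1), 2 * np.sign(x[1] + 2)])

def subgradient_method(step_rule, iters=200):
    x = np.zeros(2)
    best = f(x)
    for k in range(1, iters + 1):
        x = x - step_rule(k) * subgrad(x)   # subgradient step with the chosen step size
        best = min(best, f(x))              # keep the best objective value seen so far
    return best

print("constant step   :", round(subgradient_method(lambda k: 0.1), 3))
print("diminishing 1/k :", round(subgradient_method(lambda k: 1.0 / k), 3))
```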
Abstract:
Paper presented at the XXXV CNMAC, Natal-RN, 2014.