931 results for parametric oscillators and amplifiers
Abstract:
This work considers the reconstruction of strong gravitational lenses from their observed effects on the light distribution of background sources. After reviewing the formalism of gravitational lensing and the most common and relevant lens models, new analytical results on the elliptical power law lens are presented, including new expressions for the deflection, potential, shear and magnification, which naturally lead to a fast numerical scheme for practical calculation. The main part of the thesis investigates lens reconstruction with extended sources by means of the forward reconstruction method, in which the lenses and sources are given by parametric models. The numerical realities of the problem make it necessary to find targeted optimisations for the forward method, in order to make it feasible for general applications to modern, high-resolution images. The result of these optimisations is presented in the \textsc{Lensed} algorithm. Subsequently, a number of tests for general forward reconstruction methods are created to decouple the influence of source from lens reconstruction, in order to objectively demonstrate the constraining power of the reconstruction. The final chapters on lens reconstruction contain two sample applications of the forward method. One is the analysis of images from a strong lensing survey. Such surveys today contain $\sim 100$ strong lenses, and much larger sample sizes are expected in the future, making it necessary to quickly and reliably analyse catalogues of lenses with a fixed model. The second application deals with the opposite situation of a single observation that is to be confronted with different lens models, where the forward method allows for natural model-building. This is demonstrated using an example reconstruction of the ``Cosmic Horseshoe''.
An appendix presents an independent work on the use of weak gravitational lensing to investigate theories of modified gravity which exhibit screening in the non-linear regime of structure formation.
Abstract:
We consider systems of finitely many particles, where the particles move independently of one another according to one-dimensional diffusions $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$. The particles die at position-dependent rates and leave behind a random number of offspring, which are distributed in space according to a transition kernel. In addition, new particles immigrate at a constant rate. A process with these properties is called a branching process with immigration. If we observe such a process at discrete time points, it is not immediately obvious which discretely observed points belong to which path. We therefore develop an algorithm to reconstruct the underlying paths. Using this algorithm, we construct a non-parametric estimator for the squared diffusion coefficient $\sigma^2(\cdot)$, where the construction essentially rests on filling in a classical regression scheme. We prove consistency and a central limit theorem.
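The regression scheme mentioned above exploits the fact that, for a discretely observed diffusion with step size Δ, the conditional mean of the squared increment is approximately σ²(x)·Δ. The sketch below simulates a single path by Euler-Maruyama and recovers σ² with a kernel smoother; the drift, diffusion coefficient, bandwidth and Nadaraya-Watson smoother are illustrative assumptions, not the thesis's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical choices for illustration: an Ornstein-Uhlenbeck-type drift
# b(x) = -x and diffusion coefficient sigma(x) = 1 + 0.5*sin(x).
def b(x):
    return -x

def sigma(x):
    return 1.0 + 0.5 * np.sin(x)

# Simulate one path of dX_t = b(X_t) dt + sigma(X_t) dW_t (Euler-Maruyama).
n, dt = 200_000, 0.001
X = np.empty(n)
X[0] = 0.0
dW = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    X[i + 1] = X[i] + b(X[i]) * dt + sigma(X[i]) * dW[i]

# Nonparametric regression: E[(X_{t+dt} - X_t)^2 | X_t = x] ~ sigma^2(x) * dt,
# estimated with a Nadaraya-Watson (Gaussian-kernel) smoother.
x_obs, y_obs = X[:-1], np.diff(X) ** 2 / dt
grid = np.linspace(-1.0, 1.0, 21)
h = 0.1  # bandwidth (hypothetical choice)
w = np.exp(-0.5 * ((grid[:, None] - x_obs[None, :]) / h) ** 2)
sigma2_hat = (w @ y_obs) / w.sum(axis=1)

# Compare with the true sigma^2 on the grid.
err = np.max(np.abs(sigma2_hat - sigma(grid) ** 2))
print(f"max abs error of sigma^2 estimate on [-1, 1]: {err:.3f}")
```

With a long path and a small step, the drift contributes only O(Δ) bias to the squared increments, which is why the plain regression of (ΔX)²/Δ on position already works.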
Abstract:
The study was arranged to present its objectives, opening with an introduction. Particular attention was paid in the second part to the physical setting of the study area, together with the climatic characteristics of Libya. In the third part, observed temporal and spatial climate change in Libya was investigated through the trends of temperature, precipitation, relative humidity and cloud amount over the periods 1946-2000, 1946-1975 and 1976-2000, comparing the results with global scales. The fourth part examined the natural and human causes of climate change, concentrating on the greenhouse effect. The potential impacts of climate change on Libya were examined in the fifth chapter. As a case study, desertification of the Jifara Plain was studied in the sixth part. In the seventh chapter, projections and mitigation of climate change and desertification were discussed. Finally, the main results and recommendations of the study were summarized.
In order to carry through the objectives outlined above, the following methods and approaches were used: a simple linear regression analysis was computed to detect the trends of climatic parameters over time; a trend test based on a trend-to-noise ratio was applied for detecting linear or non-linear trends; the non-parametric Mann-Kendall test for trend was used to reveal the behavior of the trends and their significance; PCA was applied to construct the all-Libya climatic parameter trends; the aridity index after Walter-Lieth was used to identify humid and arid months in Libya; the correlation coefficient (after Pearson) was used to detect teleconnections between sunspot numbers, the NAOI, the SOI, GHGs, global warming and climate changes in Libya; the aridity index after De Martonne was used to elaborate the trends of aridity in the Jifara Plain; and Geographical Information System and Remote Sensing techniques were applied to clarify the illustrations and to monitor desertification of the Jifara Plain using the available satellite images (MSS, TM, ETM+) and Shuttle Radar Topography Mission (SRTM) data. The results are presented in 88 tables, 96 figures and 10 photos. Temporal and spatial temperature changes in Libya indicated remarkably different annual and seasonal trends over the long observation period 1946-2000 and the short observation periods 1946-1975 and 1976-2000. Trends of mean annual temperature were positive at all study stations except one from 1946-2000, negative trends prevailed at most stations from 1946-1975, while strongly positive trends were computed at all study stations from 1976-2000, corresponding with the global warming trend. Positive trends of mean minimum temperatures were observed at all reference stations from 1946-2000 and 1976-2000, while negative trends prevailed at most stations over the period 1946-1975.
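The non-parametric Mann-Kendall test named above can be sketched as follows; the implementation (normal approximation, no tie correction) and the annual temperature series are illustrative assumptions, not the study's data:

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).

    Returns the S statistic, the standardized Z score and the two-sided p-value.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs (i < j).
    s = int(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)].sum())
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / sqrt(var_s)
    elif s < 0:
        z = (s + 1) / sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return s, z, p

# Hypothetical annual mean temperature series with an upward trend plus noise.
rng = np.random.default_rng(1)
years = np.arange(1976, 2001)
temps = 19.0 + 0.03 * (years - years[0]) + rng.normal(0.0, 0.2, len(years))
s, z, p = mann_kendall(temps)
print(f"S = {s}, Z = {z:.2f}, p = {p:.4f}")
```

Because the test uses only the signs of pairwise differences, it detects monotone trends without assuming linearity or normally distributed residuals, which is why it complements the simple linear regression analysis.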
For mean maximum temperature, positive trends were shown from 1946-2000 and from 1976-2000 at most stations, while most trends were negative from 1946-1975. Minimum temperatures increased at more than twice the rate of maximum temperatures at most stations. With respect to seasonal temperature, warming mostly occurred in summer and autumn, in contrast to the global observations identifying warming mostly in winter and spring in both study periods. Precipitation across Libya is characterized by scanty and sporadic totals, as well as high intensities and very high spatial and temporal variability. From 1946-2000, large inter-annual and intra-annual variability was observed. Positive trends of annual precipitation totals were observed from 1946-2000, negative trends from 1976-2000 at most stations. Variability of seasonal precipitation over Libya was more strikingly experienced from 1976-2000 than from 1951-1975, indicating a growing magnitude of climate change in more recent times. Negative trends of mean annual relative humidity were computed at eight stations, while positive trends prevailed at seven stations from 1946-2000. For the short observation period 1976-2000, positive trends were computed at most stations. Annual cloud amounts decreased at most study stations in Libya over both long and short periods. Remarkably large spatial variations of climate change were observed from north to south over Libya. Causes of climate change were discussed, showing a high correlation between temperature increases over Libya and CO2 emissions; a weakly positive correlation between precipitation and the North Atlantic Oscillation index; a negative correlation between temperature and sunspot numbers; and a negative correlation between precipitation over Libya and the Southern Oscillation Index. The years 1992 and 1993 were shown to be the coldest in the 1990s, resulting from the eruption of Mount Pinatubo in 1991.
Libya is affected by climate change in many ways, in particular crop production and food security, water resources, human health, population settlement and biodiversity. But the effects of climate change depend on its magnitude and the rate at which it occurs. The Jifara Plain, located in northwestern Libya, has been seriously exposed to desertification as a result of climate change, landforms, overgrazing, over-cultivation and population growth. Soils have been degraded, vegetation cover has disappeared and groundwater wells have run dry in many parts. The effect of desertification on the Jifara Plain appears through reduced soil fertility and crop productivity, leading to long-term declines in agricultural yields, livestock yields, plant standing biomass and plant biodiversity. Desertification also has significant implications for the livestock industry and the national economy. Desertification accelerates migration from rural and nomadic areas to urban areas as the land can no longer support the original inhabitants. In the absence of major shifts in policy, economic growth, energy prices and consumer trends, climate change in Libya and desertification of the Jifara Plain are expected to continue in the future. Libya has cooperated with the United Nations and other international organizations. It has signed and ratified a number of international and regional agreements which effectively established a policy framework for actions to mitigate climate change and combat desertification. Libya has implemented several laws and legislative acts, with a number of ancillary and supplementary rules for regulation. Despite the current efforts and ongoing projects being undertaken in Libya in the field of climate change and desertification, urgent actions and projects are needed to mitigate climate change and combat desertification in the near future.
Abstract:
BACKGROUND: Chlorhexidine (CHX) rinsing after periodontal surgery is common. We assessed the clinical and microbiological effects of two CHX concentrations following periodontal surgery. MATERIALS AND METHODS: In a randomized, controlled clinical trial, 45 subjects were assigned to 4 weeks of rinsing with a 0.05% CHX/herbal extract combination (test) or a 0.1% CHX solution (control). Clinical and staining effects were studied. Subgingival bacteria were assessed using the DNA-DNA checkerboard. Statistics included parametric and non-parametric tests (p<0.001 to declare significance at 80% power). RESULTS: At weeks 4 and 12, more staining was found in the control group (p<0.05 and p<0.001, respectively). A higher risk for staining was found in the control group (crude OR: 2.3, 95% CI: 1.3 to 4.4, p<0.01). The absolute staining reduction in the test group was 21.1% (95% CI: 9.4-32.8%). Probing pocket depth (PPD) decreases were significant (p<0.001) in both groups and similar (p=0.92). No rinse-group differences in changes of bacterial counts for any species were found between baseline and week 12. CONCLUSIONS: The test CHX rinse resulted in less tooth staining. At the study endpoint, similar and high counts of periodontal pathogens were found in both groups.
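As a rough illustration of the crude odds ratio with a Wald-type 95% confidence interval, as reported for the staining risk above, here is a minimal sketch; the 2×2 counts are hypothetical, not the trial's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table.

    a, b = events / non-events in group 1; c, d = events / non-events in group 2.
    """
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration (not the trial's actual data):
# 18 of 23 control subjects stained vs 12 of 22 test subjects.
or_, lo, hi = odds_ratio_ci(18, 5, 12, 10)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```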
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while cylinder-to-cylinder variation in EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
Abstract:
Solid-state shear pulverization (SSSP) is a unique processing technique for mechanochemical modification of polymers, compatibilization of polymer blends, and exfoliation and dispersion of fillers in polymer nanocomposites. A systematic parametric study of the SSSP technique is conducted to elucidate the detailed mechanism of the process and establish the basis for a range of current and future operation scenarios. Using neat, single-component polypropylene (PP) as the model material, we varied machine type, screw design, and feed rate to achieve a range of shear and compression applied to the material, which can be quantified through specific energy input (Ep). As a universal processing variable, Ep reflects the level of chain scission occurring in the material, which correlates well to the extent of the physical property changes of the processed PP. Additionally, we compared the operating cost estimates of SSSP and conventional twin screw extrusion to determine the practical viability of SSSP.
Abstract:
The present study examined the neural basis of vivid motor imagery with parametric functional magnetic resonance imaging. Twenty-two participants performed motor imagery (MI) of six different right-hand movements that differed in pointing accuracy demands and object involvement, i.e., either none, two big or two small squares had to be pointed at in alternation, either with or without an object grasped with the fingers. After each imagery trial, participants rated the perceived vividness of motor imagery on a 7-point scale. Results showed that increased perceived imagery vividness was parametrically associated with increasing neural activation within the left putamen, the left premotor cortex (PMC), the posterior parietal cortex of the left hemisphere, the left primary motor cortex, the left somatosensory cortex, and the left cerebellum. Within the right hemisphere, activation was found within the right cerebellum, the right putamen, and the right PMC. It is concluded that the perceived vividness of MI is parametrically associated with neural activity within sensorimotor areas. The results corroborate the hypothesis that MI is an outcome of neural computations based on movement representations located within motor areas.
Abstract:
OBJECTIVES: Donation after circulatory declaration of death (DCDD) could significantly improve the number of cardiac grafts for transplantation. Graft evaluation is particularly important in the setting of DCDD given that conditions of cardio-circulatory arrest and warm ischaemia differ, leading to variable tissue injury. The aim of this study was to identify, at the time of heart procurement, means to predict contractile recovery following cardioplegic storage and reperfusion using an isolated rat heart model. Identification of reliable approaches to evaluate cardiac grafts is key in the development of protocols for heart transplantation with DCDD. METHODS: Hearts isolated from anaesthetized male Wistar rats (n = 34) were exposed to various perfusion protocols. To simulate DCDD conditions, rats were exsanguinated and maintained at 37°C for 15-25 min (warm ischaemia). Isolated hearts were perfused with modified Krebs-Henseleit buffer for 10 min (unloaded), arrested with cardioplegia, stored for 3 h at 4°C and then reperfused for 120 min (unloaded for 60 min, then loaded for 60 min). Left ventricular (LV) function was assessed using an intraventricular micro-tip pressure catheter. Statistical significance was determined using the non-parametric Spearman rho correlation analysis. RESULTS: After 120 min of reperfusion, recovery of LV work measured as the developed pressure (DP)-heart rate (HR) product ranged from 0 to 15 ± 6.1 mmHg beats min⁻¹ 10⁻³ following warm ischaemia of 15-25 min. Several haemodynamic parameters measured during early, unloaded perfusion at the time of heart procurement, including HR and the peak systolic pressure-HR product, correlated significantly with contractile recovery after cardioplegic storage and 120 min of reperfusion (P < 0.001). Coronary flow, oxygen consumption and lactate dehydrogenase release also correlated significantly with contractile recovery following cardioplegic storage and 120 min of reperfusion (P < 0.05).
CONCLUSIONS: Haemodynamic and biochemical parameters measured at the time of organ procurement could serve as predictive indicators of contractile recovery. We believe that evaluation of graft suitability is feasible prior to transplantation with DCDD, and may, consequently, increase donor heart availability.
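The non-parametric Spearman rho correlation used in the study above can be sketched as follows; the rank-based implementation (assuming no ties) and the haemodynamic numbers are hypothetical illustrations, not the study's data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of ranks (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical data: early unloaded heart rate vs. contractile recovery,
# constructed here as a perfectly monotone association.
hr_early = np.array([210, 250, 180, 300, 160, 275, 230, 195])
recovery = np.array([4.0, 8.5, 2.1, 14.8, 0.0, 11.2, 6.3, 3.4])
rho = spearman_rho(hr_early, recovery)
print(f"Spearman rho = {rho:.2f}")  # → 1.00 for this monotone example
```

Because it operates on ranks, Spearman's rho is sensitive to any monotone relationship, not just a linear one, which suits predictive screening of haemodynamic markers.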
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen by the training data and the "Minimum Variance Committee" technique where the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), Simple Committee Technique and Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
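The "Simple Committee" idea — averaging predictions over several transformed input spaces — can be sketched as follows; the toy response, the three transforms and the plain k-NN regressor are assumptions for illustration, not the paper's GT-Power-derived input spaces:

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Plain k-nearest-neighbour regression (Euclidean distance)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

# Toy stand-in for the smoke-prediction problem: a nonlinear response of two
# inputs, with several hypothetical "transformed input spaces".
Xtr = rng.uniform(-1, 1, (400, 2))
ytr = np.exp(Xtr[:, 0] * Xtr[:, 1]) + 0.05 * rng.normal(size=400)
Xte = rng.uniform(-1, 1, (100, 2))
yte = np.exp(Xte[:, 0] * Xte[:, 1])

transforms = [
    lambda X: X,                                        # raw inputs
    lambda X: np.column_stack([X, X[:, 0] * X[:, 1]]),  # add interaction term
    lambda X: np.tanh(2 * X),                           # squashed inputs
]

# "Simple Committee": average the k-NN predictions over all input spaces.
preds = [knn_predict(t(Xtr), ytr, t(Xte)) for t in transforms]
committee = np.mean(preds, axis=0)
rmse = np.sqrt(np.mean((committee - yte) ** 2))
print(f"committee RMSE: {rmse:.3f}")
```

Averaging over input spaces reduces the variance contributed by any single representation, which is the same intuition that motivates the committee techniques described above.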
Abstract:
Smoothing splines are a popular approach for non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that provides motivation for the creation of criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
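A minimal sketch of choosing a smoothness parameter by generalized cross-validation for a periodic smoother follows; the Fourier-basis smoother with a frequency⁴ penalty and the simulated circadian signal are assumptions for illustration, not the paper's REACT-based criteria:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated circadian data: a 24 h periodic signal observed hourly with noise.
n = 240
t = np.arange(n) % 24
y_true = 2.0 * np.sin(2 * np.pi * t / 24) + 0.8 * np.cos(4 * np.pi * t / 24)
y = y_true + rng.normal(0.0, 0.5, n)

# Periodic smoother: Fourier basis with a spline-type penalty lambda * freq^4.
K = 10  # number of harmonics (hypothetical choice)
cols, pen = [np.ones(n)], [0.0]
for j in range(1, K + 1):
    cols += [np.cos(2 * np.pi * j * t / 24), np.sin(2 * np.pi * j * t / 24)]
    pen += [float(j**4), float(j**4)]
B, P = np.column_stack(cols), np.diag(pen)

def fit(lam):
    """Penalized least squares; returns fitted values and effective dof tr(H)."""
    H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)
    return H @ y, np.trace(H)

# Choose lambda by generalized cross-validation: GCV = n * RSS / (n - dof)^2.
lams = 10.0 ** np.arange(-4, 4.0)
gcv = []
for lam in lams:
    yhat, dof = fit(lam)
    gcv.append(n * np.sum((y - yhat) ** 2) / (n - dof) ** 2)
best = lams[int(np.argmin(gcv))]
yhat, dof = fit(best)
rmse = np.sqrt(np.mean((yhat - y_true) ** 2))
print(f"GCV-chosen lambda = {best:g}, effective dof = {dof:.1f}, RMSE = {rmse:.3f}")
```

GCV trades residual fit against the effective degrees of freedom of the linear smoother, giving an automatic counterpart to the by-eye choice of smoothness.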
Abstract:
In many applications the observed data can be viewed as a censored high-dimensional full-data random variable X. Due to the curse of dimensionality, it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high-dimensional censored data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
Abstract:
OBJECTIVES: This experiment was performed to evaluate clinically and histologically the effect of mechanical therapy with or without antiseptic therapy on peri-implant mucositis lesions in nine cynomolgus monkeys. MATERIAL AND METHODS: Two ITI titanium implants were inserted into each side of the mandibles. After 90 days of plaque control and soft tissue healing, a baseline clinical examination was completed. Peri-implant lesions were induced by placing silk ligatures and allowing plaque to accumulate for 6 weeks. The clinical examination was then repeated, and the monkeys were randomly assigned to three treatment groups: group A, mechanical cleansing only; group B, mechanical cleansing and local irrigation with 0.12% chlorhexidine (CHX) and application of 0.2% CHX gel; and group C, control, no treatment. The implants in treatment groups A and B were treated and maintained according to the assigned treatment for two additional months. At the end of the maintenance period, a final clinical examination was performed and the animals were sacrificed for biopsies. RESULTS: The mean probing depth (PD) values at mucositis were 3.5, 3.7, and 3.4 mm, and clinical attachment level (CAL) values were 3.8, 4.1, and 3.9 mm for treatment groups A, B and C, respectively. The corresponding values after treatment were PD = 1.7, 2.1, and 2.5 mm, and CAL = 2.6, 2.6, and 3.1 mm. ANOVA of mean changes (Delta) in PD and CAL after treatment showed no statistical difference between the treatment groups. Comparison of the mean changes in PD and CAL after treatment yielded statistical differences between the control and treatment groups (P < 0.01). According to the t-test, no statistical difference was found between treatment groups A and B for the PD reduction, but there was a significant difference for the CAL change (P < 0.03). Group A had significantly more recession and less CAL gain than group B.
Non-parametric tests yielded no significant differences in modified plaque index (mPlI) and gingival index (GI) after treatment between both treatment groups. Frequencies and percent distributions of the mPlI and GI scores changed considerably for both treatment groups when compared with the changes in the control group after treatment. With regard to the histological evaluation, no statistical differences existed between the treatments for any linear measurement. The proportion of inflammation found in the mucosal tissues of the control implants was greater than the one found for both treatment groups, P < 0.01. More importantly, both treatment groups showed a similar low proportion of inflammation after 2 months of treatment. CONCLUSIONS: Within the limitations of this experiment, and considering the supportive plaque control rendered, it can be concluded that for pockets of 3-4 mm: (1) mechanical therapy alone or combined with CHX results in the clinical resolution of peri-implant mucositis lesions, (2) histologically, both treatments result in minimal inflammation compatible with health, and (3) the mechanical effect alone is sufficient to achieve clinical and histologic resolution of mucositis lesions.
Abstract:
In recent years, researchers in the health and social sciences have become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of an exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs subsequent to the exposure and prior to the outcome. Natural direct and indirect effects are of particular interest as they generally combine to produce the total effect of the exposure and therefore provide insight into the mechanism by which it operates to produce the outcome. A semiparametric theory has recently been proposed to make inferences about marginal mean natural direct and indirect effects in observational studies (Tchetgen Tchetgen and Shpitser, 2011), which delivers multiply robust locally efficient estimators of the marginal direct and indirect effects, and thus generalizes previous results for total effects to the mediation setting. In this paper we extend the new theory to handle a setting in which a parametric model for the natural direct (indirect) effect within levels of pre-exposure variables is specified and the model for the observed data likelihood is otherwise unrestricted. We show that estimation is generally not feasible in this model because of the curse of dimensionality associated with the required estimation of auxiliary conditional densities or expectations, given high-dimensional covariates. We thus consider multiply robust estimation and propose a more general model which assumes that a subset, but not necessarily all, of several working models holds.
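In the textbook special case of a linear structural model with no exposure-mediator interaction and no confounding, the natural direct and indirect effects reduce to the direct-path coefficient and the product of path coefficients. The simulation sketch below illustrates only that special case, not the multiply robust semiparametric estimators developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Simulated data under a linear structural model with no exposure-mediator
# interaction:
#   M = 0.5*A + eta,  Y = 1.0*A + 0.8*M + eps,
# so the natural direct effect is 1.0, the natural indirect effect is
# 0.5 * 0.8 = 0.4, and they combine to the total effect 1.4.
A = rng.binomial(1, 0.5, n).astype(float)
M = 0.5 * A + rng.normal(0, 1, n)
Y = 1.0 * A + 0.8 * M + rng.normal(0, 1, n)

def ols(X, y):
    """Least-squares coefficients with an intercept prepended."""
    return np.linalg.lstsq(np.column_stack([np.ones(len(y)), *X]), y, rcond=None)[0]

alpha = ols([A], M)[1]          # A -> M path coefficient
tau, beta = ols([A, M], Y)[1:]  # direct path and M -> Y path coefficients
nde, nie = tau, alpha * beta
print(f"NDE ~ {nde:.2f}, NIE ~ {nie:.2f}, total ~ {nde + nie:.2f}")
```

The decomposition total = NDE + NIE holds here by construction; the paper's contribution is precisely to move beyond such fully parametric special cases.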