949 results for "Efficiency of cleaning"


Relevance: 100.00%

Publisher:

Abstract:

Microcredit has long been used as a tool to alleviate poverty. This research aims to examine the efficiency of microcredit in the field of social exclusion. Newly developed questionnaires and existing instruments were used to capture the tangible and intangible dimensions of microcredit, the goal being to determine whether microcredit has a direct effect on social exclusion. Bangladesh was chosen for the field study and a sample of 85 borrowers was taken for the analysis. The study was conducted over a fixed period: one year was allotted for collecting the sample and carrying out the statistical analysis. The tangible aspect was measured with a World Bank questionnaire, while the social capital questionnaire was developed from several well-established instruments. The research sample consists of borrowers of Grameen Bank in Bangladesh and shows a strong correlation between their tangible activity and their social life. Significant changes in the tangible aspect and in social participation were observed, and a strong correlation between the two aspects was found, bearing in mind that the borrowers themselves have a vibrant social life in the village.

Relevance: 100.00%

Publisher:

Abstract:

In this work we studied the efficiency of the benchmarks used in the asset management industry. In chapter 2 we analyzed the efficiency of the benchmarks used for the government bond markets. We found that for emerging market bonds an index with equal country weights is probably the most suitable, because it guarantees maximum diversification of country risk, whereas for the Eurozone government bond market a GDP-weighted index is preferable, because the most important issue is to avoid giving a higher weight to highly indebted countries. In chapter 3 we analyzed the efficiency of a Derivatives Index for investing in the European corporate bond market instead of a Cash Index. The two indexes are similar in terms of returns, but the Derivatives Index is less risky: it has lower volatility, its skewness and kurtosis are closer to those of a normal distribution, and it is a more liquid instrument, as its autocorrelation is not significant. Chapter 4 analyzes the impact of fallen angels on corporate bond portfolios. Our analysis investigated the impact of the month-end rebalancing of the ML Emu Non Financial Corporate Index when downgraded bonds exit the index (the event). We conclude that a flexible approach to month-end rebalancing is preferable in order to avoid a loss of value caused by the benchmark construction rules. In chapter 5 we compared the equally weighted and the capitalization-weighted methods for the European equity market. The benefit of reweighting the portfolio into equal weights can be attributed to the fact that EW portfolios implicitly follow a contrarian investment strategy, because they mechanically rebalance away from stocks that increase in price.
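
To illustrate the contrarian rebalancing mechanics mentioned above, the following Python sketch compares a capitalization-weighted buy-and-hold portfolio with an equally weighted portfolio that is rebalanced back to 1/N every month. The simulated returns, the number of stocks and the monthly rebalancing are illustrative assumptions only; the sketch shows the mechanism, not the empirical results of the thesis.

import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_months = 20, 120
returns = rng.normal(0.005, 0.06, size=(n_months, n_stocks))   # simulated monthly returns

caps = np.ones(n_stocks)     # starting market capitalizations (arbitrary units)
ew_value = 1.0               # value of the equally weighted (EW) portfolio
cw_value = 1.0               # value of the capitalization weighted (CW) portfolio

for r in returns:
    cw_w = caps / caps.sum()                   # CW weights drift with past prices (buy and hold)
    ew_w = np.full(n_stocks, 1.0 / n_stocks)   # EW is forced back to 1/N every month:
                                               # it sells last month's winners and buys the losers
    cw_value *= 1.0 + np.dot(cw_w, r)
    ew_value *= 1.0 + np.dot(ew_w, r)
    caps *= 1.0 + r                            # prices move, so CW ends the month overweight winners

print(f"EW terminal value: {ew_value:.3f}")
print(f"CW terminal value: {cw_value:.3f}")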

Relevance: 100.00%

Publisher:

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method applied to nitrogen sorption data, which gave overestimated values for the mesopore size distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were developed especially for monolithic silicas; contrary to particulate supports, these show two inflection points in the ISEC curve, enabling the calculation of the pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the volume-based pore size distribution, but only the ISEC method with the PPM and PNM models gave the number-averaged pore size and distribution as well as the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore size but also to assess entrapment. It was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid permeability, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to the Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that governs the column back pressure, but the surface area to volume ratio of the silica skeleton that is most decisive. Thus the monolith with the lowest ratio will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore structural parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores owing to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scaled silica monoliths with a narrow distribution.

The optimum regimes of the pore structural parameters for given target parameters in HPLC separations were predicted. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-averaged mesopore size distribution is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume depends strongly on the nominal pore diameter of the number-averaged pore size distribution. The mesopore size therefore has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study of the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for the column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with increasing external porosity, and increases with increasing total porosity, although this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of about 0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
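
As a concrete illustration of the permeability argument above, the following Python sketch estimates the Darcy permeability of a monolithic column from an assumed back pressure and converts it into an equivalent flow-through pore diameter using a Hagen-Poiseuille capillary-bundle relation (K = ε·d²/32). All numerical values, and this particular relation, are illustrative assumptions rather than the specific equations developed in the thesis.

import math

length = 0.10              # column length L (m)
diameter = 4.6e-3          # column inner diameter (m)
flow_rate = 1.0e-6 / 60    # 1 mL/min expressed in m^3/s
viscosity = 1.0e-3         # mobile phase viscosity (Pa s), water-like
delta_p = 2.0e6            # assumed column back pressure (Pa), i.e. 20 bar
eps_ext = 0.70             # assumed external (flow-through pore) porosity

area = math.pi * (diameter / 2.0) ** 2
u_sup = flow_rate / area                               # superficial velocity (m/s)
permeability = u_sup * viscosity * length / delta_p    # Darcy permeability K (m^2)

# capillary-bundle form of Hagen-Poiseuille: K = eps_ext * d^2 / 32
d_pore = math.sqrt(32.0 * permeability / eps_ext)

print(f"Darcy permeability: {permeability:.2e} m^2")
print(f"Equivalent flow-through pore diameter: {d_pore * 1e6:.2f} micrometres")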

Relevance: 100.00%

Publisher:

Abstract:

The diameters of traditional dish concentrators can reach several tens of meters, and the construction of monolithic mirrors is difficult at these scales: cheap flat reflecting facets mounted on a common frame generally approximate a paraboloidal surface. When a standard imaging mirror is coupled with a PV dense array, problems arise because the focused solar image is intrinsically circular. Moreover, the corresponding irradiance distribution is bell-shaped, in contrast with the requirement of having all the cells under the same illumination. Mismatch losses occur when interconnected cells experience different conditions, particularly in series connections. In this PhD thesis we aim to solve these issues with a multidisciplinary approach, exploiting optical concepts and applications developed specifically for astronomical use, where improving the image quality is a central issue. The strategy we propose is to boost the spot uniformity by acting solely on the primary reflector, avoiding the segmentation of large mirrors into numerous smaller elements that need to be accurately mounted and aligned. In the proposed method, the shape of the mirror is described analytically by Zernike polynomials and optimized numerically to obtain a non-imaging optic able to produce a quasi-square spot that is spatially uniform and has a prescribed concentration level. The freeform primary optics leads to a substantial gain in efficiency without secondary optics, and only simple electrical schemes are required for the receiver. The concept has been investigated theoretically by modeling an example of a CPV dense-array application, including non-optical aspects such as the design of the detector and of the supporting mechanics. For the proposed method and the specific CPV system described, a patent application has been filed in Italy under the number TO2014A000016. The patent was developed through the collaboration between the University of Bologna and INAF (National Institute for Astrophysics).
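
As a sketch of how a freeform primary can be described by Zernike polynomials, the following Python snippet evaluates a mirror sag as a parent paraboloid plus a few low-order, Noll-normalized Zernike terms. The focal length, aperture and coefficients are illustrative assumptions and are not taken from the patented design.

import numpy as np

def zernike_departure(rho, theta, coeffs):
    """Sum a few low-order Zernike terms (Noll-normalized) on the unit disk.

    coeffs: dict mapping term name -> coefficient in metres.
    """
    terms = {
        "defocus":   np.sqrt(3.0) * (2.0 * rho**2 - 1.0),
        "astig_cos": np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta),
        "astig_sin": np.sqrt(6.0) * rho**2 * np.sin(2.0 * theta),
        "spherical": np.sqrt(5.0) * (6.0 * rho**4 - 6.0 * rho**2 + 1.0),
    }
    return sum(coeffs.get(name, 0.0) * z for name, z in terms.items())

def mirror_sag(x, y, radius=5.0, focal_length=6.0, coeffs=None):
    """Sag of the freeform primary: paraboloid + Zernike departure (metres)."""
    r2 = x**2 + y**2
    base = r2 / (4.0 * focal_length)      # parent paraboloid z = r^2 / 4f
    rho = np.sqrt(r2) / radius            # normalized pupil radius (rho > 1 would be masked in a real model)
    theta = np.arctan2(y, x)
    return base + zernike_departure(rho, theta, coeffs or {})

# example: evaluate the surface on a coarse grid
x, y = np.meshgrid(np.linspace(-5, 5, 5), np.linspace(-5, 5, 5))
z = mirror_sag(x, y, coeffs={"defocus": 2e-3, "spherical": -5e-4})
print(z.round(4))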

Relevance: 100.00%

Publisher:

Abstract:

Sweet sorghum, a C4 crop of tropical origin, is gaining momentum as a multipurpose feedstock to address growing environmental, food and energy security demands. Under temperate climates sweet sorghum is considered a potential bioethanol feedstock; however, being a relatively new crop in such areas, its physiological and metabolic adaptability still has to be evaluated, especially with respect to the more frequent and severe drought spells occurring throughout the growing season and to the cold temperatures during the establishment period of the crop. The objective of this thesis was to evaluate some adaptive photosynthetic traits of sweet sorghum under drought and cold stress, both in the field and under controlled conditions. To meet this goal, a series of experiments was carried out. A new cold-tolerant sweet sorghum genotype was sown in rhizotrons of 1 m³ to evaluate its tolerance to progressive drought until plant death at the young and mature stages. Young plants were able to retain a high photosynthetic rate for 10 days longer than mature plants. This response was associated with an efficient capacity for PSII down-regulation mediated by light energy dissipation and closure of reaction centers (JIP-test parameters), and with the accumulation of glucose and sucrose. On the other hand, once sweet sorghum plants entered the blooming stage, neither energy dissipation nor sugar accumulation counteracted the negative effects of drought. Two hybrids with contrasting cold tolerance, selected from an early-sowing field trial, were subjected to chilling temperatures under controlled growth conditions to evaluate their physiological and metabolic cold adaptation mechanisms in depth. The hybrid that performed poorly under field conditions (ICSSH31) showed earlier metabolic changes (Chl a + b, xanthophyll cycle) and greater inhibition of enzymatic activity (Rubisco and PEPcase) than the cold-tolerant hybrid (Bulldozer). Important insights into the potential adaptability of sweet sorghum to temperate climates are given.

Relevance: 100.00%

Publisher:

Abstract:

In recent years, a major step towards strongly increased efficiency has been made for spin-filter detectors. This is an important prerequisite for spin-resolved measurements with modern electron spectrometers and momentum microscopes. In this doctoral thesis, earlier work on the parallel imaging technique was developed further; the technique relies on the fact that, by exploiting the conservation of the parallel momentum component (k-parallel) in low-energy electron diffraction, an electron-optical image is preserved even after reflection from a crystalline surface. Previous measurements based on specular reflection from a W(001) surface [Kolbe et al., 2011; Tusche et al., 2011] were extended to a much larger parameter range, and with Ir(001) a new system was investigated that offers a much longer lifetime of the cleaned crystal surface in UHV. The scattering-energy and angle-of-incidence "landscape" of the spin sensitivity S and the reflectivity I/I0 of the scattered electrons was measured over the range of 13.7-36.7 eV scattering energy and 30°-60° scattering angle. The newly built setup comprises a spin-polarized GaAs electron source and a rotatable electron detector (delay-line detector) for position-resolved detection of the scattered electrons. The results show several regions with high asymmetry and a large figure of merit (FoM), defined as S² · I/I0. These regions open a path towards a significant improvement of multichannel spin-filter techniques for electron spectroscopy and momentum microscopy. In practical use, the Ir(001) single-crystal surface proved very promising with respect to its longer lifetime in UHV (about one measurement day) combined with a high FoM. The Ir(001) detector was used, in combination with a hemispherical analyzer, in a time-resolved experiment on the femtosecond scale at the free-electron laser FLASH at DESY. Good working points were found at a scattering angle of 45° and a scattering energy of 39 eV, with a usable energy width of 5 eV, as well as at a scattering energy of 10 eV with a narrower profile of < 1 eV but a roughly ten times larger figure of merit. The spin asymmetry reaches values of up to 70%, which considerably reduces the influence of instrumental asymmetries. The resulting measurements and the energy-angle landscape show rather good agreement with theory (relativistic layer-KKR SPLEED code [Braun et al., 2013; Feder et al., 2012]).
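
For illustration, the following Python sketch evaluates the figure of merit FoM = S²·(I/I0) on a scattering-energy/scattering-angle grid and selects the best working point. The asymmetry and reflectivity arrays are random placeholders, not the measured Ir(001) data; only the ranges mirror those quoted above.

import numpy as np

energies = np.linspace(13.7, 36.7, 24)        # scattering energies (eV)
angles = np.linspace(30.0, 60.0, 13)          # scattering angles (degrees)

rng = np.random.default_rng(1)
asymmetry = rng.uniform(0.0, 0.7, size=(energies.size, angles.size))       # spin sensitivity S
reflectivity = rng.uniform(1e-4, 1e-2, size=(energies.size, angles.size))  # I/I0

fom = asymmetry**2 * reflectivity             # figure-of-merit landscape

i, j = np.unravel_index(np.argmax(fom), fom.shape)
print(f"Best working point: E = {energies[i]:.1f} eV, angle = {angles[j]:.0f} deg, "
      f"S = {asymmetry[i, j]:.2f}, I/I0 = {reflectivity[i, j]:.1e}, FoM = {fom[i, j]:.2e}")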

Relevance: 100.00%

Publisher:

Abstract:

The objective of this study was to assess a pharmacokinetic algorithm designed to predict ketamine plasma concentrations and drive a target-controlled infusion (TCI) in ponies. First, the algorithm was used to simulate the time course of ketamine enantiomer plasma concentrations after administration of an intravenous bolus in six ponies, based on individual pharmacokinetic parameters obtained from a previous experiment. Using the same pharmacokinetic parameters, a TCI of S-ketamine was then performed over 120 min to maintain a plasma concentration of 1 microg/mL. The actual plasma concentrations of S-ketamine were measured in arterial samples using capillary electrophoresis. The performance of the simulation for the administration of a single bolus was very good. During the TCI, the S-ketamine plasma concentrations were maintained within the limits of acceptance (wobble and divergence <20%) at a median of 79% (IQR, 71-90) of the peak concentration reached after the initial bolus. However, in three ponies the steady-state concentrations were significantly higher than targeted. It is hypothesized that an inaccurate estimation of the volume of the central compartment is partly responsible for this difference. The algorithm allowed good predictions for the single bolus administration and appropriate maintenance of constant plasma concentrations.
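
For illustration, the following Python sketch shows the kind of two-compartment bolus-plus-infusion calculation that a plasma-targeted infusion scheme builds on: a loading dose fills the central compartment to the target and a constant infusion equal to clearance times target holds it at steady state. The rate constants and volumes are illustrative assumptions, not the S-ketamine parameters estimated in the ponies, and the scheme is deliberately simpler than a full TCI algorithm.

import numpy as np

# assumed two-compartment parameters for one individual (illustrative only)
v1 = 0.5        # central volume of distribution (L/kg)
k10 = 0.10      # elimination rate constant (1/min)
k12 = 0.05      # central -> peripheral rate constant (1/min)
k21 = 0.03      # peripheral -> central rate constant (1/min)
target = 1.0    # target plasma concentration (ug/mL = mg/L)

bolus = target * v1             # loading dose (mg/kg) that fills V1 to the target
clearance = k10 * v1            # clearance (L/kg/min)
infusion = target * clearance   # constant rate that holds the target at steady state

# Euler simulation of the plasma concentration over 120 min
dt, t_end = 0.1, 120.0
a1, a2 = bolus, 0.0             # drug amounts in the central and peripheral compartments
times = np.arange(0.0, t_end + dt, dt)
conc = np.empty_like(times)
for i, _ in enumerate(times):
    conc[i] = a1 / v1
    da1 = infusion - (k10 + k12) * a1 + k21 * a2
    da2 = k12 * a1 - k21 * a2
    a1 += da1 * dt
    a2 += da2 * dt

# the concentration dips below target while drug distributes into the peripheral
# compartment, which is why real TCI algorithms taper an additional loading infusion
print(f"bolus: {bolus:.2f} mg/kg, infusion: {infusion:.3f} mg/kg/min, "
      f"concentration at 120 min: {conc[-1]:.2f} ug/mL")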

Relevance: 100.00%

Publisher:

Abstract:

Recombinant human tumour necrosis factor (TNF) has a selective effect on angiogenic vessels in tumours. Given that it induces vasoplegia, its clinical use has been limited to administration through isolated limb perfusion (ILP) for regionally advanced melanomas and soft tissue sarcomas of the limbs. When combined with the alkylating agent melphalan, a single ILP produces a very high objective response rate. In melanoma, the complete response (CR) rate is around 80% and the overall objective response rate greater than 90%. In soft tissue sarcomas that are inextirpable, ILP is a neoadjuvant treatment resulting in limb salvage in 80% of the cases. The CR rate averages 20% and the objective response rate is around 80%. The mode of action of TNF-based ILP involves two distinct and successive effects on the tumour-associated vasculature: first, an increase in endothelium permeability leading to improved chemotherapy penetration within the tumour tissue, and second, a selective killing of angiogenic endothelial cells resulting in tumour vessel destruction. The mechanism whereby these events occur involves rapid (of the order of minutes) perturbation of cell-cell adhesive junctions and inhibition of alphavbeta3 integrin signalling in tumour-associated vessels, followed by massive death of endothelial cells and tumour vascular collapse 24 hours later. New, promising approaches for the systemic use of TNF in cancer therapy include TNF targeting by means of single chain antibodies or endothelial cell ligands, or combined administration with drugs perturbing integrin-dependent signalling and sensitizing angiogenic endothelial cells to TNF-induced death.

Relevance: 100.00%

Publisher:

Abstract:

We consider nonparametric missing data models for which the censoring mechanism satisfies coarsening at random and which allow complete observations on the variable X of interest. We show that, beyond some empirical process conditions, the only essential condition for efficiency of an NPMLE of the distribution of X is that the regions associated with incomplete observations on X contain enough complete observations. This is explained heuristically by describing the EM-algorithm. We prove identifiability of the self-consistency equation and efficiency of the NPMLE in order to make this statement rigorous. The usual kind of differentiability conditions in the proof are avoided by using an identity which holds for the NPMLE of linear parameters in convex models. We provide a bivariate censoring application in which the condition, and hence the NPMLE, fails, but where other estimators not based on the NPMLE principle are highly inefficient. It is shown how to slightly reduce the data so that the conditions hold for the reduced data. The conditions are verified for the univariate censoring, double censoring, and Ibragimov-Has'minskii models.
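
As an illustration of the self-consistency/EM idea in the simplest coarsened-data model, the following Python sketch iterates the self-consistency equation for univariate right-censored data; its fixed point is the Kaplan-Meier estimator, the NPMLE in that model. The data are invented for the example.

import numpy as np

times = np.array([2.0, 3.0, 4.0, 5.0, 7.0, 8.0, 10.0, 12.0])   # observed times (illustrative)
delta = np.array([1,   0,   1,   1,   0,   1,   0,    1])      # 1 = event, 0 = right-censored

grid = np.unique(times)
n = len(times)

# start from the naive empirical cdf that treats censored times as events
F = np.array([(times <= t).mean() for t in grid])

for _ in range(500):
    F_new = np.zeros_like(F)
    for j, t in enumerate(grid):
        mass = 0.0
        for x_i, d_i in zip(times, delta):
            if d_i == 1:                       # uncensored: the indicator is known
                mass += float(x_i <= t)
            elif x_i < t:                      # censored: redistribute its mass to the right
                F_ci = np.interp(x_i, grid, F)
                if F_ci < 1.0:
                    mass += (F[j] - F_ci) / (1.0 - F_ci)
        F_new[j] = mass / n
    if np.max(np.abs(F_new - F)) < 1e-12:
        F = F_new
        break
    F = F_new

print(np.column_stack([grid, F.round(4)]))     # the fixed point is the Kaplan-Meier cdf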

Relevance: 100.00%

Publisher:

Abstract:

A method is given for proving efficiency of an NPMLE that is directly linked to empirical process theory. The conditions are, in general, appropriate consistency of the NPMLE, differentiability of the model, differentiability of the parameter of interest, local convexity of the parameter space, and a Donsker class condition for the class of efficient influence functions obtained by varying the parameters. For the case that the model is linear in the parameter and the parameter space is convex, as with most nonparametric missing data models, we show that the method leads to an identity for the NPMLE which almost says that the NPMLE is efficient and which provides us straightforwardly with a consistency and efficiency proof. This identity is extended to an almost linear class of models which contains biased sampling models. To illustrate, the method is applied to the univariate censoring model, random truncation models, the interval censoring case I model, the class of parametric models and a class of semiparametric models.
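
A schematic version of the kind of argument referred to above, written in standard semiparametric-efficiency notation and not necessarily in the exact form used in this work: let $\Psi$ be the parameter of interest, $P_0$ the true distribution, $P_n$ the empirical distribution, $\hat P_n$ the NPMLE, and $D^*(P)$ the efficient influence function at $P$. If the NPMLE solves the efficient influence curve equation, an exact expansion gives
\[
P_n D^*(\hat P_n) = 0, \qquad
\Psi(\hat P_n) - \Psi(P_0) = (P_n - P_0)\, D^*(\hat P_n) + R_2(\hat P_n, P_0),
\]
so that, under consistency of $\hat P_n$, a Donsker condition on the class $\{D^*(P)\}$ and $R_2(\hat P_n, P_0) = o_P(n^{-1/2})$,
\[
\Psi(\hat P_n) - \Psi(P_0) = P_n D^*(P_0) + o_P(n^{-1/2}),
\]
i.e. the plug-in NPMLE estimator is asymptotically linear with the efficient influence function and hence efficient.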