965 results for regularly entered default judgment set aside without costs


Relevance: 30.00%

Publisher:

Abstract:

Across industry, regardless of the intended activity, electricity distribution must meet the current and ever-growing needs of the market, aiming at reliability and process efficiency. Energy must not only be available to ensure continuity of operation, but must also be supplied so as to avoid the costs incurred through deficiencies and failures. The trend toward migrating to intelligent systems is undeniable, and this thesis analyzes the advantages that have made this kind of technology essential, focusing on the motor control center and on how this rugged equipment fits the concept of intelligent panels. The case study compares, in a real scenario, the acquisition of a set of low-voltage electrical panels, contrasting the purchase cost of the same set of panels built with and without the intelligence concept.

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE. To evaluate achromatic contrast sensitivity (CS) with magnocellular (M) and parvocellular (P) probing stimuli in type 2 diabetics, with (DR) or without (NDR) nonproliferative retinopathy. METHODS. Inferred M- and P-dominated responses were assessed with a modified version of the steady-/pulsed-pedestal paradigm (SP/PP) applied in 26 NDR (11 male; mean age, 55 ± 9 years; disease duration, 5 ± 4 years); 19 DR (6 male; mean age, 58 ± 7 years; disease duration, 9 ± 6 years); and 18 controls (CTRL; 12 male; mean age, 55 ± 10 years). Thresholds were measured with pedestals at 7, 12, and 19 cd/m², and increment durations of 17 and 133 ms. The thresholds from the two stimulus durations were used to estimate critical durations (Tc) for each data set. RESULTS. Both DR and NDR patients had significant reduction in CS in both SP and PP paradigms in relation to CTRL (Kruskal-Wallis, P < 0.01). Patients' critical duration estimates for either paradigm were not significantly different from CTRL. CONCLUSIONS. The significant reduction of CS in both paradigms is consistent with losses of CS in both M and P pathways. The CS losses were not accompanied by losses in temporal processing speed in either diabetic group. Significant CS loss in the group without retinopathy reinforces the notion that neural changes associated with the cellular and functional visual loss may play an important role in the etiology of diabetic visual impairment. In addition, the results show that the SP/PP paradigm provides an additional tool for detection and characterization of the early functional damage due to diabetes. (Invest Ophthalmol Vis Sci. 2011; 52:1151-1155) DOI:10.1167/iovs.09-3705

Relevance: 30.00%

Publisher:

Abstract:

Background: The combined effect of diabetes and stroke on disability and mortality remains largely unexplored in Brazil and Latin America. Previous studies have been based primarily on data from developed countries. This study addresses the empirical gap by evaluating the combined impact of diabetes and stroke on disability and mortality in Brazil. Methods: The sample was drawn from two waves of the Survey on Health and Well-being of the Elderly, which followed 2,143 older adults in Sao Paulo, Brazil, from 2000 to 2006. Disability was assessed via measures of activities of daily living (ADL) limitations, severe ADL limitations, and receiving assistance to perform these activities. Logistic and multinomial regression models controlling for sociodemographic and health conditions were used to address the influence of diabetes and stroke on disability and mortality. Results: By itself, the presence of diabetes did not increase the risk of disability or the need for assistance; however, diabetes was related to increased risks when assessed in combination with stroke. After controlling for demographic, social and health conditions, individuals who had experienced stroke but not diabetes were 3.4 times more likely to have ADL limitations than those with neither condition (95% CI 2.26-5.04). This elevated risk more than doubled for those suffering from a combination of diabetes and stroke (OR 7.34, 95% CI 3.73-14.46). Similar effects from the combination of diabetes and stroke were observed for severe ADL limitations (OR 19.75, 95% CI 9.81-39.76) and receiving ADL assistance (OR 16.57, 95% CI 8.39-32.73). Over time, older adults who had experienced a stroke were at higher risk of remaining disabled (RRR 4.28, 95% CI 1.53, 11.95) and of mortality (RRR 3.42, 95% CI 1.65, 7.09). However, risks were even higher for those who had experienced both diabetes and stroke. Diabetes was associated with higher mortality. Conclusions: Findings indicate that a combined history of stroke and diabetes has a great impact on disability prevalence and mortality among older adults in Sao Paulo, Brazil.
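For reference, the odds ratios quoted above follow the standard relationship between a logistic (or multinomial) regression coefficient and the reported OR; this is textbook material, not something taken from the paper:

    % odds ratio from a fitted coefficient beta and its standard error
    \mathrm{OR} = e^{\beta}, \qquad
    95\%\ \mathrm{CI} = e^{\beta \pm 1.96\,\mathrm{SE}(\beta)}
    % e.g. the reported OR of 7.34 for diabetes combined with stroke corresponds
    % to a log-odds coefficient of beta = ln(7.34) ~ 1.99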

Relevance: 30.00%

Publisher:

Abstract:

Objectives: To determine SET protein levels in head and neck squamous cell carcinoma (HNSCC) tissue samples and the role of SET in cell survival and response to oxidative stress in HNSCC cell lines. Materials and Methods: SET protein was analyzed in 372 HNSCC tissue samples by immunohistochemistry using tissue microarray, and in HNSCC cell lines. Oxidative stress was induced with the pro-oxidant tert-butylhydroperoxide (50 and 250 µM) in the HNSCC HN13 cell line either with (siSET) or without (siNC) SET knockdown. Cell viability was evaluated by trypan blue exclusion and annexin V/propidium iodide assays. Caspase-3 and -9, PARP-1, DNA fragmentation, NM23-H1, SET, Akt and phosphorylated Akt (p-Akt) status were assessed. Acidic vesicular organelles (AVOs) were assessed by the acridine orange assay. Glutathione levels and transcripts of antioxidant genes were assayed by fluorometry and real-time PCR, respectively. Results: SET levels were up-regulated in 97% of tumor tissue samples and in HNSCC cell lines. SET knockdown (siSET) in HN13 cells (i) promoted cell death but did not induce caspase activation, PARP-1 cleavage or DNA fragmentation, and (ii) decreased resistance to death induced by oxidative stress, indicating SET involvement through a caspase-independent mechanism. The red fluorescence induced by siSET in HN13 cells in the acridine orange assay suggests SET-dependent prevention of AVO acidification. NM23-H1 protein was restricted to the cytoplasm of siSET/siNC HN13 cells under oxidative stress, in association with a decrease in cleaved SET levels. In the presence of oxidative stress, siNC HN13 cells showed lower GSH antioxidant defense (GSH/GSSG ratio) but higher expression of the antioxidant genes PRDX6, SOD2 and TXN compared to siSET HN13 cells. Still under oxidative stress, p-Akt levels were increased in siNC HN13 cells but not in siSET HN13 cells, indicating its involvement in HN13 cell survival. Similar results for the main SET effects were observed in the HN12 and CAL 27 cell lines, except that HN13 cells were more resistant to death. Conclusion: SET is a potential (i) marker for HNSCC associated with cancer cell resistance and (ii) new target in cancer therapy. (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the pretest probability of Cushing's syndrome (CS) diagnosis by a Bayesian approach using intuitive clinical judgment. MATERIALS AND METHODS: Physicians were asked, at seven endocrinology meetings, to answer three questions: "Based on your personal expertise, after obtaining clinical history and physical examination, without using laboratory tests, what is your probability of diagnosing Cushing's syndrome?"; "For how long have you been practicing endocrinology?"; and "Where do you work?". A Bayesian beta regression was fitted using the WinBUGS software. RESULTS: We obtained 294 questionnaires. The mean pretest probability of CS diagnosis was 51.6% (95% CI: 48.7-54.3). The probability was directly related to experience in endocrinology, but not to the place of work. CONCLUSION: The pretest probability of CS diagnosis was estimated using a Bayesian methodology. Although the pretest probability can be context-dependent, experience based on years of practice may help the practitioner to diagnose CS. Arq Bras Endocrinol Metab. 2012;56(9):633-7
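The pretest probability estimated here is the quantity that enters the usual Bayesian diagnostic update; as a purely illustrative calculation (the likelihood ratio below is hypothetical, not taken from the paper):

    % Bayes update from pretest probability to post-test probability
    \text{pretest odds} = \frac{0.516}{1 - 0.516} \approx 1.07, \qquad
    \text{post-test odds} = \text{pretest odds} \times LR^{+}
    % with a hypothetical positive likelihood ratio LR+ = 10,
    % post-test odds ~ 10.7, i.e. post-test probability ~ 10.7/11.7 ~ 0.91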

Relevance: 30.00%

Publisher:

Abstract:

[ES] The basic needs of companies tend to be the same whether the company is large or small: the infrastructure on which they build their business processes and the applications used to manage them are usually almost identical. If we divide a company's ICT infrastructure into hardware, system and applications, we can see that in most companies the system layer is nearly the same. Moreover, thanks to virtualization, which has swept into the computing world, we can completely decouple software from hardware, gaining enormous flexibility when planning infrastructure deployments. This final-year project (TFG) is built on these two ideas: uniformity of the system and independence from the hardware. For the first, the basic infrastructure (system) that almost any company needs is studied; a solution intended to be valid for a large number of companies in our environment is proposed and its design is carried out. For the second, a service-based system is developed that is complete enough to meet the identified needs yet flexible enough that growth in capacity or services can be accomplished easily, without requiring changes to the structure of the system or its modules. The result is therefore an end-to-end design, covering both hardware and software, with emphasis on system integration and on the interrelation between the various elements, together with its economic valuation. Finally, as an example of the flexibility of the chosen design, two modifications of the original design are presented. The first is an extension that provides greater security through storage redundancy and, as a further step, sets up a remote data center (CPD). The second is a low-cost design in which, while keeping the same services, the cost is reduced by using products with somewhat lower performance, while the overall solution maintains high levels of quality and service.

Relevance: 30.00%

Publisher:

Abstract:

Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination, UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, by contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, were developed in recent years.
In situ bioremediation is a technique that is applied to soil and groundwater at the site without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than present as a separate phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which then leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination). Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of synthetic surfactants may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al.
showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs, although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
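The link between sorption and low bioavailability sketched above is commonly summarised (this is the standard linear-partitioning description, not a relation derived in this thesis) by the distribution coefficient and the resulting dissolved fraction:

    % equilibrium partitioning of a PAH between soil and pore water
    K_d = f_{oc}\,K_{oc}, \qquad C_{sorbed} = K_d\,C_{aq}
    % fraction of the total PAH mass that is dissolved (and hence bioavailable),
    % for soil bulk density rho_b and volumetric water content theta:
    f_{diss} = \frac{\theta}{\theta + \rho_b K_d}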

Relevance: 30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise determination of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so cross-correlation was not used. We found that the large geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest quality of the results given by cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long time to process the data, and therefore the user can immediately check the results. During a field survey, this feature makes a quasi real-time check possible, allowing the immediate optimization of the array geometry, if so suggested by the results at an early stage.
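As a rough illustration of the waveform cross-correlation step described above (a minimal NumPy sketch with a synthetic wavelet, not the processing code actually used in the thesis), the lag between two similar traces can be read off the peak of their normalised cross-correlation:

    import numpy as np

    def xcorr_delay(x, y, dt):
        """Return how much trace x lags trace y (in seconds), taken from the
        peak of the full cross-correlation of the zero-mean, unit-variance traces."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        cc = np.correlate(x, y, mode="full")        # length len(x)+len(y)-1
        lags = np.arange(-(len(y) - 1), len(x))     # sample lags matching cc
        return lags[np.argmax(cc)] * dt

    # toy check: a synthetic wavelet delayed by 0.25 s is recovered
    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    wav = np.exp(-((t - 2.00) / 0.2) ** 2) * np.sin(2 * np.pi * 1.0 * t)
    delayed = np.exp(-((t - 2.25) / 0.2) ** 2) * np.sin(2 * np.pi * 1.0 * (t - 0.25))
    print(xcorr_delay(delayed, wav, dt))            # ~ +0.25

Sub-sample precision, as mentioned above, would additionally require interpolating the traces (or the correlation peak) before picking the maximum.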

Relevance: 30.00%

Publisher:

Abstract:

Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for the management of a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to be adopted is rather less straightforward. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses along the way, in addition to the cost of the rehabilitation exercise itself. One is confronted with a typical 'Hamlet-esque dilemma': 'to repair or not to repair', or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs, and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe, at the best market price. It is proposed that risk cost provides an appropriate framework to decide the balance between investment in replacing an asset and operational expenditure on maintaining it. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a comparison between maintenance and replacement expenditures. The cost of maintaining the assets should be described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration, risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, in order to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include when estimating maintenance costs for the existing assets, the more precise the estimate of the expected service life will be. The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of this approach is the effort to demonstrate that it is possible to include, in decision-making, factors such as the cost of the risk associated with a decline in the level of performance, the extent of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for making decisions regarding replacements.
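A minimal sketch of the cost comparison described above (with invented numbers, not the Oslo data): the replacement year is the point where the forecast annual cost of keeping the existing pipe exceeds the annuity of investing in a new equivalent one.

    def annuity(investment, rate, years):
        """Equivalent annual cost of a one-off investment (capital recovery factor)."""
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        return investment * crf

    def replacement_year(maintenance_forecast, investment, rate, service_life):
        """First year in which the forecast maintenance-plus-risk cost of the
        existing pipe exceeds the annuity of a new equivalent pipe (None if never)."""
        threshold = annuity(investment, rate, service_life)
        for year, cost in enumerate(maintenance_forecast, start=1):
            if cost > threshold:
                return year
        return None

    # hypothetical figures: 20 000 investment, 4 % discount rate, 60-year life,
    # maintenance costs rising 8 % per year from 200
    maintenance = [200 * 1.08 ** t for t in range(40)]
    print(annuity(20_000, 0.04, 60))                        # ~ 884 per year
    print(replacement_year(maintenance, 20_000, 0.04, 60))  # ~ year 21 with these toy inputs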

Relevance: 30.00%

Publisher:

Abstract:

In recent years, consumers have become more aware of and sensitive to environmental and food safety matters. They are more and more interested in organic agriculture and markets, and tend to prefer organic products to their conventional counterparts. To increase the quality and reduce the cost of production in organic and low-input agriculture, the European 6th Framework Programme project "QLIF" investigated the use of natural products such as bio-inoculants. These are mostly composed of arbuscular mycorrhizal fungi and other microorganisms, so-called "plant probiotic" microorganisms (PPM), because they help maintain high yields even under abiotic and biotic stress conditions. Italian law (DLgs 217, 2006) has recently included them as "special fertilizers". This thesis focuses on the use of special fertilizers when growing tomatoes with organic methods in open-field conditions, and on the effects they induce on yield, quality and rhizospheric microbial communities. The primary objective was to achieve a better understanding of how plant-probiotic microflora management could buffer a future reduction of external inputs while maintaining tomato fruit yield, quality and system sustainability. We studied rhizospheric microbial communities with statistical, molecular and histological methods. This work has demonstrated that long-lasting introduction of inoculum positively affected mycorrhizal colonization and resistance against pathogens. Repeated introduction of compost, instead, negatively affected tomato quality, likely because it destabilized the ripening process, leading to over-ripening and increasing the amount of non-marketable product. After two years without any significant difference, in the third year extreme combinations of inoculum and compost inputs (low inoculum with high amounts of compost, or vice versa) increased mycorrhizal colonization. As a result, in order to reduce production costs, we recommend using only inoculum rather than compost. Secondly, this thesis analyses how mycorrhizal colonization varies with respect to different tomato cultivars and experimental field locations. We found statistically significant differences between locations and between arbuscular colonization patterns per variety. To confirm these histological findings, we started a set of molecular experiments. The thesis discusses the preliminary results and recommends their continuation and refinement to obtain complete results.

Relevance: 30.00%

Publisher:

Abstract:

Hydrogen production in the green microalga Chlamydomonas reinhardtii was evaluated by means of a detailed physiological and biotechnological study. First, a wide screening of hydrogen productivity was carried out on 22 strains of C. reinhardtii, most of which were mutated at the level of the D1 protein. The screening revealed for the first time that mutations of the D1 protein may result in increased hydrogen production. Indeed, production ranged between 0 and more than 500 mL of hydrogen per liter of culture (Torzillo, Scoma et al., 2007a), the highest producer (L159I-N230Y) being up to 5 times more productive than the strain cc124 widely adopted in the literature (Torzillo, Scoma, et al., 2007b). The improved productivity of the D1 protein mutants was generally the result of high photosynthetic capabilities counteracted by high respiration rates. Culture conditions were then optimized according to the results of the physiological study of selected strains. In a first step, the photobioreactor (PBR) was provided with a multiple-impeller stirring system designed, developed and tested by us, using the strain cc124. It was found that the impeller system was able to induce regular and turbulent mixing, which led to improved photosynthetic yields by means of light/dark cycles. Moreover, the improved mixing regime sustained higher respiration rates compared with what was obtained with the commonly used stir-bar mixing system. In light of the results of the initial screening phase, both these factors are relevant to hydrogen production. Indeed, very high energy conversion efficiencies (light to hydrogen) were obtained with the impeller device, proving that our PBR was a good tool both to improve and to study photosynthetic processes (Giannelli, Scoma et al., 2009). In the second part of the optimization, an accurate analysis of all the positive features of the high-performance strain L159I-N230Y pointed out that, with respect to the WT, it has: (1) a larger chlorophyll optical cross-section; (2) a higher electron transfer rate by PSII; (3) a higher respiration rate; (4) a higher efficiency of utilization of the hydrogenase; (5) a higher starch synthesis capability; (6) a higher per-cell D1 protein amount; (7) a higher zeaxanthin synthesis capability (Torzillo, Scoma et al., 2009). This information was combined with that obtained with the impeller mixing device to find the best culture conditions to optimize productivity with strain L159I-N230Y. The main aim was to sustain the direct PSII contribution for as long as possible, since it leads to hydrogen production without net CO2 release. Finally, an outstanding maximum rate of 11.1 ± 1.0 mL/L/h was reached and maintained for 21.8 ± 7.7 hours, when the effective photochemical efficiency of PSII (ΔF/F'm) underwent a final drop to zero. If expressed in terms of chlorophyll (24.0 ± 2.2 µmoles/mg chl/h), these rates of production are 4 times higher than what has been reported in the literature to date (Scoma et al., 2010a, submitted). DCMU addition experiments confirmed the key role played by PSII in sustaining such rates. On the other hand, experiments carried out in similar conditions with the control strain cc124 showed an improved final productivity, but no constant direct PSII contribution. These results showed that, aside from fermentation processes, if proper conditions are supplied to selected strains, hydrogen production can be substantially enhanced by means of biophotolysis. A last study on the physiology of the process was carried out with the mutant IL.
Although able to express and very efficiently utilize the hydrogenase enzyme, this strain was unable to produce hydrogen when sulfur-deprived. However, in a specific set of experiments this goal was finally reached, pointing out that besides (1) a state 1-2 transition of the photosynthetic apparatus, (2) starch storage and (3) the establishment of anaerobiosis, a timely transition to hydrogen production is also needed under sulfur deprivation in order to induce the process before the energy reserves are driven towards other processes necessary for the survival of the cell. This information turned out to be crucial when moving outdoors for hydrogen production in a tubular horizontal 50-liter PBR under sunlight radiation. First attempts with laboratory-grown cultures showed that no hydrogen production under sulfur starvation can be induced if the culture is not previously adapted outdoors. Indeed, under these conditions hydrogen production under direct sunlight radiation with C. reinhardtii was finally achieved for the first time in the literature (Scoma et al., 2010b, submitted). Experiments were also carried out to optimize productivity in outdoor conditions with respect to the light dilution within the culture layers. Finally, a brief study of the anaerobic metabolism of C. reinhardtii during hydrogen oxidation was carried out. This study represents a good complement to the understanding of the complex interplay of pathways that operate concomitantly in this microalga.

Relevance: 30.00%

Publisher:

Abstract:

The main topic of the thesis is the conflict between disclosure in financial markets and the firm's need for confidentiality. After a review of the major dynamics of information production and dissemination in the stock market, the analysis moves to the interactions between the information that a firm is typically interested in keeping confidential, such as trade secrets or the data usually covered by patent protection, and the countervailing demand for disclosure arising from financial markets. The analysis demonstrates that, despite the seeming divergence between the informational content typically disclosed to investors and the information usually covered by intellectual property protection, the overlapping areas are nonetheless wide, and the conflict between transparency in financial markets and the firm's need for confidentiality arises frequently and systematically. Indeed, the company's disclosure policy is based on a continuous trade-off between the costs and the benefits related to the public dissemination of information. Such costs are mainly represented by the competitive harm caused by competitors' access to sensitive data, while the benefits mainly refer to the lower cost of capital that the firm obtains as a consequence of more disclosure. Secrecy shields the value of costly produced information against third parties' free riding and therefore constitutes a means to protect the firm's incentives toward the production of new information, and especially toward technological and business innovation. Excessively demanding standards of transparency in financial markets might hinder this set of incentives and thus jeopardize the dynamics of innovation production. Within Italian securities regulation, there are two sets of rules most relevant to this issue: the first is the rule that mandates issuers to promptly disclose all price-sensitive information to the market on an ongoing basis; the second is the duty to disclose in the prospectus all the information "necessary to enable investors to make an informed assessment" of the issuer's financial and economic perspectives. Both rules impose high disclosure standards and have potentially unlimited scope. Yet they have safe harbours aimed at protecting the issuer's need for confidentiality. Despite the structural incompatibility between public dissemination of information and the firm's need to keep certain data confidential, there are ways to convey information to the market while preserving at the same time the firm's need for confidentiality. Such means are insider trading and selective disclosure: both are based on mechanisms whereby the process of price reaction to the new information takes place without any corresponding public release of data. Therefore, they offer a solution to the conflict between disclosure and the need for confidentiality that enhances market efficiency and at the same time preserves the private incentives toward innovation.

Relevance: 30.00%

Publisher:

Abstract:

In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machines. A large number of manufacturing machine builders, and notably packaging machine builders, are present in Italy; a particularly high concentration of this industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical and electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called on to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been adopting this approach, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices, which are more vulnerable by their own nature, are present alongside reliable mechanical elements. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault tolerant control results derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey of the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
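As a purely illustrative sketch (in Python, with hypothetical states and events; this is not the Generalized Actuator or Generalized Device defined in the thesis), the kind of discrete-event model used for logic-control components can be captured by a small transition map over which faulty states can later be checked:

    from dataclasses import dataclass, field

    @dataclass
    class Automaton:
        """Minimal discrete-event model: a transition map, a current state,
        and the set of states flagged as faulty for diagnosis."""
        transitions: dict                      # (state, event) -> next state
        state: str = "idle"
        faulty_states: set = field(default_factory=set)

        def step(self, event):
            # unknown (state, event) pairs leave the state unchanged
            self.state = self.transitions.get((self.state, event), self.state)
            return self.state

    # hypothetical gripper-like actuator component
    gripper = Automaton(
        transitions={
            ("idle", "cmd_close"): "closing",
            ("closing", "closed_sensor"): "holding",
            ("closing", "timeout"): "fault",   # missed sensor -> diagnosable fault
            ("holding", "cmd_open"): "opening",
            ("opening", "open_sensor"): "idle",
        },
        faulty_states={"fault"},
    )

    for ev in ["cmd_close", "timeout"]:
        print(ev, "->", gripper.step(ev))
    print("fault detected" if gripper.state in gripper.faulty_states else "ok")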

Relevance: 30.00%

Publisher:

Abstract:

Different tools have been used to set up and apply the model for the fulfilment of the objective of this research.

1. The Model

The base model that has been used is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a Benefit-Cost Analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.

2. Tools and methods

2.1. The Expert Choice software. The software Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem.

2.2. Personal interviews with the farms. For this research, the EMAS-certified farms of the Emilia-Romagna region were identified. Information was provided by the EMAS centre in Vienna. Personal interviews were carried out with each farm in order to obtain a complete and realistic judgment of each criterion of the hierarchy.

2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.

3. Elaboration of the data

After data collection, the data were processed using the Expert Choice software.

4. Results of the analysis

The result of the figures above (see other document) is a series of numbers which are fractions of unity. These are to be interpreted as the relative contribution of each element to the fulfilment of the relative objective. Calculating the benefit/cost ratio for each alternative, the following is obtained:
Alternative One (implement EMAS): benefits ratio 0.877; costs ratio 0.815; benefit/cost ratio 0.877/0.815 = 1.08.
Alternative Two (do not implement EMAS): benefits ratio 0.123; costs ratio 0.185; benefit/cost ratio 0.123/0.185 = 0.66.
As stated above, the alternative with the highest ratio is the best solution for the organization. This means that the research carried out and the model implemented suggest that EMAS adoption in the agricultural sector is the best alternative. It has to be noted that the ratio is 1.08, which is a relatively low positive value. This shows the fragility of this conclusion and suggests a careful examination of the benefits and costs for each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their intervention regarding adoption of the scheme in the agricultural sector. According to the AHP elaboration of judgments, the main considerations on benefits are the following:
- Legal compliance seems to be the most important benefit for the agricultural sector, since its rank is 0.471.
- The next two most important benefits are improved internal organization (ranking 0.230), followed by competitive advantage (ranking 0.221), mostly due to the sub-element improved image (ranking 0.743).
Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process. According to the AHP elaboration of judgments, the main considerations on costs are the following:
- External costs seem to be largely more important than internal ones (ranking 0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle.
- The implementation of the EMS is the most challenging element regarding the internal costs (ranking 0.750).
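As a sketch of the priority-derivation step behind these figures (the 3x3 comparison matrix below is invented for illustration; only the final benefit/cost ratios are taken from the text), AHP weights are the normalised principal eigenvector of a pairwise-comparison matrix:

    import numpy as np

    def ahp_priorities(pairwise):
        """Priority vector of an AHP pairwise-comparison matrix
        (principal right eigenvector, normalised to sum to 1)."""
        vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        return w / w.sum()

    # hypothetical benefit comparison (legal compliance, internal organization,
    # competitive advantage) -- illustrative only, not the questionnaire data
    benefits = ahp_priorities([[1,   3, 3],
                               [1/3, 1, 1],
                               [1/3, 1, 1]])
    print(benefits)            # ~ [0.60, 0.20, 0.20]

    # benefit/cost ratio per alternative, as reported in the abstract
    print(0.877 / 0.815)       # implement EMAS        -> ~1.08
    print(0.123 / 0.185)       # do not implement EMAS -> ~0.66

In a full AHP study a consistency check (the consistency ratio of each comparison matrix) would normally accompany this step; it is omitted here for brevity.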

Relevance: 30.00%

Publisher:

Abstract:

Background: Delirium is defined as an acute disorder of attention and cognition. Delirium is common in hospitalized elderly patients and is associated with increased morbidity, length of stay and patient care costs. Although delirium can develop at any time during hospitalization, in the surgical context it typically presents early in the post-operative period (post-operative delirium, POD). The molecular mechanisms and possible genetic basis of POD onset are not known, and the risk factors are not completely defined. Our hypothesis is that genetic risk factors involving the inflammatory response could have effects on the immunoneuroendocrine system. Moreover, our previous data (inflamm-aging) suggest that aging is associated with an increase in inflammatory status, favouring age-related conditions such as neurodegenerative diseases, frailty and depression, among others. Some pro-inflammatory or anti-inflammatory cytokines seem to play a crucial role in increasing the inflammatory status and in the communication and regulation of the immunoneuroendocrine system. Objective: this study evaluated the incidence of POD in elderly patients undergoing general surgery and the clinical/physical and psychological risk factors for POD onset, and investigated inflammatory and genetic risk factors. Moreover, this study evaluated the consequences of POD in terms of institutionalization, development of permanent cognitive dysfunction or dementia, and mortality. Methods: patients aged over 65 admitted for surgery at the Urgency Unit of S.Orsola-Malpighi Hospital were eligible for this case-control study. Risk factors significantly associated with POD in univariate analysis were entered into multivariate analysis to establish those independently associated with POD. Preoperative plasma levels of 9 inflammatory markers were measured in 42 control subjects and 43 subjects who developed POD. Functional polymorphisms of the IL-1α, IL-2, IL-6, IL-8, IL-10 and TNF-alpha cytokine genes were determined in 176 control subjects and 27 POD subjects. Results: A total of 351 patients were enrolled in the study. The incidence of POD was 13.2%. Independent variables associated with POD were: age, co-morbidity, preoperative cognitive impairment, and glucose abnormalities. Median length of hospital stay was 21 days for patients with POD versus 8 days for control patients (P < 0.001). The hospital mortality rate was 19% and 8.4%, respectively (P = 0.021), and the mortality rate after 1 year was also higher in POD (P = 0.0001). The baseline IL-6 concentration was higher in POD patients than in patients without POD, whereas IL-2 was lower in POD patients compared to patients without POD. In a multivariate analysis only IL-6 remained associated with POD. Moreover, IL-6, IL-8 and IL-2 were associated with co-morbidity, intra-hospital mortality, compromised functional status and emergency admission. No significant differences in genotype distribution were found between POD subjects and controls for any SNP analyzed in this study. Conclusion: In this study we found older age, co-morbidity, cognitive impairment, glucose abnormalities and baseline IL-6 level to be independent risk factors for the development of POD. IL-6 could be proposed as a marker of a trait associated with an increased risk of delirium, i.e., a raised premorbid IL-6 level predicts the development of delirium.