925 results for open-circuit potential transients
Abstract:
Doctoral program in oceanography
Abstract:
A novel design based on electric field-free open microwell arrays for the automated continuous-flow sorting of single or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to flexible printed circuit board technology, which permits the production of cost-effective, large-area arrays with geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically, with the required feedback signal generated by a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force that repels the particles from the microwell entrance. To increase the parallelism at which the cell-isolation function is implemented, a new technique based on coplanar electrodes was implemented to detect particle presence. A lock-in amplifying scheme was used to monitor the impedance of the channel as it was perturbed by particles flowing in high-conductivity suspension media. The impedance measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude caused by variation of the particles' position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
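As an illustration of the lock-in amplifying scheme described above, the sketch below demodulates a channel-impedance signal at a known carrier frequency: in-phase and quadrature mixing followed by low-pass filtering recovers the slow amplitude dip caused by a passing particle. The carrier frequency, sample rate and the synthetic 2% impedance dip are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def lock_in_demodulate(signal, fs, f_ref, tau=1e-4):
    """Recover amplitude and phase of `signal` at carrier f_ref (Hz).

    fs  : sampling rate (Hz)
    tau : low-pass time constant (s), chosen below the expected particle transit time
    """
    t = np.arange(len(signal)) / fs
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)   # in-phase mixing
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)   # quadrature mixing

    # single-pole low-pass filter (exponential moving average)
    alpha = 1.0 / (1.0 + tau * fs)
    def lowpass(x):
        y = np.empty_like(x)
        acc = x[0]
        for k, v in enumerate(x):
            acc += alpha * (v - acc)
            y[k] = acc
        return y

    i_lp, q_lp = lowpass(i_mix), lowpass(q_mix)
    amplitude = 2.0 * np.hypot(i_lp, q_lp)           # estimated carrier amplitude
    phase = np.arctan2(q_lp, i_lp)
    return amplitude, phase

# Illustrative use: a 1 MHz carrier whose amplitude dips while a bead crosses the electrodes
fs, f_ref = 10e6, 1e6
t = np.arange(int(5e-3 * fs)) / fs
transit = 1.0 - 0.02 * np.exp(-((t - 2.5e-3) / 2e-4) ** 2)   # hypothetical 2% impedance dip
raw = transit * np.sin(2 * np.pi * f_ref * t) + 0.01 * np.random.randn(t.size)
amp, _ = lock_in_demodulate(raw, fs, f_ref)
```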
Abstract:
In this Thesis we consider a class of second order partial differential operators with non-negative characteristic form and smooth coefficients. The main assumptions on the relevant operators are hypoellipticity and the existence of a well-behaved global fundamental solution. We first carry out a detailed analysis of the L-Green function for arbitrary open sets and of its applications to Riesz-type Representation Theorems for L-subharmonic and L-superharmonic functions. Then, we prove an Inverse Mean Value Theorem characterizing the superlevel sets of the fundamental solution by means of L-harmonic functions. Furthermore, we establish a Lebesgue-type result showing the role of the mean-integral operator in solving the homogeneous Dirichlet problem related to L in the Perron-Wiener sense. Finally, we compare Perron-Wiener and weak variational solutions of the homogeneous Dirichlet problem, under specific hypotheses on the boundary datum.
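For orientation only, the model case L = Δ (the classical Laplacian, not the general hypoelliptic operators treated in the thesis) shows how superlevel sets of the fundamental solution and surface mean-value operators characterize harmonicity:

```latex
% Model case L = \Delta in \mathbb{R}^N, N \ge 3 (illustration only):
\[
  \Gamma(x,y) = c_N\,|x-y|^{2-N},
  \qquad
  \Omega_r(x) := \{\, y :\ \Gamma(x,y) > 1/r \,\}
               = B\bigl(x,\ (c_N r)^{1/(N-2)}\bigr),
\]
\[
  m_r(u)(x) := \frac{1}{\sigma\bigl(\partial\Omega_r(x)\bigr)}
               \int_{\partial\Omega_r(x)} u(y)\, d\sigma(y),
\]
\[
  u \ \text{harmonic in an open set } \Omega
  \;\Longleftrightarrow\;
  u(x) = m_r(u)(x)
  \ \ \text{for all } x \in \Omega,\ r > 0
  \ \text{with } \overline{\Omega_r(x)} \subset \Omega .
\]
% The sub-mean value inequality u(x) \le m_r(u)(x) characterizes subharmonicity analogously;
% the thesis develops the corresponding operators and kernels for general operators L.
```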
Abstract:
Synthetic biology has shown that the metabolic behavior of mammalian cells can be altered by genetic devices such as epigenetic and hysteretic switches, timers and oscillators, biocomputers, hormone systems and heterologous metabolic shunts. To explore the potential of such devices for therapeutic strategies, we designed a synthetic mammalian circuit to maintain uric acid homeostasis in the bloodstream, disturbance of which is associated with tumor lysis syndrome and gout. This synthetic device consists of a modified Deinococcus radiodurans-derived protein that senses uric acid levels and triggers dose-dependent derepression of a secretion-engineered Aspergillus flavus urate oxidase that eliminates uric acid. In urate oxidase-deficient mice, which develop acute hyperuricemia, the synthetic circuit decreased blood urate concentration to stable sub-pathologic levels in a dose-dependent manner and reduced uric acid crystal deposits in the kidney. Synthetic gene-network devices providing self-sufficient control of pathologic metabolites represent molecular prostheses, which may foster advances in future gene- and cell-based therapies.
Abstract:
BACKGROUND: Until August 2004, 106 forensic cases were examined with postmortem multislice computed tomography (MSCT) and magnetic resonance (MR) imaging before traditional autopsy within the Virtopsy project. Intrahepatic gas (IHG) was a frequent finding in postmortem MSCT examinations. The aim of this study was to investigate its cause and significance. METHODS: Eighty-four virtopsy cases were retrospectively investigated concerning the occurrence, location, and volume of IHG in postmortem MSCT imaging (1.25 mm collimation, 1.25 mm thickness). We assessed and noted the occurrence of intestinal distention, putrefaction, and systemic gas embolism, as well as the cause of death, possible open trauma, possible artificial respiration, and the postmortem interval. Relations between the findings were investigated using contingency tables (chi-square test), and the postmortem intervals of the two groups were compared using the t test in 79 nonputrefied corpses. RESULTS: IHG was found in 47 cases (59.5%). In five of these cases, the IHG was caused or influenced by putrefaction. Gas distribution within the liver of the remaining 42 cases was as follows: hepatic arteries in 21 cases, hepatic veins in 35 cases, and portal vein branches in 13 cases, with combinations occurring in 20 cases. The presence of IHG was strongly related to open trauma with systemic gas embolism. Pulmonary barotrauma, as occurs under artificial respiration or in drowning, also caused IHG. Putrefaction did not seem to influence the occurrence of IHG until macroscopic signs of putrefaction were noticeable. CONCLUSIONS: IHG is a frequent finding in traumatic causes of death and requires a systemic gas embolism. Exceptions are putrefied or burned corpses. Common clinical causes such as necrotic bowel disease rarely appear as a cause of IHG in our forensic case material.
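The two statistical procedures named in the Methods (contingency-table chi-square test and two-sample t test) can be reproduced with standard tools, as in the sketch below; the counts and postmortem intervals are hypothetical placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical 2x2 contingency table: rows = IHG present/absent,
# columns = open trauma with systemic gas embolism yes/no (placeholder counts).
table = [[30, 17],
         [ 6, 31]]
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")

# Hypothetical postmortem intervals (hours) for cases with and without IHG.
pmi_with_ihg = [18, 24, 30, 36, 41, 48]
pmi_without  = [12, 16, 20, 22, 26, 31]
t_stat, p_t = stats.ttest_ind(pmi_with_ihg, pmi_without)
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```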
Abstract:
Studies suggest that hurricane hazard patterns (e.g. intensity and frequency) may change as a consequence of the changing global climate. As hurricane patterns change, it can be expected that hurricane damage risks and costs may change as a result. This indicates the need to develop hurricane risk assessment models that are capable of accounting for changing hurricane hazard patterns, and to develop hurricane mitigation and climatic adaptation strategies. This thesis proposes a comprehensive hurricane risk assessment and mitigation framework that accounts for a changing global climate and that can be adapted to various types of infrastructure, including residential buildings and power distribution poles. The framework includes hurricane wind field models, hurricane surge height models and hurricane vulnerability models to estimate damage risks due to hurricane wind speed, hurricane frequency, and hurricane-induced storm surge, and accounts for the time-dependent properties of these parameters as a result of climate change. The research then implements median insured house values, discount rates, housing inventory, etc. to estimate hurricane damage costs to residential construction. The framework was also adapted to timber distribution poles to assess the impacts climate change may have on timber distribution pole failure. This research finds that climate change may have a significant impact on the hurricane damage risks and damage costs of residential construction and timber distribution poles. In an effort to reduce damage costs, this research develops mitigation/adaptation strategies for residential construction and timber distribution poles. The cost-effectiveness of these adaptation/mitigation strategies is evaluated through the use of a Life-Cycle Cost (LCC) analysis. In addition, a scenario-based analysis of mitigation strategies for timber distribution poles is included. For both residential construction and timber distribution poles, adaptation/mitigation measures were found to reduce damage costs. Finally, the research develops the Coastal Community Social Vulnerability Index (CCSVI) to include the social vulnerability of a region to hurricane hazards within this hurricane risk assessment. This index quantifies the social vulnerability of a region by combining various social characteristics of a region with time-dependent parameters of hurricanes (i.e. hurricane wind and hurricane-induced storm surge). Climate change was found to have an impact on the CCSVI (i.e. climate change may have an impact on the social vulnerability of hurricane-prone regions).
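The Life-Cycle Cost comparison described above can be sketched as discounted expected annual damage plus any upfront mitigation cost; the climate-scaling factor, discount rate and cost figures below are illustrative assumptions, not values from the thesis.

```python
def life_cycle_cost(upfront_cost, expected_annual_damage, years, discount_rate,
                    annual_hazard_growth=0.0):
    """Present value of an upfront cost plus expected hurricane damage over `years`.

    annual_hazard_growth models a time-dependent hazard (e.g. climate change)
    by scaling the expected annual damage each year.
    """
    lcc = upfront_cost
    for t in range(1, years + 1):
        damage_t = expected_annual_damage * (1.0 + annual_hazard_growth) ** t
        lcc += damage_t / (1.0 + discount_rate) ** t
    return lcc

# Illustrative comparison of an unmitigated house vs. a retrofitted one (hypothetical numbers)
baseline = life_cycle_cost(0.0,    2500.0, years=50, discount_rate=0.03, annual_hazard_growth=0.01)
retrofit = life_cycle_cost(8000.0, 1200.0, years=50, discount_rate=0.03, annual_hazard_growth=0.01)
mitigation_is_cost_effective = retrofit < baseline
```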
Abstract:
The electric utility business is an inherently dangerous area to work in, with employees exposed to many potential hazards daily. One such hazard is an arc flash. An arc flash is a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for and minimize the extent of harm to oneself and other employees, and the damage to equipment, due to such a violent event. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment, to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short circuit and relay coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std. 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards, analysis methods (both software and empirically derived equations), issues of concern with calculation methods, and the work conducted at MP. This work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
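As a pointer to the kind of calculation benchmarked in this work, the sketch below implements the theoretically derived (Lee) incident-energy equation included in IEEE Std 1584-2002, which is commonly applied above the empirical model's voltage range; the input values are illustrative, and an actual arc-flash study should follow the full standard.

```python
def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """Theoretically derived (Lee) incident energy per IEEE Std 1584-2002.

    v_kv    : system voltage (kV)
    i_bf_ka : bolted three-phase fault current (kA)
    t_s     : arcing/clearing time (s)
    d_mm    : working distance (mm)
    Returns incident energy in J/cm^2 and cal/cm^2.
    """
    e_j_cm2 = 2.142e6 * v_kv * i_bf_ka * (t_s / d_mm**2)
    return e_j_cm2, e_j_cm2 / 4.184   # 1 cal = 4.184 J

# Illustrative transmission-level example (hypothetical values; Lee is known to be conservative)
e_j, e_cal = lee_incident_energy(v_kv=115.0, i_bf_ka=20.0, t_s=0.1, d_mm=1000.0)
print(f"Incident energy: {e_cal:.1f} cal/cm^2")
```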
Abstract:
The use of information technology (IT) in dentistry is far ranging. In order to produce a working document for the dental educator, this paper focuses on those methods where IT can assist in the education and competence development of dental students and dentists (e.g. e-learning, distance learning, simulations and computer-based assessment). Web pages and other information-gathering devices have become an essential part of our daily life, as they provide extensive information on all aspects of our society. This is mirrored in dental education, where many different tools are available, as listed in this report. IT offers added value to traditional teaching methods, and examples are provided. In spite of the continuing debate on the learning effectiveness of e-learning applications, students request such approaches as an adjunct to the traditional delivery of learning materials. Faculty require support to enable them to use the technology effectively for the benefit of their students. This support should be provided by the institution, and it is suggested that, where possible, institutions should appoint an e-learning champion with good interpersonal skills to support and encourage faculty change. From a global perspective, all students and faculty should have access to e-learning tools. This report encourages open access to e-learning material, platforms and programs. Such learning materials must have well-defined learning objectives and undergo peer review to ensure content validity, accuracy, currency, the use of evidence-based data and the use of best practices. To ensure that the developers' intellectual rights are protected, the original content needs to be secure from unauthorized changes. Strategies and recommendations on how to improve the quality of e-learning are outlined. In the area of assessment, traditional examination schemes can be enriched by IT, whilst the Internet can provide many innovative approaches. Future trends in IT will evolve around improved uptake and access facilitated by the technology (hardware and software). The use of Web 2.0 shows considerable promise, and this may have implications on a global level. For example, the one-laptop-per-child project is the best example of what Web 2.0 can do: minimal use of hardware to maximize use of the Internet structure. In essence, simple technology can overcome many of the barriers to learning. IT will always remain exciting, as it is always changing, and its users, whether dental students, educators or patients, are like chameleons adapting to the ever-changing landscape.
Abstract:
Prediction of radiated fields from transmission lines has not previously been studied from a panoptical power system perspective. The application of BPL technologies to overhead transmission lines would benefit greatly from an ability to simulate real power system environments, not limited to the transmission lines themselves. Presently, circuit-based transmission line models used by EMTP-type programs utilize Carson's formula for a waveguide parallel to an interface. This formula is not valid for calculations at high frequencies, considering the effects of earth return currents. This thesis explains the challenges of developing such improved models, explores an approach to combining circuit-based and electromagnetics modeling to predict radiated fields from transmission lines, exposes inadequacies of simulation tools, and suggests methods of extending the validity of transmission line models into very high frequency ranges. Electromagnetics programs are commonly used to study radiated fields from transmission lines. However, an approach is proposed here which is also able to incorporate the components of a power system through the combined use of EMTP-type models. Carson's formulas address the series impedance of electrical conductors above and parallel to the earth. These equations have been analyzed to show their inherent assumptions and what the implications are. Additionally, the lack of validity at higher frequencies has been demonstrated, showing the need to replace Carson's formulas for these types of studies. This body of work leads to several conclusions about the relatively new study of BPL. Foremost, there is a gap in modeling capabilities which has been bridged through integration of circuit-based and electromagnetics modeling, allowing more realistic prediction of BPL performance and radiated fields. The proposed approach is limited in its scope of validity due to the formulas used by EMTP-type software. To extend the range of validity, a new set of equations must be identified and implemented in the approach. Several potential methods of implementation have been explored. Though an appropriate set of equations has not yet been identified, further research in this area will benefit from a clear depiction of the next important steps and how they can be accomplished.
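For context, Carson's earth-return integrals are often replaced at power frequencies by the closed-form complex-depth (Dubanton/Deri) approximation sketched below; this is a stand-in illustration with assumed line geometry and earth resistivity, not the improved high-frequency formulation this thesis calls for, and its accuracy likewise degrades at BPL frequencies.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def complex_depth(freq_hz, rho_earth):
    """Dubanton/Deri complex penetration depth of the earth return path (m)."""
    return np.sqrt(rho_earth / (1j * 2 * np.pi * freq_hz * MU0))

def z_self(freq_hz, h, r_cond, rho_earth):
    """External self impedance (ohm/m) of a conductor of radius r_cond at height h (m)."""
    w = 2 * np.pi * freq_hz
    p = complex_depth(freq_hz, rho_earth)
    return 1j * w * MU0 / (2 * np.pi) * np.log(2 * (h + p) / r_cond)

def z_mutual(freq_hz, h_i, h_j, x_ij, rho_earth):
    """Mutual impedance (ohm/m) between conductors i and j separated horizontally by x_ij (m)."""
    w = 2 * np.pi * freq_hz
    p = complex_depth(freq_hz, rho_earth)
    num = np.sqrt((h_i + h_j + 2 * p) ** 2 + x_ij ** 2)
    den = np.sqrt((h_i - h_j) ** 2 + x_ij ** 2)
    return 1j * w * MU0 / (2 * np.pi) * np.log(num / den)

# Illustrative 60 Hz values for conductors above 100 ohm-m earth
# (conductor internal impedance neglected).
print(z_self(60.0, h=15.0, r_cond=0.015, rho_earth=100.0))
print(z_mutual(60.0, h_i=15.0, h_j=12.0, x_ij=4.0, rho_earth=100.0))
```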
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
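To make the metric concrete, a minimal reuse-distance computation over an address trace is sketched below (a straightforward LRU-stack formulation, not the hardware-sampling approach proposed in the report).

```python
def reuse_distances(trace):
    """For each access, return the number of distinct addresses touched since
    the previous access to the same address (None for a first/cold access).
    Simple LRU-stack formulation; O(N*M) and meant only to illustrate the metric.
    """
    stack = []          # most recently used address first
    distances = []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)    # distinct addresses seen since last use
            stack.pop(d)
        else:
            d = None                 # cold (infinite) reuse distance
        stack.insert(0, addr)
        distances.append(d)
    return distances

# Illustrative trace of cache-line addresses
print(reuse_distances([0x10, 0x20, 0x30, 0x10, 0x20, 0x10]))
# -> [None, None, None, 2, 2, 1]
```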
Abstract:
The push for improved fuel economy and reduced emissions has led to great achievements in engine performance and control. These achievements have increased the efficiency and power density of gasoline engines dramatically in the last two decades. With the added power density, thermal management of the engine has become increasingly important. Therefore, it is critical to have accurate temperature and heat transfer models, as well as data to validate them. With the recent adoption of the 2025 Corporate Average Fuel Economy (CAFE) standard, there has been a push to improve the thermal efficiency of internal combustion engines even further. Lean and dilute combustion regimes, along with waste heat recovery systems, are being explored as options for improving efficiency. In order to understand how these technologies will impact engine performance and each other, this research sought to analyze the engine both from a 1st law energy balance perspective and from a 2nd law exergy perspective. This research also provided insights into the effects of various parameters on in-cylinder temperatures and heat transfer, as well as providing data for validation of other models. It was found that the engine load was the dominant factor for the energy distribution, with higher loads resulting in lower coolant heat transfer and higher brake work and exhaust energy. From an exergy perspective, the exhaust system provided the best waste heat recovery potential due to its significantly higher temperatures compared to the cooling circuit. EGR and lean combustion both resulted in lower combustion chamber and exhaust temperatures; however, in most cases the increased flow rates resulted in a net increase in the energy in the exhaust. The exhaust exergy, on the other hand, was either increased or decreased depending on the location in the exhaust system and the other operating conditions. The effects of dilution from lean operation and EGR were compared using a dilution ratio, and the results showed that lean operation resulted in a larger increase in efficiency than the same amount of dilution with EGR. Finally, a method for identifying fuel spray impingement from piston surface temperature measurements was found. Note: The material contained in this section is planned for submission as part of a journal article and/or conference paper in the future.
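The first-law/second-law comparison above rests on the flow exergy of each stream; a minimal ideal-gas sketch (constant specific heats and illustrative exhaust conditions, not the thesis's engine model) shows why exhaust temperature matters more for exergy than for energy.

```python
import math

def flow_exergy(T, p, T0=298.15, p0=101.325, cp=1.10, R=0.287):
    """Specific flow exergy (kJ/kg) of an ideal-gas stream at T (K) and p (kPa),
    relative to the dead state (T0, p0); cp and R in kJ/(kg*K).
    Kinetic, potential and chemical exergy terms are neglected.
    """
    dh = cp * (T - T0)                                   # enthalpy difference
    ds = cp * math.log(T / T0) - R * math.log(p / p0)    # entropy difference
    return dh - T0 * ds

# Illustrative effect of dilution-driven exhaust temperature drop (hypothetical temperatures)
for T_exh in (900.0, 750.0):
    ex = flow_exergy(T=T_exh, p=105.0)
    en = 1.10 * (T_exh - 298.15)          # sensible energy relative to the dead state
    print(f"T = {T_exh:.0f} K: energy {en:.0f} kJ/kg, exergy {ex:.0f} kJ/kg ({ex/en:.0%})")
```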
Abstract:
This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility, (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform using DCS, and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%) and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were: xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%) and galactose (0.4%). Based on the measured physical and chemical components, the biochemical conversion route and subsequent fermentation to value-added products was identified as promising. DCS has potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were analyzed to determine the optimal conditions to maximize monomer sugar yields while keeping inhibitors at a minimum. Total fermentable sugars produced can reach approximately 86% of theoretical yield when subjected to dilute acid pretreatment (DAP). DAP followed by subsequent enzymatic hydrolysis was most effective for 0 wt% acid hydrolysate samples and least efficient for 1 and 2 wt% acid hydrolysate samples. The best hydrolysis scheme for DCS from an industry's point of view is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration. The combined effects of hydrolysis reaction time, temperature and enzyme-to-substrate ratio were studied to develop a hydrolysis process that optimizes the production of amino acids from DCS. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first hydrolysis pathway is amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus) and Protex 6L]. The third hydrolysis pathway investigated a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on the DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24 hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research were: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS, (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility, and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design was constructed. Finally, Aspen Plus Economic Analyzer® and SimaPro® software were employed to conduct the cost analysis and the carbon footprint analysis of this process facility, respectively. Another section of my Ph.D. research focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S.
Greenhouse gas (GHG) emissions analysis was conducted for cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction. The final section of my Ph.D. research work was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.
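The footprint calculation described above amounts to multiplying each input per kilogram of dry feed by an emission factor and summing; the input quantities and factors below are hypothetical placeholders, not the study's inventory data.

```python
# Hypothetical cradle-to-gate inventory for 1 kg of dry feed (illustrative values only)
inputs_per_kg_feed = {            # quantity of each input per kg of dry feed
    "diesel_l":           0.010,
    "n_fertilizer_kg":    0.015,
    "electricity_kwh":    0.050,
    "transport_tkm":      0.200,
}
emission_factors = {              # gCO2e per unit of input (placeholder factors)
    "diesel_l":           2700.0,
    "n_fertilizer_kg":    6000.0,
    "electricity_kwh":     500.0,
    "transport_tkm":       100.0,
}

footprint_g_co2e = sum(qty * emission_factors[name]
                       for name, qty in inputs_per_kg_feed.items())
print(f"~{footprint_g_co2e:.0f} gCO2e per kg of dry feed (illustrative)")
```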
Abstract:
Creating Lakes from Open Pit Mines: Processes and Considerations, Emphasis on Northern Environments. This document summarizes the literature on mining pit lakes (through 2007), with a particular focus on issues that are likely to be of special relevance to the creation and management of pit lakes in northern climates. Pit lakes are simply waterbodies formed when the open pit left at the completion of mining operations is filled with water. Separate sections deal with different aspects of pit lakes, including their morphometry, geology, hydrogeology, geochemistry, and biology; like natural lakes, mining pit lakes display a huge diversity in each of these subject areas. However, pit lakes are young and therefore are typically in a non-equilibrium state with respect to their rate of filling, water quality, and biology. Depending on the type and location of the mine, there may be opportunities to enhance the recreational or ecological benefits of a given pit lake, for example by re-landscaping and re-vegetating the shoreline, by adding engineered habitat for aquatic life, and by maintaining water quality. The creation of a pit lake may be a regulatory requirement to mitigate environmental impacts from mining operations, and/or be included as part of a closure and reclamation plan. Based on published case studies of pit lakes, large-scale bio-engineering projects have had mixed success. A common consensus is that manipulation of pit lake chemistry is difficult, expensive, and takes many years to achieve remediation goals. For this reason, it is prudent to take steps throughout mine operation to reduce the likelihood of future water quality problems upon closure. It also makes sense to engineer the lake in such a way that it will achieve its maximal end-use potential, whether that be permanent and safe storage of mine waste, habitat for aquatic life, recreation, or water supply.
Abstract:
Social work at global levels, and across international and intercultural divides, is probably more important now than ever before in our history. It may be that the very form our ideas about intercultural work take needs to be re-examined in the light of recent global changes and uncertainties. In this short position paper I wish to offer some considerations about how we might approach the field of intercultural social work in order to gain new insights about how we practise at both local and global levels. For me, much of the promise of an intercultural social work (and for the purposes of this paper I see aspects of international social work in much the same light) lies in its focus on the way we categorise ourselves, our ideas and experiences in relation to others. The very notion of intercultural or international social work is based on assumptions about boundaries, differences, ways of differentiating and defining sets of experiences. Whether these are deemed "cultural" or "national" is of less importance. Once we are forced to examine these assumptions, about how and why we categorise ourselves in relation to other people in particular ways, the way is opened up for us to be much more critical about the bases of our own, often very deep-seated, thinking. This understanding, about how and why notions of "difference" operate in the way they do, can potentially open our understanding to all the other ways, besides cultural or national labelling, in which we categorise and create differences between ourselves and others. Intercultural social work, taken as a potential site for understanding the creation of difference then, has the potential to help us critically examine the bases of much of our practice in any setting, since most practice involves some kind of categorisation of phenomena.
Abstract:
Today, Digital Systems and Services for Technology Supported Learning and Education are recognized as the key drivers to transform the way that individuals, groups and organizations “learn” and the way to “assess learning” in the 21st Century. These transformations influence: Objectives, moving from acquiring new “knowledge” to developing new and relevant “competences”; Methods, moving from “classroom”-based teaching to “context-aware” personalized learning; and Assessment, moving from “life-long” degrees and certifications to “on-demand” and “in-context” accreditation of qualifications. Within this context, promoting Open Access to Formal and Informal Learning is currently a key issue in the public discourse and the global dialogue on Education, including Massive Open Online Courses (MOOCs) and Flipped School Classrooms. This volume on Digital Systems for Open Access to Formal and Informal Learning contributes to the international dialogue between researchers, technologists, practitioners and policy makers in Technology Supported Education and Learning. It addresses emerging issues related to both theory and practice, as well as methods and technologies that can support Open Access to Formal and Informal Learning. The twenty chapters, contributed by international experts who are actively shaping the future of Educational Technology around the world, cover topics such as:
- The evolution of University Open Courses in Transforming Learning
- Supporting Open Access to Teaching and Learning of People with Disabilities
- Assessing Student Learning in Online Courses
- Digital Game-based Learning for School Education
- Open Access to Virtual and Remote Labs for STEM Education
- Teachers’ and Schools’ ICT Competence Profiling
- Web-Based Education and Innovative Leadership in a K-12 International School Setting
An in-depth blueprint of the promise, potential, and imminent future of the field, Digital Systems for Open Access to Formal and Informal Learning is necessary reading for researchers and practitioners, as well as undergraduate and postgraduate students, in educational technology.