933 results for point of view
Abstract:
This study describes the sociolinguistic situation of the indigenous Hungarian national minorities in Slovakia (c. 600,000), Ukraine (c. 180,000), Romania (c. 2,000,000), Yugoslavia (c. 300,000), Slovenia (c. 8,000) and Austria (c. 6,000). Following the guidelines of Hans Goebl et al., the historical sociolinguistic portrait of each minority is presented from 1920 through to the mid-1990s. Each country's report includes sections on geography and demography, history, politics, economy, culture and religion, language policy and planning, and language use (domains of minority and/or majority language use, proficiency, attitudes, etc.). The team's findings were presented in the form of 374 pages of manuscripts, articles and tables, written in Hungarian and English. The core of the team's research lies in the results of an empirical survey designed to study the social characteristics of Hungarian-minority bilingualism in the six project countries, and the linguistic similarities and differences between the six contact varieties of Hungarian and Hungarian in Hungary. The respondents were divided by age, education, and settlement group (city vs. village and local majority vs. local minority). The first observation is that Hungarian tends to be spoken less to children than to parents and grandparents, a familiar pattern of language shift. In contact varieties of Hungarian, analytic constructions may be used where monolingual Hungarians would use a more synthetic form. Mr. Kontra gives as an example the compound tagdíj, which in Standard Hungarian means "membership fee" but which is replaced in contact Hungarian by the two-word phrase tagsági díj. A similar example contrasts the synthetic verb hegedült "played the violin" with the analytic expression hegedűn játszott.
The contrast is especially striking between the Hungarians in the northern Slavic countries, who use the synthetic form frequently, and those in the southern Slavic countries, who mainly use the analytic form. Mr. Kontra notes that from a structural point of view there is no immediate explanation for this, since Slovak or Ukrainian is as likely to cause interference as Serbian. He postulates instead that the difference may be attributable to some sociohistorical cause, and points out that the Turkish occupation of what is today Voivodina caused a discontinuity of the Hungarian presence in the region, with the result that Hungarians were resettled in the area only two and a half centuries ago. The Hungarians in today's Slovakia and Ukraine, by contrast, have lived together with Slavic peoples continuously for over a millennium. It may be, he suggests, that 250 years of interethnic coexistence is less than is needed for such a contact-induced change to run its course. Next Mr. Kontra moved on to what he terms "mental maps and morphology". In Hungarian, the names of cities and villages take the surface case (e.g. Budapest-en "in Budapest"), whereas some names denoting Hungarian settlements and all names of foreign cities take the interior case (e.g. Tihany-ban "in Tihany" and Boston-ban "in Boston"). The role of the semantic feature "foreign" in suffix choice can be illustrated by such minimal pairs as Velence-n "in Velence, a village in Hungary" versus Velence-ben "in Velence [= Venice], a city in Italy", and Pécs-en "in Pécs, a city in Hungary" vs. Bécs-ben "in Bécs, i.e. Vienna". This Hungarian vs. foreign distinction is often interpreted as "belonging to historical (pre-1920) Hungary" vs. "outside historical Hungary". The distinction is also expressed in the dichotomy "home" vs. "abroad". The 1920 border changes have had an impact on both majority and minority Hungarians' mental maps, the maps which govern the choice of surface vs. interior cases with placenames.
As the mental maps of majority and minority Hungarians diverge, so will their use of the placename suffixes. Two placenames were chosen to scratch the surface of this complex problem: Craiova (a city in Oltenia, Romania) and Kosovo (Hungarian Koszovó), an autonomous region in southeast Yugoslavia. The assumption to be tested was that both placenames would be used with the inessive (interior) suffix categorically by Hungarians in Hungary, but that the superessive suffix (signalling "home") would be used near-categorically by Hungarians in Romania and Yugoslavia (Voivodina). Minority Hungarians in countries other than Romania and Yugoslavia would show no difference from majority Hungarians in Hungary. In fact, the data show that, contrary to expectation, there is considerable variation within Hungary. And although Koszovó is used, as expected, with the "home" suffix by 61% of the informants in Yugoslavia, the same suffix is used by an even higher percentage of the subjects in Slovenia. Mr. Kontra's team suggests that one factor playing a role in this might be the persistence of the former Yugoslav mentality among the Hungarians of Slovenia, at least from the geographical point of view. The contact varieties of Hungarian show important grammatical differences from Hungarian in Hungary. One of these concerns the variable use of null subjects (the inclusion or omission of an overt subject of the verb). When informants were asked to insert either megkértem or megkértem őt ("I asked her") into a test sentence, 54.9% of the respondents in Ukraine inserted the second, overt-pronoun phrase, as opposed to only 27.4% in Hungary. Although Mr. Kontra and his team concentrated more on the differences between Contact Hungarian and Standard Hungarian, they also discovered a number of similarities. One such similarity is demonstrable in the distribution of what Mr. Kontra calls an ongoing syntactic merger in Hungarian in Hungary.
This change effectively merges two possibilities into a third. For instance, the two sentences Valószínűleg külföldre fognak költözni and Valószínű, hogy külföldre fognak költözni merge to form the new construction Valószínűleg, hogy külföldre fognak költözni ("They will probably move abroad."). When asked to choose "the most natural" of the sentences, one in four chose the new construction, and a chi-square test shows homogeneity in the sample. In other words, this syntactic change is spreading across the entire Hungarian-speaking region in the Carpathian Basin. Mr. Kontra believes that politicians, educators, and other interested parties now have reliable and up-to-date information about each Hungarian minority. An awareness of Hungarian as a pluricentric language is being developed which elevates the status of the contact varieties of Hungarian used by the minorities, an essential process, he believes, if minority languages are to be maintained.
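The homogeneity claim rests on a chi-square test of whether the proportion of respondents choosing the merged construction differs across subsamples. A minimal sketch of that test follows, using invented counts (the project's actual data are not reproduced here) and assuming a simple 2×k homogeneity design:

```python
def chi2_homogeneity(successes, totals):
    """Pearson chi-square test of homogeneity for k independent proportions.

    successes[i] is the number of respondents in group i choosing the merged
    construction; totals[i] is the size of group i.
    Returns (chi-square statistic, degrees of freedom)."""
    k = len(successes)
    pooled = sum(successes) / sum(totals)  # proportion under homogeneity
    chi2 = 0.0
    for s, n in zip(successes, totals):
        for observed, expected in ((s, n * pooled), (n - s, n * (1 - pooled))):
            chi2 += (observed - expected) ** 2 / expected
    return chi2, k - 1

# Hypothetical counts: roughly one in four respondents per country chose the
# merged "Valószínűleg, hogy..." variant (groups of 100 each).
stat, dof = chi2_homogeneity([25, 27, 24, 26], [100, 100, 100, 100])
print(f"chi2 = {stat:.2f} on {dof} df")
# A statistic well below the 5% critical value (7.81 for 3 df) is consistent
# with homogeneity: the merger is spreading uniformly across the sample.
```

With near-equal proportions, as here, the statistic stays far below the critical value, matching the "spreading everywhere" interpretation in the abstract.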
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech, which could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to obtain a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but this is generally not feasible for a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable number of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar (part of task 2) was the main task of the project.
The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language can ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method takes an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localising and identifying syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological point. Experience from previous projects showed that building a grammar as one huge block of metarules is more complicated than the incremental method, which starts with metarules covering the most common syntactic phenomena and adds less important ones later; this is especially true from the point of view of testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all word classes. During the creation of the dictionary it turned out that assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development.
The consistency of new and modified rules with the rules already in the formal grammar is a crucial problem for every project aiming at the development of a large-scale formal grammar of a natural language. The method developed here detects any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of any Slavic language. Since Slavic languages share a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems in other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can thus be applied to other Slavic languages without substantial changes.
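The test-bed consistency check described above amounts to a regression test: after every grammar change, every test-bed sentence is reparsed and any sentence whose accept/reject status no longer matches expectation is flagged. A minimal sketch (not Mr. Kubon's actual system; the toy grammar and sentences are invented):

```python
def check_grammar(parse, test_bed):
    """Return all regressions of a grammar against its test-bed.

    parse: function mapping a sentence to True (parsed) / False (rejected).
    test_bed: list of (sentence, should_parse) pairs covering all syntactic
    phenomena the grammar claims to handle."""
    return [(sentence, expected)
            for sentence, expected in test_bed
            if parse(sentence) != expected]

# Toy stand-in grammar: accepts any sentence that ends in a period.
toy_parse = lambda s: s.endswith(".")
test_bed = [("Petr čte knihu.", True),   # grammatical sentence
            ("Petr knihu", False)]       # ungrammatical fragment
assert check_grammar(toy_parse, test_bed) == []  # no regressions
```

Run after each incremental addition of metarules, the check guarantees that newly covered phenomena never silently break previously covered ones.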
Abstract:
OBJECTIVE: To generate anatomical data on the human middle ear and adjacent structures to serve as a basis for the development and optimization of new implantable hearing aid transducers. Implantable middle ear hearing aid transducers, i.e. the equivalent of the loudspeaker in conventional hearing aids, should ideally fit into the majority of adult middle ears and should use the limited space optimally to achieve sufficiently high maximal output levels. For several designs, more anatomical data are needed. METHODS: Twenty temporal bones of 10 formalin-fixed adult human heads were scanned by a computed tomography (CT) system using a slice thickness of 0.63 mm. Twelve landmarks were defined and 24 different distances were calculated for each temporal bone. RESULTS: A statistical description is presented of 24 distances in the adult human middle ear which may limit or influence the design of middle ear transducers. Significant inter-individual differences, but no significant differences for gender, side, age or degree of pneumatization of the mastoid, were found. Distances that had already been analyzed in earlier studies were found to be in good agreement with those results. CONCLUSION: A data set has been generated that describes the adult human middle ear anatomy quantitatively from the point of view of designers of new implantable hearing aid transducers. In principle, the method employed in this study, using standard CT scans, could also be used preoperatively to rule out exclusion criteria.
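The measurement step behind the statistical description is straightforward: landmark coordinates are read off the CT volume, pairwise Euclidean distances are computed per temporal bone, and the distances are summarized across bones. A sketch with invented coordinates (not study data; the landmark pair is hypothetical):

```python
import math
import statistics

def distance(p, q):
    """Euclidean distance between two 3-D landmark coordinates, in mm."""
    return math.dist(p, q)

# One hypothetical distance (between two of the 12 landmarks), measured in
# four different temporal bones; the z offset mimics the 0.63 mm slice grid.
per_bone = [distance((0.0, 0.0, 0.0), (3.1, 1.2, 0.63 * k)) for k in range(4)]
print(f"mean = {statistics.mean(per_bone):.2f} mm, "
      f"sd = {statistics.stdev(per_bone):.2f} mm")
```

Repeating this for all 24 landmark pairs over the 20 bones yields exactly the kind of mean/spread table the abstract describes, which designers can then use as envelope constraints.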
Abstract:
Point-of-care testing (POCT) remains under scrutiny by healthcare professionals because of its short and largely untested history. POCT methods are being developed by a few major equipment companies on the back of rapid progress in informatics and nanotechnology. Issues such as POCT quality control, comparability with standard laboratory procedures, standardisation, traceability and round-robin testing are being left to hospitals. As a result, the clinical and operational benefits of POCT were first evident for patients on the operating table. For the management of cardiovascular surgery patients, POCT technology is an indispensable aid. Improvement of the technology has meant that clinical laboratory pathologists now recognise the need for POCT beyond their high-throughput areas.
Abstract:
An Internet survey demonstrated the existence of problems related to intraoperative tracking camera set-up and alignment. It is hypothesized that these problems are a result of the limited field of view of today's optoelectronic camera systems, which is usually insufficiently large to keep the entire site of surgical action in view during an intervention. A method is proposed to augment a camera's field of view by actively controlling camera orientation, enabling it to track instruments as they are used intraoperatively. In an experimental study, an increase of almost 300% was found in the effective volume in which instruments could be tracked.
Multicentre evaluation of a new point-of-care test for the determination of NT-proBNP in whole blood
Abstract:
BACKGROUND: The Roche CARDIAC proBNP point-of-care (POC) test is the first test intended for the quantitative determination of N-terminal pro-brain natriuretic peptide (NT-proBNP) in whole blood as an aid in the diagnosis of suspected congestive heart failure, in the monitoring of patients with compensated left-ventricular dysfunction and in the risk stratification of patients with acute coronary syndromes. METHODS: A multicentre evaluation was carried out to assess the analytical performance of the POC NT-proBNP test at seven different sites. RESULTS: The majority of the coefficients of variation (CVs) obtained for within-series imprecision using native blood samples were below 10%, both for 52 samples measured ten times and for 674 samples measured in duplicate. Using quality control material, the majority of CV values for day-to-day imprecision were below 14% for the low control level and below 13% for the high control level. In method comparisons of four lots of the POC NT-proBNP test with the laboratory reference method (Elecsys proBNP), the slope ranged from 0.93 to 1.10 and the intercept ranged from 1.8 to 6.9. The bias found between venous and arterial blood with the POC NT-proBNP method was ≤5%. All four lots of the POC NT-proBNP test investigated showed excellent agreement, with mean differences of between -5% and +4%. No significant interference was observed with lipaemic blood (triglyceride concentrations up to 6.3 mmol/L), icteric blood (bilirubin concentrations up to 582 µmol/L), haemolytic blood (haemoglobin concentrations up to 62 mg/L), biotin (up to 10 mg/L), rheumatoid factor (up to 42 IU/mL), or with 50 out of 52 standard or cardiological drugs in therapeutic concentrations. With bisoprolol and BNP, somewhat higher bias was found in the low NT-proBNP concentration range (<175 ng/L). Haematocrit values between 28% and 58% had no influence on the test result. Interference may be caused by human anti-mouse antibodies (HAMA) types 1 and 2.
No significant influence on POC NT-proBNP results was found for sample volumes of 140-165 µL. High NT-proBNP concentrations above the measuring range of the POC NT-proBNP test did not lead to falsely low results due to a potential high-dose hook effect. CONCLUSIONS: The POC NT-proBNP test showed good analytical performance and excellent agreement with the laboratory method. The POC NT-proBNP assay is therefore suitable for use in the POC setting.
Abstract:
Bioequivalence trials are abbreviated clinical trials in which a generic drug or new formulation is evaluated to determine whether it is "equivalent" to a corresponding previously approved brand-name drug or formulation. In this manuscript, we survey the process of testing bioequivalence and advocate the likelihood paradigm for representing the resulting data as evidence. We emphasize the unique conflicts between hypothesis testing and confidence intervals in this area, which we believe are indicative of systemic defects in the frequentist approach that the likelihood paradigm avoids. We suggest the direct use of profile likelihoods for evaluating bioequivalence and examine the main properties of profile likelihoods and estimated likelihoods under simulation. This simulation study shows that profile likelihoods are a reasonable alternative to the (unknown) true likelihood for a range of parameters commensurate with bioequivalence research. Our study also shows that the standard methods in the current practice of bioequivalence trials offer only weak evidence from the evidential point of view.
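For context on the standard frequentist criterion the authors critique: average bioequivalence is usually declared when the 90% confidence interval for the geometric mean test/reference ratio falls entirely within [0.80, 1.25]. A hedged sketch with invented data follows; the t-quantile is hard-coded for this example's degrees of freedom rather than looked up from a statistics library:

```python
import math
import statistics

def mean_ratio_ci(log_ratios, t_crit):
    """90% CI for the geometric mean test/reference ratio, computed from
    paired log-scale differences ln(T/R), one per subject."""
    n = len(log_ratios)
    m = statistics.mean(log_ratios)
    se = statistics.stdev(log_ratios) / math.sqrt(n)
    return math.exp(m - t_crit * se), math.exp(m + t_crit * se)

# Hypothetical crossover data: ln(test AUC / reference AUC) for 8 subjects.
log_ratios = [0.05, -0.02, 0.08, 0.01, -0.04, 0.06, 0.00, 0.03]
low, high = mean_ratio_ci(log_ratios, t_crit=1.895)  # t_{0.95}, 7 df
bioequivalent = 0.80 < low and high < 1.25
print(f"90% CI: ({low:.3f}, {high:.3f}); bioequivalent: {bioequivalent}")
```

The paper's point is that passing or failing this interval check is a blunt summary; a profile likelihood for the same ratio parameter would display the strength of evidence directly rather than reducing it to a binary verdict.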
Abstract:
BACKGROUND AND OBJECTIVES: Thoracic epidural analgesia (TEA) is increasingly used for perioperative analgesia. If patients with TEA develop sepsis or a systemic inflammatory response after extended surgery, the question arises whether it is safe to continue TEA, with its beneficial effects of improving gastrointestinal perfusion and augmenting tissue oxygenation. A major concern in this regard is hemodynamic instability that might ensue from TEA-induced vasodilation. The objective of the present study was to assess the effects of TEA on systemic and pulmonary hemodynamics in a sepsis model of hyperdynamic endotoxemia. METHODS: After a baseline measurement in healthy sheep (n = 14), Salmonella typhosa endotoxin was continuously infused at a rate of 10 ng·kg⁻¹·min⁻¹ over 16 hours. The surviving animals (n = 12) were then randomly assigned to 1 of 2 study groups. In the treatment group (n = 6), continuous TEA was initiated with 0.1 mL·kg⁻¹ bupivacaine 0.125% and maintained with 0.1 mL·kg⁻¹·h⁻¹. In the control group (n = 6), the same amount of isotonic saline solution was injected at the same rate through the epidural catheter. RESULTS: In both experimental groups cardiac index increased and systemic vascular resistance decreased concurrently (each P < .05). Functional epidural blockade in the TEA group was confirmed by sustained suppression of the cutaneous (or panniculus) reflex. During the observation period of 6 hours neither systemic nor pulmonary circulatory variables were impaired by TEA. CONCLUSIONS: From a hemodynamic point of view, TEA appears to be a safe treatment option in sepsis or systemic inflammatory response syndrome.
Abstract:
For countless communities around the world, acquiring access to safe drinking water is a daily challenge which many organizations endeavor to meet. The villages in the interior of Suriname have been the focus of many improved drinking water projects, as most communities are without year-round access. Unfortunately, as many as 75% of the systems in Suriname fail within several years of implementation. These communities, scattered along the rivers and throughout the jungle, lack many of the resources required to sustain a centralized water treatment system. However, the centralized system in the village of Bendekonde on the Upper Suriname River has been operational for over 10 years and is often held up as a model by other communities, even though its technology does not differ significantly from that of other, failed systems. Many of the water systems in the interior fail because communities lack the resources to maintain them, and as a system becomes more complex, its demand for resources typically grows. Alternatives to centralized systems include technologies such as point-of-use water filters, which can greatly reduce the need for outside resources. In particular, ceramic point-of-use water filters offer a technology that can be reasonably managed in a low-resource setting such as the interior of Suriname. This report investigates the appropriateness and effectiveness of ceramic filters constructed with local Suriname clay and compares their treatment effectiveness to that of the Bendekonde system. Results of this study showed that functional filters could be produced from Surinamese clay and that they were more effective, in a controlled laboratory setting, than the field performance of the Bendekonde system at removing total coliform. However, the Bendekonde system was more successful at removing E. coli.
In a life-cycle assessment, ceramic water filters manufactured in Suriname and used in homes for a lifespan of 2 years were shown to have lower cumulative energy demand, as well as lower global warming potential than a centralized system similar to that used in Bendekonde.
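The removal-effectiveness comparison above is conventionally expressed as a log10 reduction value (LRV), computed from influent and effluent indicator counts. A small sketch with invented counts (CFU/100 mL, not the study's measurements):

```python
import math

def log_reduction(influent_cfu, effluent_cfu):
    """Log10 reduction value: each whole unit is a further 10x removal
    (LRV 2 = 99% removal, LRV 3 = 99.9%, and so on)."""
    return math.log10(influent_cfu / effluent_cfu)

# Hypothetical total coliform counts before and after a ceramic filter:
print(f"LRV = {log_reduction(10_000, 10):.1f}")  # 3.0 -> 99.9% removal
```

Computing LRVs separately for total coliform and E. coli makes the abstract's split result easy to state: the filters achieved the higher LRV for total coliform, the Bendekonde system for E. coli.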
Abstract:
From the point of view of customer satisfaction, the sound quality of any product has become an important factor these days. The primary objective of this research is to determine the factors which affect the acceptability of impulse noise. Though the analysis is based on a sample impulse sound file from a commercial printer, the results can be applied to other, similar impulsive noises. It is assumed that impulsive noise can be tuned to meet acceptability criteria; it is therefore necessary to find the most significant factors that can be controlled physically. This analysis is based on a single impulse. A sample impulsive sound file is tweaked for different amplitudes, background noise, attack time, release time and spectral content. A two-level factorial design of experiments (DOE) is applied to study the significant effects and interactions. For each impulse file modified as per the DOE, the magnitude of perceived annoyance is calculated from an objective metric developed recently at Michigan Technological University. This metric is based on psychoacoustic criteria such as loudness, sharpness, roughness and loudness-based impulsiveness. Software called ArtemiS V11.2, developed by HEAD Acoustics, is used to calculate these psychoacoustic terms. As a result of the two-level factorial analyses, a new objective model of perceived annoyance is developed in terms of the above-mentioned physical parameters: amplitude, background noise, impulse attack time, impulse release time and spectral content. The effects of the significant individual factors, as well as their two-way interactions, are also studied. The results show that all five factors significantly affect the annoyance level of an impulsive sound; the annoyance level can therefore be brought within the criteria by optimizing these factor levels. An additional analysis is done to study the effect of these five significant parameters on the individual psychoacoustic metrics.
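In a two-level factorial DOE, each factor's main effect is simply the average response at its high level minus the average at its low level. A sketch with only two of the five factors and invented annoyance responses (not values from the MTU metric):

```python
from itertools import product

def main_effect(levels, responses, factor):
    """Main effect of one factor in a two-level factorial design:
    mean response at the +1 level minus mean response at the -1 level."""
    high = [r for lv, r in zip(levels, responses) if lv[factor] == +1]
    low = [r for lv, r in zip(levels, responses) if lv[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Full 2^2 design over (amplitude, background noise), coded -1 / +1:
runs = list(product([-1, +1], repeat=2))
annoyance = [2.0, 3.0, 5.0, 7.0]  # hypothetical response for each run
print(f"amplitude effect: {main_effect(runs, annoyance, 0):+.1f}")
print(f"background effect: {main_effect(runs, annoyance, 1):+.1f}")
```

The same contrast logic, applied to products of factor columns, yields the two-way interaction effects the study examines; significance is then judged against the run-to-run noise.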
Abstract:
Ensuring water is safe at source and at point-of-use is important in areas of the world where drinking water is collected from communal supplies. This report describes a study in rural Mali to test an assumption common among development organizations: that drinking water will remain safe at point-of-use if collected from a safe (improved) source. Water was collected from ten sources (borehole wells with hand pumps, and hand-dug wells) and from forty-five households using water from each source type. Water quality was evaluated seasonally (quarterly) for levels of total coliform, E. coli, and turbidity. Microbial testing was done using the 3M Petrifilm™ method; turbidity testing was done using a turbidity tube. Microbial results were analyzed using statistical tests including Kruskal-Wallis, Mann-Whitney, and analysis of variance. Results show that water from hand pumps did not contain total coliform or E. coli and had turbidity under 5 NTU, whereas water from dug wells had high levels of bacteria and turbidity. However, water at point-of-use (household) from hand pumps showed microbial contamination, at times indistinguishable from that of households using dug wells, indicating a decline in water quality from source to point-of-use. Chemical treatment at point-of-use is suggested as an appropriate solution for eliminating post-source contamination. Additionally, it is recommended that future work be done to modify existing water development strategies to consider water quality at point-of-use.
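Non-parametric tests like Mann-Whitney suit microbial counts, which are skewed and full of zeros. A sketch of the U statistic behind the source-vs-household comparison, computed directly from its pair-counting definition (invented E. coli counts; a real analysis would use a statistics package with a tie-corrected p-value):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a: the number of pairs (a_i, b_j)
    with a_i > b_j, counting ties as one half."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Hypothetical E. coli counts (CFU/100 mL):
source = [0, 0, 0, 1, 0]          # hand-pump source samples
household = [12, 0, 35, 8, 150]   # point-of-use samples from the same source
u = mann_whitney_u(household, source)
print(f"U = {u} of {len(source) * len(household)} possible pairs")
```

A U close to the maximum number of pairs, as here, indicates household counts stochastically exceed source counts, which is the pattern the study reports for hand-pump water at point-of-use.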
Abstract:
How do prevailing narratives about Native Americans, particularly in the medium of film, conspire to promote the perspective of the dominant culture? What makes the appropriation of Indigenous images so metaphorically popular? In the past hundred years, little has changed in the forms of representation favored by Hollywood. The introductory chapter elucidates the problem and outlines the scope of this study. As each subsequent chapter makes clear, the problem is as relevant today as it has been throughout the entire course of filmic history. Chapter Two analyzes representational trends and defines each decade according to its favorite stereotype. The binary of the bloodthirsty savage is just as prevalent as it was during the 1920s and 30s. The same holds true for the drunken scapegoat and the exotic maiden, which made their cinematic debuts in the 1940s and 50s. But Hollywood has added new types as well: the visionary peacemaker and the environmental activist have also made an appearance within the last forty years. What matters most is not the realism of these images, but rather the purposes to which they can be put toward validating whatever concerns the majority filmmakers wish to promote. Whether naïvely or not, such representations continue to evacuate Indigenous agency to the advantage of the majority. A brief historical overview confirms this legacy. Various disciplines have sought to interrogate this problem. Chapter Three investigates the field of postcolonial studies, which inquires into the various ways these narratives are produced, marketed, and consumed. It also raises the key questions of for whom, and by whom, these narratives are constructed. Additional consideration is given to their value as commodities in the mass marketplace. Typically the products of a boutique multiculturalism, their storylines are apt to promote the prevailing point of view. Critical theory provides a foundational framework for Chapter Four.
What is the blockbuster formula, and how do the instruments of capital promote it? Concepts such as the culture industry and repressive tolerance illuminate both the function and form of the master narrative, as well as its use to control the avenues of dissent. Moreover, the diminishment of the public sphere highlights the challenges inherent in the widespread promotion of an alternative set of narratives. Nonetheless, challenges to prevailing narratives do exist, particularly in the form of Trickster narratives. Often subject to persistent misrecognition, the Trickster demonstrates a potent form of agency that undeniably dismantles the hegemony of Western cinema. The final chapter examines some of the Trickster's more subtle and obscure productions. Usually relegated to the realm of the mystical, rather than the mythical, these misinterpreted forms have the power to speak in circles around a majority audience. Intended for an Other audience, they are coded in a language that delivers a type of direction through indirection, promoting a poignant agency all their own.
Abstract:
This Ph.D. research comprises three major components: (i) a characterization study to analyze the composition of defatted corn syrup (DCS) from a dry corn mill facility; (ii) hydrolysis experiments to optimize the production of fermentable sugars and an amino acid platform from DCS; and (iii) sustainability analyses. Analyses of DCS included total solids, ash content, total protein, amino acids, inorganic elements, starch, total carbohydrates, lignin, organic acids, glycerol, and the presence of functional groups. Total solids content was 37.4% (± 0.4%) by weight, and the mass balance closure was 101%. Total carbohydrates [27% (± 5%) wt.] comprised starch (5.6%), soluble monomer carbohydrates (12%) and non-starch carbohydrates (10%). Hemicellulose components (structural and non-structural) were: xylan (6%), xylose (1%), mannan (1%), mannose (0.4%), arabinan (1%), arabinose (0.4%), galactan (3%) and galactose (0.4%). Based on the measured physical and chemical components, a biochemical conversion route with subsequent fermentation to value-added products was identified as promising: DCS has the potential to serve as an important fermentation feedstock for bio-based chemicals production. In the sugar hydrolysis experiments, reaction parameters such as acid concentration and retention time were analyzed to determine the optimal conditions for maximizing monomer sugar yields while keeping inhibitors to a minimum. Total fermentable sugars produced can reach approximately 86% of the theoretical yield when subjected to dilute acid pretreatment (DAP). DAP followed by enzymatic hydrolysis was most effective for 0 wt% acid hydrolysate samples and least efficient for 1 and 2 wt% acid hydrolysate samples. The best hydrolysis scheme for DCS from an industrial point of view is a standalone 60-minute dilute acid hydrolysis at 2 wt% acid concentration.
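The 101% mass balance closure is a simple bookkeeping check: measured component fractions (as wt% of total solids) are summed and compared with 100%. A sketch with the carbohydrate fractions from the abstract and one invented remainder entry standing in for the components not itemized here:

```python
# Fractions are wt% of total solids; the last entry is a hypothetical
# aggregate for protein, lignin, ash, organic acids, glycerol, etc., chosen
# only to reproduce the reported 101% closure.
components = {
    "starch": 5.6,
    "soluble monomer carbohydrates": 12.0,
    "non-starch carbohydrates": 10.0,
    "protein, lignin, ash, organic acids, glycerol, other": 73.4,
}
closure = sum(components.values())
print(f"mass balance closure: {closure:.0f}%")  # -> 101%
```

A closure near 100% (a point or two over or under is typical, given independent assay errors) indicates the component analyses jointly account for essentially all of the measured solids.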
The combined effects of hydrolysis reaction time, temperature and enzyme-to-substrate ratio were studied to develop a hydrolysis process that optimizes the production of amino acids from DCS. Four key hydrolysis pathways were investigated for the production of amino acids using DCS. The first hydrolysis pathway is amino acid analysis using DAP. The second pathway is DAP of DCS followed by protein hydrolysis using proteases [Trypsin, Pronase E (Streptomyces griseus) and Protex 6L]. The third hydrolysis pathway investigated a standalone experiment using proteases (Trypsin, Pronase E, Protex 6L, and Alcalase) on the DCS without any pretreatment. The final pathway investigated the use of Accellerase 1500® and Protex 6L to simultaneously produce fermentable sugars and amino acids over a 24-hour hydrolysis reaction time. The three key objectives of the techno-economic analysis component of this Ph.D. research were: (i) development of a process design for the production of both the sugar and amino acid platforms with DAP using DCS; (ii) a preliminary cost analysis to estimate the initial capital cost and operating cost of this facility; and (iii) a greenhouse gas analysis to understand the environmental impact of this facility. Using Aspen Plus®, a conceptual process design was constructed. Finally, Aspen Plus Economic Analyzer® and SimaPro® software were employed to conduct the cost analysis and the carbon footprint analysis of this process facility, respectively. Another section of my Ph.D. research focused on the life cycle assessment (LCA) of commonly used dairy feeds in the U.S. A greenhouse gas (GHG) emissions analysis was conducted for the cultivation, harvesting, and production of common dairy feeds used for the production of dairy milk in the U.S. The goal was to determine the carbon footprint [grams CO2 equivalents (gCO2e)/kg of dry feed] in the U.S. on a regional basis, identify key inputs, and make recommendations for emissions reduction.
The final section of my Ph.D. research work was an LCA of a single dairy feed mill located in Michigan, USA. The primary goal was to conduct a preliminary assessment of dairy feed mill operations and ultimately determine the GHG emissions for 1 kilogram of milled dairy feed.